Jan 23 23:58:01.200144 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 23 23:58:01.200165 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 23 22:26:47 -00 2026
Jan 23 23:58:01.200173 kernel: KASLR enabled
Jan 23 23:58:01.200179 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 23 23:58:01.200186 kernel: printk: bootconsole [pl11] enabled
Jan 23 23:58:01.200191 kernel: efi: EFI v2.7 by EDK II
Jan 23 23:58:01.200199 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jan 23 23:58:01.200205 kernel: random: crng init done
Jan 23 23:58:01.200211 kernel: ACPI: Early table checksum verification disabled
Jan 23 23:58:01.200217 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 23 23:58:01.200223 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:58:01.200229 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:58:01.200236 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 23 23:58:01.200242 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:58:01.200250 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:58:01.200256 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:58:01.200263 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:58:01.200270 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:58:01.200276 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:58:01.200283 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 23 23:58:01.200289 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:58:01.200296 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 23 23:58:01.200302 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 23 23:58:01.200308 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 23 23:58:01.200314 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 23 23:58:01.200321 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 23 23:58:01.200327 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 23 23:58:01.200333 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 23 23:58:01.200341 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 23 23:58:01.200347 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 23 23:58:01.200354 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 23 23:58:01.200360 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 23 23:58:01.200366 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 23 23:58:01.200373 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 23 23:58:01.200379 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Jan 23 23:58:01.200385 kernel: Zone ranges:
Jan 23 23:58:01.200392 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 23 23:58:01.200398 kernel: DMA32 empty
Jan 23 23:58:01.200404 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 23 23:58:01.200410 kernel: Movable zone start for each node
Jan 23 23:58:01.200420 kernel: Early memory node ranges
Jan 23 23:58:01.200427 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 23 23:58:01.200434 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jan 23 23:58:01.200440 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 23 23:58:01.200447 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 23 23:58:01.200455 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 23 23:58:01.200462 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 23 23:58:01.200468 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 23 23:58:01.200475 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 23 23:58:01.200482 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 23 23:58:01.200489 kernel: psci: probing for conduit method from ACPI.
Jan 23 23:58:01.200496 kernel: psci: PSCIv1.1 detected in firmware.
Jan 23 23:58:01.200502 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 23:58:01.200509 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 23 23:58:01.200516 kernel: psci: SMC Calling Convention v1.4
Jan 23 23:58:01.200522 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 23 23:58:01.200529 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 23 23:58:01.200537 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 23 23:58:01.200544 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 23 23:58:01.200551 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 23:58:01.200557 kernel: Detected PIPT I-cache on CPU0
Jan 23 23:58:01.200564 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 23:58:01.200571 kernel: CPU features: detected: Hardware dirty bit management
Jan 23 23:58:01.200578 kernel: CPU features: detected: Spectre-BHB
Jan 23 23:58:01.200584 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 23 23:58:01.201631 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 23 23:58:01.201640 kernel: CPU features: detected: ARM erratum 1418040
Jan 23 23:58:01.201647 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 23 23:58:01.201657 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 23 23:58:01.201664 kernel: alternatives: applying boot alternatives
Jan 23 23:58:01.201672 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:58:01.201680 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 23:58:01.201687 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 23:58:01.201693 kernel: Fallback order for Node 0: 0
Jan 23 23:58:01.201700 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 23 23:58:01.201708 kernel: Policy zone: Normal
Jan 23 23:58:01.201715 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 23:58:01.201722 kernel: software IO TLB: area num 2.
Jan 23 23:58:01.201729 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jan 23 23:58:01.201737 kernel: Memory: 3982632K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211528K reserved, 0K cma-reserved)
Jan 23 23:58:01.201744 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 23:58:01.201751 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 23:58:01.201758 kernel: rcu: RCU event tracing is enabled.
Jan 23 23:58:01.201765 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 23:58:01.201772 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 23:58:01.201779 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 23:58:01.201786 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 23:58:01.201793 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 23:58:01.201800 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 23:58:01.201806 kernel: GICv3: 960 SPIs implemented
Jan 23 23:58:01.201814 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 23:58:01.201821 kernel: Root IRQ handler: gic_handle_irq
Jan 23 23:58:01.201828 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jan 23 23:58:01.201834 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 23 23:58:01.201841 kernel: ITS: No ITS available, not enabling LPIs
Jan 23 23:58:01.201848 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 23:58:01.201855 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 23 23:58:01.201862 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 23 23:58:01.201869 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 23 23:58:01.201876 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 23 23:58:01.201883 kernel: Console: colour dummy device 80x25
Jan 23 23:58:01.201891 kernel: printk: console [tty1] enabled
Jan 23 23:58:01.201898 kernel: ACPI: Core revision 20230628
Jan 23 23:58:01.201905 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 23 23:58:01.201912 kernel: pid_max: default: 32768 minimum: 301
Jan 23 23:58:01.201919 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 23 23:58:01.201926 kernel: landlock: Up and running.
Jan 23 23:58:01.201933 kernel: SELinux: Initializing.
Jan 23 23:58:01.201940 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:58:01.201947 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:58:01.201955 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:58:01.201962 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:58:01.201969 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Jan 23 23:58:01.201976 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Jan 23 23:58:01.201983 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 23 23:58:01.201990 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 23:58:01.201997 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 23:58:01.202004 kernel: Remapping and enabling EFI services.
Jan 23 23:58:01.202017 kernel: smp: Bringing up secondary CPUs ...
Jan 23 23:58:01.202025 kernel: Detected PIPT I-cache on CPU1
Jan 23 23:58:01.202032 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 23 23:58:01.202039 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 23 23:58:01.202048 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 23 23:58:01.202055 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 23:58:01.202063 kernel: SMP: Total of 2 processors activated.
Jan 23 23:58:01.202070 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 23:58:01.202078 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 23 23:58:01.202087 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 23 23:58:01.202094 kernel: CPU features: detected: CRC32 instructions
Jan 23 23:58:01.202101 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 23 23:58:01.202109 kernel: CPU features: detected: LSE atomic instructions
Jan 23 23:58:01.202116 kernel: CPU features: detected: Privileged Access Never
Jan 23 23:58:01.202123 kernel: CPU: All CPU(s) started at EL1
Jan 23 23:58:01.202130 kernel: alternatives: applying system-wide alternatives
Jan 23 23:58:01.202138 kernel: devtmpfs: initialized
Jan 23 23:58:01.202145 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 23:58:01.202154 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 23:58:01.202161 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 23:58:01.202168 kernel: SMBIOS 3.1.0 present.
Jan 23 23:58:01.202176 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 23 23:58:01.202184 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 23:58:01.202191 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 23:58:01.202198 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 23:58:01.202206 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 23:58:01.202213 kernel: audit: initializing netlink subsys (disabled)
Jan 23 23:58:01.202222 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 23 23:58:01.202229 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 23:58:01.202237 kernel: cpuidle: using governor menu
Jan 23 23:58:01.202244 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 23:58:01.202252 kernel: ASID allocator initialised with 32768 entries
Jan 23 23:58:01.202259 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 23:58:01.202266 kernel: Serial: AMBA PL011 UART driver
Jan 23 23:58:01.202273 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 23 23:58:01.202281 kernel: Modules: 0 pages in range for non-PLT usage
Jan 23 23:58:01.202290 kernel: Modules: 509008 pages in range for PLT usage
Jan 23 23:58:01.202297 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 23:58:01.202305 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 23:58:01.202312 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 23:58:01.202319 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 23:58:01.202327 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 23:58:01.202334 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 23:58:01.202341 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 23:58:01.202348 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 23:58:01.202357 kernel: ACPI: Added _OSI(Module Device)
Jan 23 23:58:01.202365 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 23:58:01.202372 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 23:58:01.202379 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 23:58:01.202386 kernel: ACPI: Interpreter enabled
Jan 23 23:58:01.202394 kernel: ACPI: Using GIC for interrupt routing
Jan 23 23:58:01.202401 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 23 23:58:01.202408 kernel: printk: console [ttyAMA0] enabled
Jan 23 23:58:01.202416 kernel: printk: bootconsole [pl11] disabled
Jan 23 23:58:01.202424 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 23 23:58:01.202432 kernel: iommu: Default domain type: Translated
Jan 23 23:58:01.202439 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 23 23:58:01.202446 kernel: efivars: Registered efivars operations
Jan 23 23:58:01.202453 kernel: vgaarb: loaded
Jan 23 23:58:01.202461 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 23 23:58:01.202468 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 23:58:01.202475 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 23:58:01.202482 kernel: pnp: PnP ACPI init
Jan 23 23:58:01.202491 kernel: pnp: PnP ACPI: found 0 devices
Jan 23 23:58:01.202498 kernel: NET: Registered PF_INET protocol family
Jan 23 23:58:01.202505 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 23:58:01.202513 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 23:58:01.202520 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 23:58:01.202528 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 23:58:01.202535 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 23:58:01.202542 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 23:58:01.202550 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:58:01.202558 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:58:01.202566 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 23:58:01.202573 kernel: PCI: CLS 0 bytes, default 64
Jan 23 23:58:01.202581 kernel: kvm [1]: HYP mode not available
Jan 23 23:58:01.203613 kernel: Initialise system trusted keyrings
Jan 23 23:58:01.203626 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 23:58:01.203634 kernel: Key type asymmetric registered
Jan 23 23:58:01.203641 kernel: Asymmetric key parser 'x509' registered
Jan 23 23:58:01.203648 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 23:58:01.203660 kernel: io scheduler mq-deadline registered
Jan 23 23:58:01.203667 kernel: io scheduler kyber registered
Jan 23 23:58:01.203675 kernel: io scheduler bfq registered
Jan 23 23:58:01.203682 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 23:58:01.203689 kernel: thunder_xcv, ver 1.0
Jan 23 23:58:01.203696 kernel: thunder_bgx, ver 1.0
Jan 23 23:58:01.203704 kernel: nicpf, ver 1.0
Jan 23 23:58:01.203711 kernel: nicvf, ver 1.0
Jan 23 23:58:01.203843 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 23 23:58:01.203916 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T23:58:00 UTC (1769212680)
Jan 23 23:58:01.203926 kernel: efifb: probing for efifb
Jan 23 23:58:01.203934 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 23 23:58:01.203941 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 23 23:58:01.203949 kernel: efifb: scrolling: redraw
Jan 23 23:58:01.203956 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 23 23:58:01.203964 kernel: Console: switching to colour frame buffer device 128x48
Jan 23 23:58:01.203971 kernel: fb0: EFI VGA frame buffer device
Jan 23 23:58:01.203980 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 23 23:58:01.203988 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 23:58:01.203995 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Jan 23 23:58:01.204002 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 23 23:58:01.204010 kernel: watchdog: Hard watchdog permanently disabled
Jan 23 23:58:01.204017 kernel: NET: Registered PF_INET6 protocol family
Jan 23 23:58:01.204024 kernel: Segment Routing with IPv6
Jan 23 23:58:01.204031 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 23:58:01.204039 kernel: NET: Registered PF_PACKET protocol family
Jan 23 23:58:01.204047 kernel: Key type dns_resolver registered
Jan 23 23:58:01.204054 kernel: registered taskstats version 1
Jan 23 23:58:01.204061 kernel: Loading compiled-in X.509 certificates
Jan 23 23:58:01.204069 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: e1080b1efd8e2d5332b6814128fba42796535445'
Jan 23 23:58:01.204076 kernel: Key type .fscrypt registered
Jan 23 23:58:01.204083 kernel: Key type fscrypt-provisioning registered
Jan 23 23:58:01.204090 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 23:58:01.204097 kernel: ima: Allocated hash algorithm: sha1
Jan 23 23:58:01.204105 kernel: ima: No architecture policies found
Jan 23 23:58:01.204113 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 23 23:58:01.204121 kernel: clk: Disabling unused clocks
Jan 23 23:58:01.204128 kernel: Freeing unused kernel memory: 39424K
Jan 23 23:58:01.204135 kernel: Run /init as init process
Jan 23 23:58:01.204142 kernel: with arguments:
Jan 23 23:58:01.204149 kernel: /init
Jan 23 23:58:01.204156 kernel: with environment:
Jan 23 23:58:01.204163 kernel: HOME=/
Jan 23 23:58:01.204171 kernel: TERM=linux
Jan 23 23:58:01.204180 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 23 23:58:01.204191 systemd[1]: Detected virtualization microsoft.
Jan 23 23:58:01.204199 systemd[1]: Detected architecture arm64.
Jan 23 23:58:01.204206 systemd[1]: Running in initrd.
Jan 23 23:58:01.204214 systemd[1]: No hostname configured, using default hostname.
Jan 23 23:58:01.204221 systemd[1]: Hostname set to .
Jan 23 23:58:01.204230 systemd[1]: Initializing machine ID from random generator.
Jan 23 23:58:01.204240 systemd[1]: Queued start job for default target initrd.target.
Jan 23 23:58:01.204248 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:58:01.204256 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:58:01.204265 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 23:58:01.204272 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 23:58:01.204280 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 23:58:01.204288 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 23:58:01.204298 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 23:58:01.204307 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 23:58:01.204315 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:58:01.204323 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:58:01.204331 systemd[1]: Reached target paths.target - Path Units.
Jan 23 23:58:01.204338 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 23:58:01.204346 systemd[1]: Reached target swap.target - Swaps.
Jan 23 23:58:01.204354 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 23:58:01.204362 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 23:58:01.204371 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 23:58:01.204379 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 23:58:01.204387 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 23 23:58:01.204395 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:58:01.204403 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:58:01.204411 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:58:01.204419 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 23:58:01.204427 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 23:58:01.204436 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 23:58:01.204444 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 23:58:01.204452 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 23:58:01.204460 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 23:58:01.204468 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 23:58:01.204490 systemd-journald[217]: Collecting audit messages is disabled.
Jan 23 23:58:01.204511 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:58:01.204519 systemd-journald[217]: Journal started
Jan 23 23:58:01.204538 systemd-journald[217]: Runtime Journal (/run/log/journal/e16717c89c884cea9486a351162b1531) is 8.0M, max 78.5M, 70.5M free.
Jan 23 23:58:01.207555 systemd-modules-load[218]: Inserted module 'overlay'
Jan 23 23:58:01.217821 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 23:58:01.231606 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 23:58:01.234206 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 23:58:01.245022 kernel: Bridge firewalling registered
Jan 23 23:58:01.240136 systemd-modules-load[218]: Inserted module 'br_netfilter'
Jan 23 23:58:01.241036 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:58:01.250570 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 23:58:01.258017 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:58:01.270986 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:58:01.285831 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:58:01.299766 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 23:58:01.311906 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 23:58:01.324300 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 23:58:01.335816 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:58:01.352658 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:58:01.357504 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 23:58:01.367469 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:58:01.386821 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 23:58:01.399318 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 23:58:01.404723 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 23:58:01.423721 dracut-cmdline[252]: dracut-dracut-053
Jan 23 23:58:01.429443 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:58:01.460384 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:58:01.480482 systemd-resolved[255]: Positive Trust Anchors:
Jan 23 23:58:01.480497 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 23:58:01.480529 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 23:58:01.486346 systemd-resolved[255]: Defaulting to hostname 'linux'.
Jan 23 23:58:01.487204 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 23:58:01.544525 kernel: SCSI subsystem initialized
Jan 23 23:58:01.499239 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:58:01.552599 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 23:58:01.562616 kernel: iscsi: registered transport (tcp)
Jan 23 23:58:01.578794 kernel: iscsi: registered transport (qla4xxx)
Jan 23 23:58:01.578824 kernel: QLogic iSCSI HBA Driver
Jan 23 23:58:01.615955 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 23:58:01.631744 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 23:58:01.659602 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 23:58:01.659656 kernel: device-mapper: uevent: version 1.0.3
Jan 23 23:58:01.664716 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 23 23:58:01.711619 kernel: raid6: neonx8 gen() 15802 MB/s
Jan 23 23:58:01.730596 kernel: raid6: neonx4 gen() 15698 MB/s
Jan 23 23:58:01.749598 kernel: raid6: neonx2 gen() 13294 MB/s
Jan 23 23:58:01.769594 kernel: raid6: neonx1 gen() 10545 MB/s
Jan 23 23:58:01.788594 kernel: raid6: int64x8 gen() 6981 MB/s
Jan 23 23:58:01.807594 kernel: raid6: int64x4 gen() 7374 MB/s
Jan 23 23:58:01.827599 kernel: raid6: int64x2 gen() 6146 MB/s
Jan 23 23:58:01.849326 kernel: raid6: int64x1 gen() 5071 MB/s
Jan 23 23:58:01.849336 kernel: raid6: using algorithm neonx8 gen() 15802 MB/s
Jan 23 23:58:01.871412 kernel: raid6: .... xor() 12044 MB/s, rmw enabled
Jan 23 23:58:01.871432 kernel: raid6: using neon recovery algorithm
Jan 23 23:58:01.878601 kernel: xor: measuring software checksum speed
Jan 23 23:58:01.884160 kernel: 8regs : 18960 MB/sec
Jan 23 23:58:01.884176 kernel: 32regs : 19707 MB/sec
Jan 23 23:58:01.887715 kernel: arm64_neon : 27087 MB/sec
Jan 23 23:58:01.891127 kernel: xor: using function: arm64_neon (27087 MB/sec)
Jan 23 23:58:01.940768 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 23:58:01.951148 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 23:58:01.963705 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:58:01.983375 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Jan 23 23:58:01.987759 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:58:02.004691 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 23:58:02.025025 dracut-pre-trigger[453]: rd.md=0: removing MD RAID activation
Jan 23 23:58:02.053157 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 23:58:02.065762 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 23:58:02.104431 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:58:02.123754 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 23:58:02.144109 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 23:58:02.152216 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 23:58:02.167270 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:58:02.182737 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 23:58:02.205778 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 23:58:02.222474 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 23:58:02.222649 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:58:02.240794 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:58:02.252782 kernel: hv_vmbus: Vmbus version:5.3
Jan 23 23:58:02.252805 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 23 23:58:02.258568 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:58:02.258827 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:58:02.277904 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:58:02.308435 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 23 23:58:02.308469 kernel: hv_vmbus: registering driver hv_netvsc
Jan 23 23:58:02.308479 kernel: hv_vmbus: registering driver hv_storvsc
Jan 23 23:58:02.308488 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 23 23:58:02.308504 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 23 23:58:02.317454 kernel: hv_vmbus: registering driver hid_hyperv
Jan 23 23:58:02.317952 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:58:02.346645 kernel: scsi host1: storvsc_host_t
Jan 23 23:58:02.346799 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 23 23:58:02.346810 kernel: scsi host0: storvsc_host_t
Jan 23 23:58:02.346915 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 23 23:58:02.323377 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 23:58:02.375729 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 23 23:58:02.375901 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jan 23 23:58:02.376014 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 23 23:58:02.375940 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:58:02.394525 kernel: PTP clock support registered
Jan 23 23:58:02.394544 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 23:58:02.376035 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:58:02.413753 kernel: hv_netvsc 002248be-51da-0022-48be-51da002248be eth0: VF slot 1 added
Jan 23 23:58:02.413900 kernel: hv_utils: Registering HyperV Utility Driver
Jan 23 23:58:02.395112 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:58:02.432659 kernel: hv_vmbus: registering driver hv_utils
Jan 23 23:58:02.432681 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 23 23:58:02.440801 kernel: hv_vmbus: registering driver hv_pci
Jan 23 23:58:02.440840 kernel: hv_utils: Heartbeat IC version 3.0
Jan 23 23:58:02.444572 kernel: hv_utils: Shutdown IC version 3.2
Jan 23 23:58:02.687891 kernel: hv_utils: TimeSync IC version 4.0
Jan 23 23:58:02.687929 kernel: hv_pci c272bd33-5bf5-4ddf-8bda-36c29ee11ce1: PCI VMBus probing: Using version 0x10004
Jan 23 23:58:02.683512 systemd-resolved[255]: Clock change detected. Flushing caches.
Jan 23 23:58:02.692931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:58:02.742755 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 23 23:58:02.742935 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 23 23:58:02.743028 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 23 23:58:02.743113 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 23 23:58:02.743196 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 23 23:58:02.743282 kernel: hv_pci c272bd33-5bf5-4ddf-8bda-36c29ee11ce1: PCI host bridge to bus 5bf5:00
Jan 23 23:58:02.743373 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 23:58:02.743383 kernel: pci_bus 5bf5:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 23 23:58:02.743530 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 23 23:58:02.743620 kernel: pci_bus 5bf5:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 23 23:58:02.715520 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:58:02.758472 kernel: pci 5bf5:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 23 23:58:02.765344 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#211 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 23 23:58:02.773438 kernel: pci 5bf5:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 23 23:58:02.777417 kernel: pci 5bf5:00:02.0: enabling Extended Tags
Jan 23 23:58:02.796420 kernel: pci 5bf5:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5bf5:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 23 23:58:02.808408 kernel: pci_bus 5bf5:00: busn_res: [bus 00-ff] end is updated to 00
Jan 23 23:58:02.808583 kernel: pci 5bf5:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 23 23:58:02.809624 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:58:02.834477 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 23 23:58:02.858240 kernel: mlx5_core 5bf5:00:02.0: enabling device (0000 -> 0002)
Jan 23 23:58:02.864408 kernel: mlx5_core 5bf5:00:02.0: firmware version: 16.30.5026
Jan 23 23:58:03.063093 kernel: hv_netvsc 002248be-51da-0022-48be-51da002248be eth0: VF registering: eth1
Jan 23 23:58:03.063384 kernel: mlx5_core 5bf5:00:02.0 eth1: joined to eth0
Jan 23 23:58:03.069409 kernel: mlx5_core 5bf5:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 23 23:58:03.083619 kernel: mlx5_core 5bf5:00:02.0 enP23541s1: renamed from eth1
Jan 23 23:58:03.239683 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 23 23:58:03.291570 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (490)
Jan 23 23:58:03.304201 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 23 23:58:03.345426 kernel: BTRFS: device fsid 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (486)
Jan 23 23:58:03.357723 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 23 23:58:03.363320 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 23 23:58:03.387538 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 23:58:03.398618 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 23 23:58:03.421412 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 23:58:03.427407 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 23:58:04.443439 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 23:58:04.444207 disk-uuid[611]: The operation has completed successfully.
Jan 23 23:58:04.506169 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 23:58:04.508622 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 23:58:04.535533 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 23:58:04.548320 sh[724]: Success
Jan 23 23:58:04.584418 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 23 23:58:04.863765 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 23:58:04.871506 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 23:58:04.876683 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 23:58:04.908826 kernel: BTRFS info (device dm-0): first mount of filesystem 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe
Jan 23 23:58:04.908883 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:58:04.914202 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 23 23:58:04.918295 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 23:58:04.921809 kernel: BTRFS info (device dm-0): using free space tree
Jan 23 23:58:05.279033 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 23:58:05.283452 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 23:58:05.303670 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 23:58:05.312813 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 23:58:05.336642 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:58:05.336685 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:58:05.340310 kernel: BTRFS info (device sda6): using free space tree
Jan 23 23:58:05.378425 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 23 23:58:05.386209 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 23 23:58:05.395612 kernel: BTRFS info (device sda6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:58:05.401794 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 23:58:05.412593 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 23:58:05.426084 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 23:58:05.442511 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 23:58:05.465961 systemd-networkd[908]: lo: Link UP
Jan 23 23:58:05.465971 systemd-networkd[908]: lo: Gained carrier
Jan 23 23:58:05.467471 systemd-networkd[908]: Enumeration completed
Jan 23 23:58:05.469016 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 23:58:05.469230 systemd-networkd[908]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:58:05.469233 systemd-networkd[908]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 23:58:05.474008 systemd[1]: Reached target network.target - Network.
Jan 23 23:58:05.551413 kernel: mlx5_core 5bf5:00:02.0 enP23541s1: Link up
Jan 23 23:58:05.589401 kernel: hv_netvsc 002248be-51da-0022-48be-51da002248be eth0: Data path switched to VF: enP23541s1
Jan 23 23:58:05.589861 systemd-networkd[908]: enP23541s1: Link UP
Jan 23 23:58:05.589942 systemd-networkd[908]: eth0: Link UP
Jan 23 23:58:05.590084 systemd-networkd[908]: eth0: Gained carrier
Jan 23 23:58:05.590092 systemd-networkd[908]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:58:05.608734 systemd-networkd[908]: enP23541s1: Gained carrier
Jan 23 23:58:05.622424 systemd-networkd[908]: eth0: DHCPv4 address 10.200.20.20/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 23 23:58:06.368087 ignition[895]: Ignition 2.19.0
Jan 23 23:58:06.368097 ignition[895]: Stage: fetch-offline
Jan 23 23:58:06.371212 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 23:58:06.368133 ignition[895]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:58:06.384585 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 23:58:06.368140 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:58:06.368224 ignition[895]: parsed url from cmdline: ""
Jan 23 23:58:06.368227 ignition[895]: no config URL provided
Jan 23 23:58:06.368231 ignition[895]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 23:58:06.368238 ignition[895]: no config at "/usr/lib/ignition/user.ign"
Jan 23 23:58:06.368242 ignition[895]: failed to fetch config: resource requires networking
Jan 23 23:58:06.368403 ignition[895]: Ignition finished successfully
Jan 23 23:58:06.405477 ignition[922]: Ignition 2.19.0
Jan 23 23:58:06.405482 ignition[922]: Stage: fetch
Jan 23 23:58:06.405642 ignition[922]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:58:06.405651 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:58:06.405733 ignition[922]: parsed url from cmdline: ""
Jan 23 23:58:06.405736 ignition[922]: no config URL provided
Jan 23 23:58:06.405740 ignition[922]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 23:58:06.405747 ignition[922]: no config at "/usr/lib/ignition/user.ign"
Jan 23 23:58:06.405765 ignition[922]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 23 23:58:06.538903 ignition[922]: GET result: OK
Jan 23 23:58:06.539007 ignition[922]: config has been read from IMDS userdata
Jan 23 23:58:06.539049 ignition[922]: parsing config with SHA512: 74c8fb60f2dd99e442dcc763d6089bd21c2fbe3bd5b3438ca1cffc09da48d8fe5743fd586764a6f1333af95627e50da32845e56afcedbc49c990b25ef3f6be92
Jan 23 23:58:06.544907 unknown[922]: fetched base config from "system"
Jan 23 23:58:06.545245 ignition[922]: fetch: fetch complete
Jan 23 23:58:06.544914 unknown[922]: fetched base config from "system"
Jan 23 23:58:06.545249 ignition[922]: fetch: fetch passed
Jan 23 23:58:06.544919 unknown[922]: fetched user config from "azure"
Jan 23 23:58:06.545282 ignition[922]: Ignition finished successfully
Jan 23 23:58:06.549048 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 23:58:06.571597 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 23:58:06.592071 ignition[929]: Ignition 2.19.0
Jan 23 23:58:06.592082 ignition[929]: Stage: kargs
Jan 23 23:58:06.592240 ignition[929]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:58:06.599296 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 23:58:06.592253 ignition[929]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:58:06.593223 ignition[929]: kargs: kargs passed
Jan 23 23:58:06.593266 ignition[929]: Ignition finished successfully
Jan 23 23:58:06.622824 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 23:58:06.637273 ignition[935]: Ignition 2.19.0
Jan 23 23:58:06.637286 ignition[935]: Stage: disks
Jan 23 23:58:06.642315 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 23:58:06.637464 ignition[935]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:58:06.647477 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 23:58:06.637475 ignition[935]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:58:06.656243 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 23:58:06.638356 ignition[935]: disks: disks passed
Jan 23 23:58:06.665214 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 23:58:06.638489 ignition[935]: Ignition finished successfully
Jan 23 23:58:06.674463 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 23:58:06.683614 systemd[1]: Reached target basic.target - Basic System.
Jan 23 23:58:06.708636 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 23:58:06.779660 systemd-fsck[944]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 23 23:58:06.788922 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 23:58:06.802560 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 23:58:06.855420 kernel: EXT4-fs (sda9): mounted filesystem 4f5f6971-6639-4171-835a-63d34aadb0e5 r/w with ordered data mode. Quota mode: none.
Jan 23 23:58:06.855815 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 23:58:06.859749 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 23:58:06.903480 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:58:06.926409 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (955)
Jan 23 23:58:06.926500 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 23:58:06.948107 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:58:06.948126 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:58:06.948136 kernel: BTRFS info (device sda6): using free space tree
Jan 23 23:58:06.949594 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 23 23:58:06.961122 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 23:58:06.961159 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 23:58:06.977248 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 23:58:06.994626 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 23:58:07.000053 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 23 23:58:07.005565 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:58:07.369553 systemd-networkd[908]: eth0: Gained IPv6LL
Jan 23 23:58:07.459998 coreos-metadata[957]: Jan 23 23:58:07.459 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 23 23:58:07.468375 coreos-metadata[957]: Jan 23 23:58:07.468 INFO Fetch successful
Jan 23 23:58:07.473384 coreos-metadata[957]: Jan 23 23:58:07.473 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 23 23:58:07.492990 coreos-metadata[957]: Jan 23 23:58:07.492 INFO Fetch successful
Jan 23 23:58:07.509017 coreos-metadata[957]: Jan 23 23:58:07.508 INFO wrote hostname ci-4081.3.6-n-2a642b76b3 to /sysroot/etc/hostname
Jan 23 23:58:07.517380 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 23 23:58:07.695018 initrd-setup-root[984]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 23:58:07.716949 initrd-setup-root[991]: cut: /sysroot/etc/group: No such file or directory
Jan 23 23:58:07.739357 initrd-setup-root[998]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 23:58:07.746775 initrd-setup-root[1005]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 23:58:08.979005 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 23:58:08.992629 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 23:58:09.000544 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 23:58:09.016823 kernel: BTRFS info (device sda6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:58:09.015158 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 23:58:09.034424 ignition[1072]: INFO : Ignition 2.19.0
Jan 23 23:58:09.034424 ignition[1072]: INFO : Stage: mount
Jan 23 23:58:09.034424 ignition[1072]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:58:09.034424 ignition[1072]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:58:09.052036 ignition[1072]: INFO : mount: mount passed
Jan 23 23:58:09.052036 ignition[1072]: INFO : Ignition finished successfully
Jan 23 23:58:09.042993 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 23:58:09.070553 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 23:58:09.083411 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 23:58:09.098588 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:58:09.117404 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1084)
Jan 23 23:58:09.127989 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:58:09.128005 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:58:09.131365 kernel: BTRFS info (device sda6): using free space tree
Jan 23 23:58:09.139406 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 23 23:58:09.140042 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:58:09.163682 ignition[1101]: INFO : Ignition 2.19.0
Jan 23 23:58:09.163682 ignition[1101]: INFO : Stage: files
Jan 23 23:58:09.170085 ignition[1101]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:58:09.170085 ignition[1101]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:58:09.170085 ignition[1101]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 23:58:09.184056 ignition[1101]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 23:58:09.184056 ignition[1101]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 23:58:09.286847 ignition[1101]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 23:58:09.293127 ignition[1101]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 23:58:09.293127 ignition[1101]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 23:58:09.287208 unknown[1101]: wrote ssh authorized keys file for user: core
Jan 23 23:58:09.308360 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 23 23:58:09.308360 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jan 23 23:58:09.340732 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 23:58:09.474420 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 23 23:58:09.474420 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jan 23 23:58:09.906077 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 23 23:58:10.209222 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 23:58:10.209222 ignition[1101]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 23 23:58:10.224421 ignition[1101]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:58:10.233681 ignition[1101]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:58:10.233681 ignition[1101]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 23 23:58:10.233681 ignition[1101]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 23:58:10.233681 ignition[1101]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 23:58:10.233681 ignition[1101]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:58:10.233681 ignition[1101]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:58:10.233681 ignition[1101]: INFO : files: files passed
Jan 23 23:58:10.233681 ignition[1101]: INFO : Ignition finished successfully
Jan 23 23:58:10.239669 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 23:58:10.253605 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 23:58:10.266547 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 23:58:10.323874 initrd-setup-root-after-ignition[1129]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:58:10.323874 initrd-setup-root-after-ignition[1129]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:58:10.277202 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 23:58:10.342430 initrd-setup-root-after-ignition[1133]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:58:10.277288 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 23:58:10.338267 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 23:58:10.348023 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 23:58:10.374609 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 23:58:10.401102 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 23:58:10.401219 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 23:58:10.410861 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 23:58:10.420642 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 23:58:10.429410 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 23:58:10.431547 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 23:58:10.458795 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:58:10.478540 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 23:58:10.496151 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:58:10.501359 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:58:10.511544 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 23:58:10.520359 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 23:58:10.520422 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:58:10.533547 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 23:58:10.542948 systemd[1]: Stopped target basic.target - Basic System. Jan 23 23:58:10.551343 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 23:58:10.559886 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 23:58:10.569352 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 23:58:10.579012 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 23:58:10.588031 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:58:10.597488 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 23:58:10.607248 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 23:58:10.615686 systemd[1]: Stopped target swap.target - Swaps. Jan 23 23:58:10.623364 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 23:58:10.623432 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:58:10.635386 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:58:10.644557 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:58:10.654350 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 23:58:10.659108 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:58:10.664670 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 23:58:10.664731 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 23:58:10.679780 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 23:58:10.679826 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:58:10.689294 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 23:58:10.689331 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 23:58:10.698008 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 23 23:58:10.698044 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 23:58:10.723565 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 23 23:58:10.736543 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 23:58:10.736612 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:58:10.762206 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 23:58:10.772525 ignition[1154]: INFO : Ignition 2.19.0 Jan 23 23:58:10.772525 ignition[1154]: INFO : Stage: umount Jan 23 23:58:10.772525 ignition[1154]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:58:10.772525 ignition[1154]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:58:10.772525 ignition[1154]: INFO : umount: umount passed Jan 23 23:58:10.772525 ignition[1154]: INFO : Ignition finished successfully Jan 23 23:58:10.771601 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 23:58:10.771659 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:58:10.777154 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 23:58:10.777191 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:58:10.791549 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 23:58:10.792093 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 23:58:10.792178 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 23:58:10.803621 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 23:58:10.805418 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 23:58:10.823340 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 23:58:10.823447 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 23:58:10.832771 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 23:58:10.832816 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 23:58:10.837362 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 23:58:10.837408 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 23:58:10.847432 systemd[1]: Stopped target network.target - Network. Jan 23 23:58:10.851205 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 23:58:10.851245 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:58:10.860655 systemd[1]: Stopped target paths.target - Path Units. Jan 23 23:58:10.868960 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 23:58:10.873154 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:58:10.878823 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 23:58:10.888150 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 23:58:10.897404 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 23:58:10.897459 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:58:10.905746 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 23:58:10.905779 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:58:10.914400 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 23:58:10.914448 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 23:58:10.923485 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 23:58:10.923521 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
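Entries like the umount-stage block above all follow one syslog-style shape: timestamp, emitting process (optionally with a pid), and message. A small self-contained sketch for pulling those fields out of captures like this one; the regex is an assumption fitted to this capture's format, not an official journald grammar:

import re

# Assumed line shape: "Mon DD HH:MM:SS.micros proc[pid]: message"
ENTRY = re.compile(
    r"(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d{6}) "   # timestamp
    r"(?P<proc>[\w.-]+)(?:\[(?P<pid>\d+)\])?: "        # emitter + optional pid
    r"(?P<msg>.*)"                                     # rest of the entry
)

def parse(line):
    m = ENTRY.match(line)
    return m.groupdict() if m else None

sample = "Jan 23 23:58:10.772525 ignition[1154]: INFO : umount: umount passed"
print(parse(sample))
# {'ts': 'Jan 23 23:58:10.772525', 'proc': 'ignition', 'pid': '1154',
#  'msg': 'INFO : umount: umount passed'}

Running every line of this section through parse() makes it straightforward to isolate, say, only the ignition[1154] umount stage or to count the systemd unit stops during teardown.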
Jan 23 23:58:10.932261 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 23:58:10.945455 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 23:58:10.953667 systemd-networkd[908]: eth0: DHCPv6 lease lost Jan 23 23:58:10.955099 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 23:58:10.956426 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 23:58:10.964184 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 23:58:11.128993 kernel: hv_netvsc 002248be-51da-0022-48be-51da002248be eth0: Data path switched from VF: enP23541s1 Jan 23 23:58:10.964245 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:58:10.987619 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 23:58:10.995922 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 23:58:10.995987 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:58:11.007901 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:58:11.026003 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 23:58:11.026095 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 23:58:11.035800 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 23:58:11.037427 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:58:11.057695 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 23:58:11.057796 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 23:58:11.067212 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 23:58:11.067245 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:58:11.078041 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 23:58:11.078091 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:58:11.092016 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 23:58:11.092097 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 23:58:11.104203 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:58:11.104250 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:58:11.138604 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 23:58:11.151514 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:58:11.151574 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:58:11.162499 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 23:58:11.162541 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 23:58:11.173268 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 23:58:11.173312 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:58:11.183028 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 23:58:11.183073 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:58:11.192975 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:58:11.193064 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 23 23:58:11.202684 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 23:58:11.202792 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 23:58:11.212186 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 23:58:11.212263 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 23:58:11.220792 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 23:58:11.220864 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 23:58:11.235612 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 23:58:11.244081 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 23:58:11.244150 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 23:58:11.270629 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 23:58:11.423464 systemd[1]: Switching root. Jan 23 23:58:11.512640 systemd-journald[217]: Journal stopped Jan 23 23:58:01.200144 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 23 23:58:01.200165 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 23 22:26:47 -00 2026 Jan 23 23:58:01.200173 kernel: KASLR enabled Jan 23 23:58:01.200179 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jan 23 23:58:01.200186 kernel: printk: bootconsole [pl11] enabled Jan 23 23:58:01.200191 kernel: efi: EFI v2.7 by EDK II Jan 23 23:58:01.200199 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Jan 23 23:58:01.200205 kernel: random: crng init done Jan 23 23:58:01.200211 kernel: ACPI: Early table checksum verification disabled Jan 23 23:58:01.200217 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jan 23 23:58:01.200223 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:58:01.200229 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:58:01.200236 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 23 23:58:01.200242 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:58:01.200250 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:58:01.200256 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:58:01.200263 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:58:01.200270 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:58:01.200276 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:58:01.200283 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 23 23:58:01.200289 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:58:01.200296 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 23 23:58:01.200302 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 23 23:58:01.200308 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jan 23 23:58:01.200314 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] 
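Because the stamps are microsecond-resolution wall-clock times, span timing falls out directly; for example, from the start of the files stage (23:58:09.163682) to "Journal stopped" (23:58:11.512640) is about 2.35 s. A sketch; the capture's timestamps carry no year, so strptime's default of 1900 is assumed, which is harmless for a same-day delta:

from datetime import datetime

fmt = "%b %d %H:%M:%S.%f"
t0 = datetime.strptime("Jan 23 23:58:09.163682", fmt)  # ignition files stage begins
t1 = datetime.strptime("Jan 23 23:58:11.512640", fmt)  # systemd-journald: Journal stopped
print((t1 - t0).total_seconds())  # 2.348958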
Jan 23 23:58:01.200321 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jan 23 23:58:01.200327 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jan 23 23:58:01.200333 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jan 23 23:58:01.200341 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jan 23 23:58:01.200347 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jan 23 23:58:01.200354 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jan 23 23:58:01.200360 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jan 23 23:58:01.200366 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jan 23 23:58:01.200373 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jan 23 23:58:01.200379 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff] Jan 23 23:58:01.200385 kernel: Zone ranges: Jan 23 23:58:01.200392 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jan 23 23:58:01.200398 kernel: DMA32 empty Jan 23 23:58:01.200404 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 23 23:58:01.200410 kernel: Movable zone start for each node Jan 23 23:58:01.200420 kernel: Early memory node ranges Jan 23 23:58:01.200427 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 23 23:58:01.200434 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jan 23 23:58:01.200440 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jan 23 23:58:01.200447 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jan 23 23:58:01.200455 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jan 23 23:58:01.200462 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jan 23 23:58:01.200468 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 23 23:58:01.200475 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 23 23:58:01.200482 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 23 23:58:01.200489 kernel: psci: probing for conduit method from ACPI. Jan 23 23:58:01.200496 kernel: psci: PSCIv1.1 detected in firmware. Jan 23 23:58:01.200502 kernel: psci: Using standard PSCI v0.2 function IDs Jan 23 23:58:01.200509 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jan 23 23:58:01.200516 kernel: psci: SMC Calling Convention v1.4 Jan 23 23:58:01.200522 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 23 23:58:01.200529 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 23 23:58:01.200537 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880 Jan 23 23:58:01.200544 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096 Jan 23 23:58:01.200551 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 23 23:58:01.200557 kernel: Detected PIPT I-cache on CPU0 Jan 23 23:58:01.200564 kernel: CPU features: detected: GIC system register CPU interface Jan 23 23:58:01.200571 kernel: CPU features: detected: Hardware dirty bit management Jan 23 23:58:01.200578 kernel: CPU features: detected: Spectre-BHB Jan 23 23:58:01.200584 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 23 23:58:01.201631 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 23 23:58:01.201640 kernel: CPU features: detected: ARM erratum 1418040 Jan 23 23:58:01.201647 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jan 23 23:58:01.201657 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 23 23:58:01.201664 kernel: alternatives: applying boot alternatives Jan 23 23:58:01.201672 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09 Jan 23 23:58:01.201680 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 23:58:01.201687 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 23:58:01.201693 kernel: Fallback order for Node 0: 0 Jan 23 23:58:01.201700 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jan 23 23:58:01.201708 kernel: Policy zone: Normal Jan 23 23:58:01.201715 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 23:58:01.201722 kernel: software IO TLB: area num 2. Jan 23 23:58:01.201729 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 23 23:58:01.201737 kernel: Memory: 3982632K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211528K reserved, 0K cma-reserved) Jan 23 23:58:01.201744 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 23 23:58:01.201751 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 23:58:01.201758 kernel: rcu: RCU event tracing is enabled. Jan 23 23:58:01.201765 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 23 23:58:01.201772 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 23:58:01.201779 kernel: Tracing variant of Tasks RCU enabled. Jan 23 23:58:01.201786 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 23 23:58:01.201793 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 23 23:58:01.201800 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 23 23:58:01.201806 kernel: GICv3: 960 SPIs implemented Jan 23 23:58:01.201814 kernel: GICv3: 0 Extended SPIs implemented Jan 23 23:58:01.201821 kernel: Root IRQ handler: gic_handle_irq Jan 23 23:58:01.201828 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 23 23:58:01.201834 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 23 23:58:01.201841 kernel: ITS: No ITS available, not enabling LPIs Jan 23 23:58:01.201848 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 23 23:58:01.201855 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 23 23:58:01.201862 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 23 23:58:01.201869 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 23 23:58:01.201876 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 23 23:58:01.201883 kernel: Console: colour dummy device 80x25 Jan 23 23:58:01.201891 kernel: printk: console [tty1] enabled Jan 23 23:58:01.201898 kernel: ACPI: Core revision 20230628 Jan 23 23:58:01.201905 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 23 23:58:01.201912 kernel: pid_max: default: 32768 minimum: 301 Jan 23 23:58:01.201919 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 23 23:58:01.201926 kernel: landlock: Up and running. Jan 23 23:58:01.201933 kernel: SELinux: Initializing. Jan 23 23:58:01.201940 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 23:58:01.201947 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 23:58:01.201955 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 23:58:01.201962 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 23:58:01.201969 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Jan 23 23:58:01.201976 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 23 23:58:01.201983 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 23 23:58:01.201990 kernel: rcu: Hierarchical SRCU implementation. Jan 23 23:58:01.201997 kernel: rcu: Max phase no-delay instances is 400. Jan 23 23:58:01.202004 kernel: Remapping and enabling EFI services. Jan 23 23:58:01.202017 kernel: smp: Bringing up secondary CPUs ... Jan 23 23:58:01.202025 kernel: Detected PIPT I-cache on CPU1 Jan 23 23:58:01.202032 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 23 23:58:01.202039 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 23 23:58:01.202048 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 23 23:58:01.202055 kernel: smp: Brought up 1 node, 2 CPUs Jan 23 23:58:01.202063 kernel: SMP: Total of 2 processors activated. 
Jan 23 23:58:01.202070 kernel: CPU features: detected: 32-bit EL0 Support Jan 23 23:58:01.202078 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 23 23:58:01.202087 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 23 23:58:01.202094 kernel: CPU features: detected: CRC32 instructions Jan 23 23:58:01.202101 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 23 23:58:01.202109 kernel: CPU features: detected: LSE atomic instructions Jan 23 23:58:01.202116 kernel: CPU features: detected: Privileged Access Never Jan 23 23:58:01.202123 kernel: CPU: All CPU(s) started at EL1 Jan 23 23:58:01.202130 kernel: alternatives: applying system-wide alternatives Jan 23 23:58:01.202138 kernel: devtmpfs: initialized Jan 23 23:58:01.202145 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 23:58:01.202154 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 23 23:58:01.202161 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 23:58:01.202168 kernel: SMBIOS 3.1.0 present. Jan 23 23:58:01.202176 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 23 23:58:01.202184 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 23:58:01.202191 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 23 23:58:01.202198 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 23 23:58:01.202206 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 23 23:58:01.202213 kernel: audit: initializing netlink subsys (disabled) Jan 23 23:58:01.202222 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 23 23:58:01.202229 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 23:58:01.202237 kernel: cpuidle: using governor menu Jan 23 23:58:01.202244 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 23 23:58:01.202252 kernel: ASID allocator initialised with 32768 entries Jan 23 23:58:01.202259 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 23:58:01.202266 kernel: Serial: AMBA PL011 UART driver Jan 23 23:58:01.202273 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 23 23:58:01.202281 kernel: Modules: 0 pages in range for non-PLT usage Jan 23 23:58:01.202290 kernel: Modules: 509008 pages in range for PLT usage Jan 23 23:58:01.202297 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 23:58:01.202305 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 23:58:01.202312 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 23 23:58:01.202319 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 23 23:58:01.202327 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 23:58:01.202334 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 23:58:01.202341 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 23 23:58:01.202348 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 23 23:58:01.202357 kernel: ACPI: Added _OSI(Module Device) Jan 23 23:58:01.202365 kernel: ACPI: Added _OSI(Processor Device) Jan 23 23:58:01.202372 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 23:58:01.202379 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 23:58:01.202386 kernel: ACPI: Interpreter enabled Jan 23 23:58:01.202394 kernel: ACPI: Using GIC for interrupt routing Jan 23 23:58:01.202401 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 23 23:58:01.202408 kernel: printk: console [ttyAMA0] enabled Jan 23 23:58:01.202416 kernel: printk: bootconsole [pl11] disabled Jan 23 23:58:01.202424 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 23 23:58:01.202432 kernel: iommu: Default domain type: Translated Jan 23 23:58:01.202439 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 23 23:58:01.202446 kernel: efivars: Registered efivars operations Jan 23 23:58:01.202453 kernel: vgaarb: loaded Jan 23 23:58:01.202461 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 23 23:58:01.202468 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 23:58:01.202475 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 23:58:01.202482 kernel: pnp: PnP ACPI init Jan 23 23:58:01.202491 kernel: pnp: PnP ACPI: found 0 devices Jan 23 23:58:01.202498 kernel: NET: Registered PF_INET protocol family Jan 23 23:58:01.202505 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 23:58:01.202513 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 23:58:01.202520 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 23:58:01.202528 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 23:58:01.202535 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 23:58:01.202542 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 23:58:01.202550 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 23:58:01.202558 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 23:58:01.202566 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 
23:58:01.202573 kernel: PCI: CLS 0 bytes, default 64 Jan 23 23:58:01.202581 kernel: kvm [1]: HYP mode not available Jan 23 23:58:01.203613 kernel: Initialise system trusted keyrings Jan 23 23:58:01.203626 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 23:58:01.203634 kernel: Key type asymmetric registered Jan 23 23:58:01.203641 kernel: Asymmetric key parser 'x509' registered Jan 23 23:58:01.203648 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 23:58:01.203660 kernel: io scheduler mq-deadline registered Jan 23 23:58:01.203667 kernel: io scheduler kyber registered Jan 23 23:58:01.203675 kernel: io scheduler bfq registered Jan 23 23:58:01.203682 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 23:58:01.203689 kernel: thunder_xcv, ver 1.0 Jan 23 23:58:01.203696 kernel: thunder_bgx, ver 1.0 Jan 23 23:58:01.203704 kernel: nicpf, ver 1.0 Jan 23 23:58:01.203711 kernel: nicvf, ver 1.0 Jan 23 23:58:01.203843 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 23 23:58:01.203916 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T23:58:00 UTC (1769212680) Jan 23 23:58:01.203926 kernel: efifb: probing for efifb Jan 23 23:58:01.203934 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 23 23:58:01.203941 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 23 23:58:01.203949 kernel: efifb: scrolling: redraw Jan 23 23:58:01.203956 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 23 23:58:01.203964 kernel: Console: switching to colour frame buffer device 128x48 Jan 23 23:58:01.203971 kernel: fb0: EFI VGA frame buffer device Jan 23 23:58:01.203980 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 23 23:58:01.203988 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 23:58:01.203995 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Jan 23 23:58:01.204002 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 23 23:58:01.204010 kernel: watchdog: Hard watchdog permanently disabled Jan 23 23:58:01.204017 kernel: NET: Registered PF_INET6 protocol family Jan 23 23:58:01.204024 kernel: Segment Routing with IPv6 Jan 23 23:58:01.204031 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 23:58:01.204039 kernel: NET: Registered PF_PACKET protocol family Jan 23 23:58:01.204047 kernel: Key type dns_resolver registered Jan 23 23:58:01.204054 kernel: registered taskstats version 1 Jan 23 23:58:01.204061 kernel: Loading compiled-in X.509 certificates Jan 23 23:58:01.204069 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: e1080b1efd8e2d5332b6814128fba42796535445' Jan 23 23:58:01.204076 kernel: Key type .fscrypt registered Jan 23 23:58:01.204083 kernel: Key type fscrypt-provisioning registered Jan 23 23:58:01.204090 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 23 23:58:01.204097 kernel: ima: Allocated hash algorithm: sha1 Jan 23 23:58:01.204105 kernel: ima: No architecture policies found Jan 23 23:58:01.204113 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 23 23:58:01.204121 kernel: clk: Disabling unused clocks Jan 23 23:58:01.204128 kernel: Freeing unused kernel memory: 39424K Jan 23 23:58:01.204135 kernel: Run /init as init process Jan 23 23:58:01.204142 kernel: with arguments: Jan 23 23:58:01.204149 kernel: /init Jan 23 23:58:01.204156 kernel: with environment: Jan 23 23:58:01.204163 kernel: HOME=/ Jan 23 23:58:01.204171 kernel: TERM=linux Jan 23 23:58:01.204180 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 23 23:58:01.204191 systemd[1]: Detected virtualization microsoft. Jan 23 23:58:01.204199 systemd[1]: Detected architecture arm64. Jan 23 23:58:01.204206 systemd[1]: Running in initrd. Jan 23 23:58:01.204214 systemd[1]: No hostname configured, using default hostname. Jan 23 23:58:01.204221 systemd[1]: Hostname set to . Jan 23 23:58:01.204230 systemd[1]: Initializing machine ID from random generator. Jan 23 23:58:01.204240 systemd[1]: Queued start job for default target initrd.target. Jan 23 23:58:01.204248 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:58:01.204256 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:58:01.204265 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 23:58:01.204272 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 23:58:01.204280 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 23:58:01.204288 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 23:58:01.204298 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 23:58:01.204307 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 23:58:01.204315 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:58:01.204323 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:58:01.204331 systemd[1]: Reached target paths.target - Path Units. Jan 23 23:58:01.204338 systemd[1]: Reached target slices.target - Slice Units. Jan 23 23:58:01.204346 systemd[1]: Reached target swap.target - Swaps. Jan 23 23:58:01.204354 systemd[1]: Reached target timers.target - Timer Units. Jan 23 23:58:01.204362 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:58:01.204371 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:58:01.204379 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 23:58:01.204387 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 23 23:58:01.204395 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 23 23:58:01.204403 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 23:58:01.204411 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:58:01.204419 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 23:58:01.204427 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 23:58:01.204436 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 23:58:01.204444 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 23:58:01.204452 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 23:58:01.204460 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 23:58:01.204468 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 23:58:01.204490 systemd-journald[217]: Collecting audit messages is disabled. Jan 23 23:58:01.204511 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:58:01.204519 systemd-journald[217]: Journal started Jan 23 23:58:01.204538 systemd-journald[217]: Runtime Journal (/run/log/journal/e16717c89c884cea9486a351162b1531) is 8.0M, max 78.5M, 70.5M free. Jan 23 23:58:01.207555 systemd-modules-load[218]: Inserted module 'overlay' Jan 23 23:58:01.217821 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 23:58:01.231606 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 23:58:01.234206 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 23:58:01.245022 kernel: Bridge firewalling registered Jan 23 23:58:01.240136 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 23 23:58:01.241036 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:58:01.250570 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 23:58:01.258017 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 23:58:01.270986 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:58:01.285831 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 23:58:01.299766 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:58:01.311906 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 23:58:01.324300 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 23:58:01.335816 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:58:01.352658 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:58:01.357504 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 23:58:01.367469 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:58:01.386821 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 23:58:01.399318 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 23:58:01.404723 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 23 23:58:01.423721 dracut-cmdline[252]: dracut-dracut-053 Jan 23 23:58:01.429443 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09 Jan 23 23:58:01.460384 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:58:01.480482 systemd-resolved[255]: Positive Trust Anchors: Jan 23 23:58:01.480497 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 23:58:01.480529 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 23:58:01.486346 systemd-resolved[255]: Defaulting to hostname 'linux'. Jan 23 23:58:01.487204 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 23:58:01.544525 kernel: SCSI subsystem initialized Jan 23 23:58:01.499239 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:58:01.552599 kernel: Loading iSCSI transport class v2.0-870. Jan 23 23:58:01.562616 kernel: iscsi: registered transport (tcp) Jan 23 23:58:01.578794 kernel: iscsi: registered transport (qla4xxx) Jan 23 23:58:01.578824 kernel: QLogic iSCSI HBA Driver Jan 23 23:58:01.615955 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 23:58:01.631744 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 23:58:01.659602 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 23:58:01.659656 kernel: device-mapper: uevent: version 1.0.3 Jan 23 23:58:01.664716 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 23 23:58:01.711619 kernel: raid6: neonx8 gen() 15802 MB/s Jan 23 23:58:01.730596 kernel: raid6: neonx4 gen() 15698 MB/s Jan 23 23:58:01.749598 kernel: raid6: neonx2 gen() 13294 MB/s Jan 23 23:58:01.769594 kernel: raid6: neonx1 gen() 10545 MB/s Jan 23 23:58:01.788594 kernel: raid6: int64x8 gen() 6981 MB/s Jan 23 23:58:01.807594 kernel: raid6: int64x4 gen() 7374 MB/s Jan 23 23:58:01.827599 kernel: raid6: int64x2 gen() 6146 MB/s Jan 23 23:58:01.849326 kernel: raid6: int64x1 gen() 5071 MB/s Jan 23 23:58:01.849336 kernel: raid6: using algorithm neonx8 gen() 15802 MB/s Jan 23 23:58:01.871412 kernel: raid6: .... 
xor() 12044 MB/s, rmw enabled Jan 23 23:58:01.871432 kernel: raid6: using neon recovery algorithm Jan 23 23:58:01.878601 kernel: xor: measuring software checksum speed Jan 23 23:58:01.884160 kernel: 8regs : 18960 MB/sec Jan 23 23:58:01.884176 kernel: 32regs : 19707 MB/sec Jan 23 23:58:01.887715 kernel: arm64_neon : 27087 MB/sec Jan 23 23:58:01.891127 kernel: xor: using function: arm64_neon (27087 MB/sec) Jan 23 23:58:01.940768 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 23:58:01.951148 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:58:01.963705 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:58:01.983375 systemd-udevd[438]: Using default interface naming scheme 'v255'. Jan 23 23:58:01.987759 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:58:02.004691 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 23:58:02.025025 dracut-pre-trigger[453]: rd.md=0: removing MD RAID activation Jan 23 23:58:02.053157 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:58:02.065762 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 23:58:02.104431 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:58:02.123754 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 23:58:02.144109 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 23:58:02.152216 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:58:02.167270 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:58:02.182737 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 23:58:02.205778 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 23:58:02.222474 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:58:02.222649 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:58:02.240794 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 23:58:02.252782 kernel: hv_vmbus: Vmbus version:5.3 Jan 23 23:58:02.252805 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 23 23:58:02.258568 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:58:02.258827 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:58:02.277904 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:58:02.308435 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 23 23:58:02.308469 kernel: hv_vmbus: registering driver hv_netvsc Jan 23 23:58:02.308479 kernel: hv_vmbus: registering driver hv_storvsc Jan 23 23:58:02.308488 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 23 23:58:02.308504 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 23 23:58:02.317454 kernel: hv_vmbus: registering driver hid_hyperv Jan 23 23:58:02.317952 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 23 23:58:02.346645 kernel: scsi host1: storvsc_host_t Jan 23 23:58:02.346799 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 23 23:58:02.346810 kernel: scsi host0: storvsc_host_t Jan 23 23:58:02.346915 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 23 23:58:02.323377 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:58:02.375729 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 23 23:58:02.375901 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 23 23:58:02.376014 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 23 23:58:02.375940 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:58:02.394525 kernel: PTP clock support registered Jan 23 23:58:02.394544 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 23 23:58:02.376035 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:58:02.413753 kernel: hv_netvsc 002248be-51da-0022-48be-51da002248be eth0: VF slot 1 added Jan 23 23:58:02.413900 kernel: hv_utils: Registering HyperV Utility Driver Jan 23 23:58:02.395112 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:58:02.432659 kernel: hv_vmbus: registering driver hv_utils Jan 23 23:58:02.432681 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 23 23:58:02.440801 kernel: hv_vmbus: registering driver hv_pci Jan 23 23:58:02.440840 kernel: hv_utils: Heartbeat IC version 3.0 Jan 23 23:58:02.444572 kernel: hv_utils: Shutdown IC version 3.2 Jan 23 23:58:02.687891 kernel: hv_utils: TimeSync IC version 4.0 Jan 23 23:58:02.687929 kernel: hv_pci c272bd33-5bf5-4ddf-8bda-36c29ee11ce1: PCI VMBus probing: Using version 0x10004 Jan 23 23:58:02.683512 systemd-resolved[255]: Clock change detected. Flushing caches. Jan 23 23:58:02.692931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:58:02.742755 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 23 23:58:02.742935 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 23 23:58:02.743028 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 23 23:58:02.743113 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 23 23:58:02.743196 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 23 23:58:02.743282 kernel: hv_pci c272bd33-5bf5-4ddf-8bda-36c29ee11ce1: PCI host bridge to bus 5bf5:00 Jan 23 23:58:02.743373 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:58:02.743383 kernel: pci_bus 5bf5:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 23 23:58:02.743530 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 23 23:58:02.743620 kernel: pci_bus 5bf5:00: No busn resource found for root bus, will use [bus 00-ff] Jan 23 23:58:02.715520 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 23 23:58:02.758472 kernel: pci 5bf5:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 23 23:58:02.765344 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#211 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 23:58:02.773438 kernel: pci 5bf5:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 23 23:58:02.777417 kernel: pci 5bf5:00:02.0: enabling Extended Tags Jan 23 23:58:02.796420 kernel: pci 5bf5:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5bf5:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 23 23:58:02.808408 kernel: pci_bus 5bf5:00: busn_res: [bus 00-ff] end is updated to 00 Jan 23 23:58:02.808583 kernel: pci 5bf5:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 23 23:58:02.809624 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:58:02.834477 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 23:58:02.858240 kernel: mlx5_core 5bf5:00:02.0: enabling device (0000 -> 0002) Jan 23 23:58:02.864408 kernel: mlx5_core 5bf5:00:02.0: firmware version: 16.30.5026 Jan 23 23:58:03.063093 kernel: hv_netvsc 002248be-51da-0022-48be-51da002248be eth0: VF registering: eth1 Jan 23 23:58:03.063384 kernel: mlx5_core 5bf5:00:02.0 eth1: joined to eth0 Jan 23 23:58:03.069409 kernel: mlx5_core 5bf5:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 23 23:58:03.083619 kernel: mlx5_core 5bf5:00:02.0 enP23541s1: renamed from eth1 Jan 23 23:58:03.239683 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 23 23:58:03.291570 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (490) Jan 23 23:58:03.304201 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 23 23:58:03.345426 kernel: BTRFS: device fsid 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (486) Jan 23 23:58:03.357723 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 23 23:58:03.363320 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 23 23:58:03.387538 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 23:58:03.398618 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 23 23:58:03.421412 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:58:03.427407 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:58:04.443439 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:58:04.444207 disk-uuid[611]: The operation has completed successfully. Jan 23 23:58:04.506169 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 23:58:04.508622 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 23:58:04.535533 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 23:58:04.548320 sh[724]: Success Jan 23 23:58:04.584418 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 23 23:58:04.863765 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 23:58:04.871506 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 23:58:04.876683 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 23 23:58:04.908826 kernel: BTRFS info (device dm-0): first mount of filesystem 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe Jan 23 23:58:04.908883 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:58:04.914202 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 23 23:58:04.918295 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 23:58:04.921809 kernel: BTRFS info (device dm-0): using free space tree Jan 23 23:58:05.279033 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 23:58:05.283452 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 23:58:05.303670 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 23:58:05.312813 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 23:58:05.336642 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:58:05.336685 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:58:05.340310 kernel: BTRFS info (device sda6): using free space tree Jan 23 23:58:05.378425 kernel: BTRFS info (device sda6): auto enabling async discard Jan 23 23:58:05.386209 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 23 23:58:05.395612 kernel: BTRFS info (device sda6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:58:05.401794 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 23:58:05.412593 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 23:58:05.426084 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:58:05.442511 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 23:58:05.465961 systemd-networkd[908]: lo: Link UP Jan 23 23:58:05.465971 systemd-networkd[908]: lo: Gained carrier Jan 23 23:58:05.467471 systemd-networkd[908]: Enumeration completed Jan 23 23:58:05.469016 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 23:58:05.469230 systemd-networkd[908]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:58:05.469233 systemd-networkd[908]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:58:05.474008 systemd[1]: Reached target network.target - Network. Jan 23 23:58:05.551413 kernel: mlx5_core 5bf5:00:02.0 enP23541s1: Link up Jan 23 23:58:05.589401 kernel: hv_netvsc 002248be-51da-0022-48be-51da002248be eth0: Data path switched to VF: enP23541s1 Jan 23 23:58:05.589861 systemd-networkd[908]: enP23541s1: Link UP Jan 23 23:58:05.589942 systemd-networkd[908]: eth0: Link UP Jan 23 23:58:05.590084 systemd-networkd[908]: eth0: Gained carrier Jan 23 23:58:05.590092 systemd-networkd[908]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 23 23:58:05.608734 systemd-networkd[908]: enP23541s1: Gained carrier Jan 23 23:58:05.622424 systemd-networkd[908]: eth0: DHCPv4 address 10.200.20.20/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 23:58:06.368087 ignition[895]: Ignition 2.19.0 Jan 23 23:58:06.368097 ignition[895]: Stage: fetch-offline Jan 23 23:58:06.371212 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:58:06.368133 ignition[895]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:58:06.384585 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 23 23:58:06.368140 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:58:06.368224 ignition[895]: parsed url from cmdline: "" Jan 23 23:58:06.368227 ignition[895]: no config URL provided Jan 23 23:58:06.368231 ignition[895]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 23:58:06.368238 ignition[895]: no config at "/usr/lib/ignition/user.ign" Jan 23 23:58:06.368242 ignition[895]: failed to fetch config: resource requires networking Jan 23 23:58:06.368403 ignition[895]: Ignition finished successfully Jan 23 23:58:06.405477 ignition[922]: Ignition 2.19.0 Jan 23 23:58:06.405482 ignition[922]: Stage: fetch Jan 23 23:58:06.405642 ignition[922]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:58:06.405651 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:58:06.405733 ignition[922]: parsed url from cmdline: "" Jan 23 23:58:06.405736 ignition[922]: no config URL provided Jan 23 23:58:06.405740 ignition[922]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 23:58:06.405747 ignition[922]: no config at "/usr/lib/ignition/user.ign" Jan 23 23:58:06.405765 ignition[922]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 23 23:58:06.538903 ignition[922]: GET result: OK Jan 23 23:58:06.539007 ignition[922]: config has been read from IMDS userdata Jan 23 23:58:06.539049 ignition[922]: parsing config with SHA512: 74c8fb60f2dd99e442dcc763d6089bd21c2fbe3bd5b3438ca1cffc09da48d8fe5743fd586764a6f1333af95627e50da32845e56afcedbc49c990b25ef3f6be92 Jan 23 23:58:06.544907 unknown[922]: fetched base config from "system" Jan 23 23:58:06.545245 ignition[922]: fetch: fetch complete Jan 23 23:58:06.544914 unknown[922]: fetched base config from "system" Jan 23 23:58:06.545249 ignition[922]: fetch: fetch passed Jan 23 23:58:06.544919 unknown[922]: fetched user config from "azure" Jan 23 23:58:06.545282 ignition[922]: Ignition finished successfully Jan 23 23:58:06.549048 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 23:58:06.571597 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 23:58:06.592071 ignition[929]: Ignition 2.19.0 Jan 23 23:58:06.592082 ignition[929]: Stage: kargs Jan 23 23:58:06.592240 ignition[929]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:58:06.599296 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 23:58:06.592253 ignition[929]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:58:06.593223 ignition[929]: kargs: kargs passed Jan 23 23:58:06.593266 ignition[929]: Ignition finished successfully Jan 23 23:58:06.622824 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 23:58:06.637273 ignition[935]: Ignition 2.19.0 Jan 23 23:58:06.637286 ignition[935]: Stage: disks Jan 23 23:58:06.642315 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 23 23:58:06.571597 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 23:58:06.592071 ignition[929]: Ignition 2.19.0 Jan 23 23:58:06.592082 ignition[929]: Stage: kargs Jan 23 23:58:06.592240 ignition[929]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:58:06.592253 ignition[929]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:58:06.593223 ignition[929]: kargs: kargs passed Jan 23 23:58:06.593266 ignition[929]: Ignition finished successfully Jan 23 23:58:06.599296 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 23:58:06.622824 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 23:58:06.637273 ignition[935]: Ignition 2.19.0 Jan 23 23:58:06.637286 ignition[935]: Stage: disks Jan 23 23:58:06.637464 ignition[935]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:58:06.637475 ignition[935]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:58:06.638356 ignition[935]: disks: disks passed Jan 23 23:58:06.638489 ignition[935]: Ignition finished successfully Jan 23 23:58:06.642315 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 23:58:06.647477 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 23:58:06.656243 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 23:58:06.665214 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 23:58:06.674463 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 23:58:06.683614 systemd[1]: Reached target basic.target - Basic System. Jan 23 23:58:06.708636 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 23:58:06.779660 systemd-fsck[944]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 23 23:58:06.788922 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 23:58:06.802560 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 23:58:06.855420 kernel: EXT4-fs (sda9): mounted filesystem 4f5f6971-6639-4171-835a-63d34aadb0e5 r/w with ordered data mode. Quota mode: none. Jan 23 23:58:06.855815 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 23:58:06.859749 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 23:58:06.903480 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 23:58:06.926409 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (955) Jan 23 23:58:06.926500 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 23:58:06.948107 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:58:06.948126 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:58:06.948136 kernel: BTRFS info (device sda6): using free space tree Jan 23 23:58:06.949594 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 23 23:58:06.961122 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 23:58:06.961159 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 23:58:06.977248 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 23:58:06.994626 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 23:58:07.000053 kernel: BTRFS info (device sda6): auto enabling async discard Jan 23 23:58:07.005565 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
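By this point the initrd has assembled the future root under /sysroot: the ext4 ROOT filesystem from sda9 plus the btrfs OEM and /usr mounts. A small illustrative sketch that lists those mounts by parsing /proc/mounts (assumes a Linux system where /proc is mounted):

# Illustrative only: list every mount at or below /sysroot, the way the
# initrd assembles the real root before switch-root.
with open("/proc/mounts") as mounts:
    for line in mounts:
        src, mountpoint, fstype, options = line.split()[:4]
        if mountpoint == "/sysroot" or mountpoint.startswith("/sysroot/"):
            print(f"{mountpoint:16} {fstype:6} {src} ({options.split(',')[0]})")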
Jan 23 23:58:07.369553 systemd-networkd[908]: eth0: Gained IPv6LL Jan 23 23:58:07.459998 coreos-metadata[957]: Jan 23 23:58:07.459 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 23 23:58:07.468375 coreos-metadata[957]: Jan 23 23:58:07.468 INFO Fetch successful Jan 23 23:58:07.473384 coreos-metadata[957]: Jan 23 23:58:07.473 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 23 23:58:07.492990 coreos-metadata[957]: Jan 23 23:58:07.492 INFO Fetch successful Jan 23 23:58:07.509017 coreos-metadata[957]: Jan 23 23:58:07.508 INFO wrote hostname ci-4081.3.6-n-2a642b76b3 to /sysroot/etc/hostname Jan 23 23:58:07.517380 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 23:58:07.695018 initrd-setup-root[984]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 23:58:07.716949 initrd-setup-root[991]: cut: /sysroot/etc/group: No such file or directory Jan 23 23:58:07.739357 initrd-setup-root[998]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 23:58:07.746775 initrd-setup-root[1005]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 23:58:08.979005 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 23:58:08.992629 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 23:58:09.000544 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 23:58:09.015158 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 23:58:09.016823 kernel: BTRFS info (device sda6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:58:09.034424 ignition[1072]: INFO : Ignition 2.19.0 Jan 23 23:58:09.034424 ignition[1072]: INFO : Stage: mount Jan 23 23:58:09.034424 ignition[1072]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:58:09.034424 ignition[1072]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:58:09.042993 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 23:58:09.052036 ignition[1072]: INFO : mount: mount passed Jan 23 23:58:09.052036 ignition[1072]: INFO : Ignition finished successfully Jan 23 23:58:09.070553 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 23:58:09.083411 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 23:58:09.098588 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 23:58:09.117404 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1084) Jan 23 23:58:09.127989 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:58:09.128005 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:58:09.131365 kernel: BTRFS info (device sda6): using free space tree Jan 23 23:58:09.139406 kernel: BTRFS info (device sda6): auto enabling async discard Jan 23 23:58:09.140042 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
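The Flatcar Metadata Hostname Agent above derives the hostname from the IMDS compute/name endpoint and writes it under the new root. A rough Python equivalent using the same URL and api-version as the coreos-metadata line; the target path is shortened to /etc/hostname here, whereas the real agent writes /sysroot/etc/hostname:

import urllib.request

# Endpoint and api-version copied from the coreos-metadata log line above.
URL = ("http://169.254.169.254/metadata/instance/compute/name"
       "?api-version=2017-08-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    name = resp.read().decode().strip()

# The agent writes the name under the new root (/sysroot/etc/hostname);
# the path is shortened here for illustration.
with open("/etc/hostname", "w") as f:
    f.write(name + "\n")
print("hostname:", name)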
Jan 23 23:58:09.163682 ignition[1101]: INFO : Ignition 2.19.0 Jan 23 23:58:09.163682 ignition[1101]: INFO : Stage: files Jan 23 23:58:09.170085 ignition[1101]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:58:09.170085 ignition[1101]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:58:09.170085 ignition[1101]: DEBUG : files: compiled without relabeling support, skipping Jan 23 23:58:09.184056 ignition[1101]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 23:58:09.184056 ignition[1101]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 23:58:09.286847 ignition[1101]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 23:58:09.287208 unknown[1101]: wrote ssh authorized keys file for user: core Jan 23 23:58:09.293127 ignition[1101]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 23:58:09.293127 ignition[1101]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 23:58:09.308360 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 23 23:58:09.308360 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 23 23:58:09.340732 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 23:58:09.474420 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 23 23:58:09.474420 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 23 23:58:09.491068 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jan 23 23:58:09.906077 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 23:58:10.209222 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 23 23:58:10.209222 ignition[1101]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 23:58:10.224421 ignition[1101]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 23:58:10.233681 ignition[1101]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 23:58:10.233681 ignition[1101]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 23:58:10.233681 ignition[1101]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 23 23:58:10.233681 ignition[1101]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 23:58:10.233681 ignition[1101]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 23:58:10.233681 ignition[1101]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 23:58:10.233681 ignition[1101]: INFO : files: files passed Jan 23 23:58:10.233681 ignition[1101]: INFO : Ignition finished successfully Jan 23 23:58:10.239669 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 23:58:10.253605 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 23:58:10.266547 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 23:58:10.277202 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 23:58:10.277288 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 23:58:10.323874 initrd-setup-root-after-ignition[1129]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:58:10.323874 initrd-setup-root-after-ignition[1129]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:58:10.338267 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:58:10.342430 initrd-setup-root-after-ignition[1133]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:58:10.348023 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 23:58:10.374609 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 23:58:10.401102 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 23:58:10.401219 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
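Each op(...) in the files stage above is driven by one entry in the Ignition config fetched earlier: files with remote contents, symlinks, and systemd units to install and enable. The Python sketch below assembles a skeleton of such a config; it follows the commonly documented Ignition spec-v3 field names and reuses only paths and URLs from this log, but it is an illustration, not the exact config this node received.

import json

# Skeleton of a spec-v3 Ignition config covering three of the operations
# above. Paths are as Ignition sees them, without the /sysroot prefix that
# appears in the log; the real config also carried install.sh, nginx.yaml,
# the nfs-*.yaml files, update.conf and the ssh keys.
config = {
    "ignition": {"version": "3.4.0"},
    "storage": {
        "files": [
            {
                # op(3): fetched over HTTPS into /opt
                "path": "/opt/helm-v3.17.3-linux-arm64.tar.gz",
                "contents": {
                    "source": "https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz"
                },
            },
        ],
        "links": [
            {
                # op(9): activates the kubernetes sysext image
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw",
            },
        ],
    },
    # op(b)/op(d): install and enable the unit that unpacks helm
    "systemd": {"units": [{"name": "prepare-helm.service", "enabled": True}]},
}

print(json.dumps(config, indent=2))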
Jan 23 23:58:10.410861 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 23:58:10.420642 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 23:58:10.429410 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 23:58:10.431547 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 23:58:10.458795 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:58:10.478540 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 23:58:10.496151 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:58:10.501359 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:58:10.511544 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 23:58:10.520359 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 23:58:10.520422 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:58:10.533547 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 23:58:10.542948 systemd[1]: Stopped target basic.target - Basic System. Jan 23 23:58:10.551343 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 23:58:10.559886 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 23:58:10.569352 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 23:58:10.579012 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 23:58:10.588031 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:58:10.597488 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 23:58:10.607248 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 23:58:10.615686 systemd[1]: Stopped target swap.target - Swaps. Jan 23 23:58:10.623364 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 23:58:10.623432 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:58:10.635386 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:58:10.644557 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:58:10.654350 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 23:58:10.659108 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:58:10.664670 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 23:58:10.664731 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 23:58:10.679780 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 23:58:10.679826 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:58:10.689294 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 23:58:10.689331 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 23:58:10.698008 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 23 23:58:10.698044 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 23:58:10.723565 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 23 23:58:10.736543 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 23:58:10.736612 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:58:10.762206 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 23:58:10.771601 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 23:58:10.771659 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:58:10.772525 ignition[1154]: INFO : Ignition 2.19.0 Jan 23 23:58:10.772525 ignition[1154]: INFO : Stage: umount Jan 23 23:58:10.772525 ignition[1154]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:58:10.772525 ignition[1154]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:58:10.772525 ignition[1154]: INFO : umount: umount passed Jan 23 23:58:10.772525 ignition[1154]: INFO : Ignition finished successfully Jan 23 23:58:10.777154 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 23:58:10.777191 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:58:10.791549 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 23:58:10.792093 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 23:58:10.792178 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 23:58:10.803621 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 23:58:10.805418 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 23:58:10.823340 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 23:58:10.823447 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 23:58:10.832771 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 23:58:10.832816 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 23:58:10.837362 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 23:58:10.837408 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 23:58:10.847432 systemd[1]: Stopped target network.target - Network. Jan 23 23:58:10.851205 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 23:58:10.851245 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:58:10.860655 systemd[1]: Stopped target paths.target - Path Units. Jan 23 23:58:10.868960 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 23:58:10.873154 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:58:10.878823 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 23:58:10.888150 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 23:58:10.897404 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 23:58:10.897459 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:58:10.905746 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 23:58:10.905779 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:58:10.914400 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 23:58:10.914448 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 23:58:10.923485 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 23:58:10.923521 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 23:58:10.932261 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 23:58:10.945455 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 23:58:10.953667 systemd-networkd[908]: eth0: DHCPv6 lease lost Jan 23 23:58:10.955099 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 23:58:10.956426 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 23:58:10.964184 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 23:58:10.964245 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:58:10.987619 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 23:58:10.995922 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 23:58:10.995987 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:58:11.007901 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:58:11.026003 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 23:58:11.026095 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 23:58:11.035800 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 23:58:11.037427 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:58:11.057695 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 23:58:11.057796 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 23:58:11.067212 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 23:58:11.067245 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:58:11.078041 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 23:58:11.078091 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:58:11.092016 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 23:58:11.092097 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 23:58:11.104203 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:58:11.104250 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:58:11.128993 kernel: hv_netvsc 002248be-51da-0022-48be-51da002248be eth0: Data path switched from VF: enP23541s1 Jan 23 23:58:11.138604 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 23:58:11.151514 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:58:11.151574 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:58:11.162499 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 23:58:11.162541 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 23:58:11.173268 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 23:58:11.173312 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:58:11.183028 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 23:58:11.183073 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:58:11.192975 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:58:11.193064 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:58:11.202684 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 23:58:11.202792 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 23:58:11.212186 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 23:58:11.212263 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 23:58:11.220792 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 23:58:11.220864 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 23:58:11.235612 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 23:58:11.244081 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 23:58:11.244150 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 23:58:11.270629 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 23:58:11.423464 systemd[1]: Switching root. Jan 23 23:58:11.512640 systemd-journald[217]: Journal stopped Jan 23 23:58:16.044453 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Jan 23 23:58:16.044476 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 23:58:16.044487 kernel: SELinux: policy capability open_perms=1 Jan 23 23:58:16.044497 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 23:58:16.044505 kernel: SELinux: policy capability always_check_network=0 Jan 23 23:58:16.044512 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 23:58:16.044521 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 23:58:16.044529 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 23:58:16.044537 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 23:58:16.044545 kernel: audit: type=1403 audit(1769212692.713:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 23:58:16.044555 systemd[1]: Successfully loaded SELinux policy in 176.435ms. Jan 23 23:58:16.044565 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.931ms. Jan 23 23:58:16.044575 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 23 23:58:16.044584 systemd[1]: Detected virtualization microsoft. Jan 23 23:58:16.044594 systemd[1]: Detected architecture arm64. Jan 23 23:58:16.044606 systemd[1]: Detected first boot. Jan 23 23:58:16.044615 systemd[1]: Hostname set to <ci-4081.3.6-n-2a642b76b3>. Jan 23 23:58:16.044624 systemd[1]: Initializing machine ID from random generator. Jan 23 23:58:16.044633 zram_generator::config[1194]: No configuration found. Jan 23 23:58:16.044643 systemd[1]: Populated /etc with preset unit settings. Jan 23 23:58:16.044652 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 23:58:16.044662 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 23:58:16.044671 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 23:58:16.044681 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 23:58:16.044690 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 23:58:16.044699 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 23:58:16.044708 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 23:58:16.044718 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 23:58:16.044728 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 23:58:16.044738 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 23:58:16.044747 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 23:58:16.044756 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:58:16.044766 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:58:16.044775 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 23:58:16.044784 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 23:58:16.044794 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 23:58:16.044804 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 23:58:16.044814 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 23 23:58:16.044824 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:58:16.044833 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 23:58:16.044845 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 23:58:16.044854 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 23:58:16.044864 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 23:58:16.044873 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:58:16.044884 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 23:58:16.044894 systemd[1]: Reached target slices.target - Slice Units. Jan 23 23:58:16.044903 systemd[1]: Reached target swap.target - Swaps. Jan 23 23:58:16.044913 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 23:58:16.044922 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 23:58:16.044932 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:58:16.044941 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 23:58:16.044952 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:58:16.044962 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 23:58:16.044972 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 23:58:16.044981 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 23:58:16.044991 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 23:58:16.045000 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 23:58:16.045012 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 23:58:16.045021 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 23 23:58:16.045031 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 23:58:16.045041 systemd[1]: Reached target machines.target - Containers. Jan 23 23:58:16.045051 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 23:58:16.045060 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:58:16.045070 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 23:58:16.045080 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 23:58:16.045091 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:58:16.045100 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:58:16.045110 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:58:16.045119 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 23:58:16.045129 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:58:16.045139 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 23:58:16.045148 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 23:58:16.045158 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 23:58:16.045168 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 23:58:16.045178 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 23:58:16.045187 kernel: fuse: init (API version 7.39) Jan 23 23:58:16.045196 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 23:58:16.045205 kernel: loop: module loaded Jan 23 23:58:16.045215 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 23:58:16.045225 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 23:58:16.045247 systemd-journald[1297]: Collecting audit messages is disabled. Jan 23 23:58:16.045267 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 23:58:16.045278 systemd-journald[1297]: Journal started Jan 23 23:58:16.045298 systemd-journald[1297]: Runtime Journal (/run/log/journal/f81e5e5bdca045d8af5728e07dbb99f9) is 8.0M, max 78.5M, 70.5M free. Jan 23 23:58:15.187958 systemd[1]: Queued start job for default target multi-user.target. Jan 23 23:58:15.330107 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 23 23:58:15.330485 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 23:58:15.330799 systemd[1]: systemd-journald.service: Consumed 2.454s CPU time. Jan 23 23:58:16.067672 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 23:58:16.074737 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 23:58:16.074771 systemd[1]: Stopped verity-setup.service. Jan 23 23:58:16.095770 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 23:58:16.091544 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 23:58:16.096182 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jan 23 23:58:16.104493 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 23:58:16.109009 kernel: ACPI: bus type drm_connector registered Jan 23 23:58:16.109460 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 23:58:16.116829 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 23:58:16.121895 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 23:58:16.126317 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 23:58:16.131668 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:58:16.137262 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 23:58:16.137500 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 23:58:16.142953 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:58:16.143076 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:58:16.148249 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:58:16.148376 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:58:16.153270 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:58:16.153384 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:58:16.159055 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 23:58:16.159187 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 23:58:16.164160 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:58:16.164279 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:58:16.169518 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 23:58:16.175189 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 23:58:16.180333 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 23:58:16.186031 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:58:16.199847 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 23:58:16.211456 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 23:58:16.217188 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 23:58:16.222208 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 23:58:16.222239 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 23:58:16.227544 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 23 23:58:16.234428 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 23:58:16.242561 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 23:58:16.247783 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:58:16.263608 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 23:58:16.269475 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 23 23:58:16.275271 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:58:16.276132 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 23:58:16.281649 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:58:16.282955 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:58:16.288694 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 23:58:16.296633 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 23:58:16.306561 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 23 23:58:16.315497 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 23:58:16.320896 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 23:58:16.326531 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 23:58:16.333585 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 23:58:16.346050 kernel: loop0: detected capacity change from 0 to 211168 Jan 23 23:58:16.347805 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 23:58:16.359465 systemd-journald[1297]: Time spent on flushing to /var/log/journal/f81e5e5bdca045d8af5728e07dbb99f9 is 20.200ms for 898 entries. Jan 23 23:58:16.359465 systemd-journald[1297]: System Journal (/var/log/journal/f81e5e5bdca045d8af5728e07dbb99f9) is 8.0M, max 2.6G, 2.6G free. Jan 23 23:58:16.394970 systemd-journald[1297]: Received client request to flush runtime journal. Jan 23 23:58:16.366708 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 23 23:58:16.371990 udevadm[1331]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 23 23:58:16.397458 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 23:58:16.445412 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 23:58:16.449949 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 23:58:16.450627 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 23 23:58:16.485822 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:58:16.500467 kernel: loop1: detected capacity change from 0 to 114328 Jan 23 23:58:16.552747 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 23:58:16.566631 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 23:58:16.651939 systemd-tmpfiles[1347]: ACLs are not supported, ignoring. Jan 23 23:58:16.651953 systemd-tmpfiles[1347]: ACLs are not supported, ignoring. Jan 23 23:58:16.655975 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:58:16.915417 kernel: loop2: detected capacity change from 0 to 31320 Jan 23 23:58:17.093426 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 23:58:17.103522 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
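The statistics above are systemd-journald flushing the runtime journal to persistent storage under /var/log/journal. Entries like the Ignition messages earlier can then be read programmatically; a sketch using the python3-systemd bindings (an assumption: that package is not in the standard library and must be installed separately):

# Requires the python3-systemd bindings (not part of the standard library).
from systemd import journal

reader = journal.Reader()
reader.this_boot()  # restrict to the current boot, like `journalctl -b`
reader.add_match(SYSLOG_IDENTIFIER="ignition")  # only Ignition's messages

for entry in reader:
    # __REALTIME_TIMESTAMP is a datetime; MESSAGE is the log text.
    print(entry["__REALTIME_TIMESTAMP"], entry.get("MESSAGE", ""))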
Jan 23 23:58:17.128380 systemd-udevd[1352]: Using default interface naming scheme 'v255'. Jan 23 23:58:17.284579 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:58:17.301555 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 23:58:17.349649 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 23:58:17.358782 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 23 23:58:17.375477 kernel: loop3: detected capacity change from 0 to 114432 Jan 23 23:58:17.426373 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 23:58:17.452237 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 23:58:17.474408 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#198 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 23:58:17.474860 kernel: hv_vmbus: registering driver hv_balloon Jan 23 23:58:17.482703 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 23 23:58:17.486043 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 23 23:58:17.510453 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:58:17.529633 kernel: hv_vmbus: registering driver hyperv_fb Jan 23 23:58:17.529712 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 23 23:58:17.535567 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 23 23:58:17.539840 kernel: Console: switching to colour dummy device 80x25 Jan 23 23:58:17.543119 kernel: Console: switching to colour frame buffer device 128x48 Jan 23 23:58:17.551666 systemd-networkd[1363]: lo: Link UP Jan 23 23:58:17.552243 systemd-networkd[1363]: lo: Gained carrier Jan 23 23:58:17.558375 systemd-networkd[1363]: Enumeration completed Jan 23 23:58:17.558672 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 23:58:17.559466 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:58:17.559472 systemd-networkd[1363]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:58:17.574687 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 23:58:17.584570 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:58:17.584750 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:58:17.597775 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1374) Jan 23 23:58:17.606726 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:58:17.628505 kernel: mlx5_core 5bf5:00:02.0 enP23541s1: Link up Jan 23 23:58:17.654090 systemd-networkd[1363]: enP23541s1: Link UP Jan 23 23:58:17.654516 kernel: hv_netvsc 002248be-51da-0022-48be-51da002248be eth0: Data path switched to VF: enP23541s1 Jan 23 23:58:17.654898 systemd-networkd[1363]: eth0: Link UP Jan 23 23:58:17.654905 systemd-networkd[1363]: eth0: Gained carrier Jan 23 23:58:17.654920 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:58:17.656906 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Jan 23 23:58:17.663733 systemd-networkd[1363]: enP23541s1: Gained carrier Jan 23 23:58:17.668430 systemd-networkd[1363]: eth0: DHCPv4 address 10.200.20.20/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 23:58:17.668630 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 23:58:17.723611 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 23:58:17.754325 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 23 23:58:17.766830 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 23 23:58:17.777427 kernel: loop4: detected capacity change from 0 to 211168 Jan 23 23:58:17.797474 kernel: loop5: detected capacity change from 0 to 114328 Jan 23 23:58:17.818416 kernel: loop6: detected capacity change from 0 to 31320 Jan 23 23:58:17.831921 kernel: loop7: detected capacity change from 0 to 114432 Jan 23 23:58:17.839724 (sd-merge)[1450]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 23 23:58:17.840141 (sd-merge)[1450]: Merged extensions into '/usr'. Jan 23 23:58:17.844173 systemd[1]: Reloading requested from client PID 1328 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 23:58:17.844281 systemd[1]: Reloading... Jan 23 23:58:17.865140 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:58:17.907468 zram_generator::config[1476]: No configuration found. Jan 23 23:58:18.033707 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:58:18.108883 systemd[1]: Reloading finished in 264 ms. Jan 23 23:58:18.139023 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:58:18.146051 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 23:58:18.151688 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 23 23:58:18.160477 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:58:18.170530 systemd[1]: Starting ensure-sysext.service... Jan 23 23:58:18.174803 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 23 23:58:18.182591 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 23:58:18.184936 lvm[1540]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:58:18.195488 systemd[1]: Reloading requested from client PID 1539 ('systemctl') (unit ensure-sysext.service)... Jan 23 23:58:18.195502 systemd[1]: Reloading... Jan 23 23:58:18.222699 systemd-tmpfiles[1541]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 23:58:18.223253 systemd-tmpfiles[1541]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 23:58:18.224162 systemd-tmpfiles[1541]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 23:58:18.224575 systemd-tmpfiles[1541]: ACLs are not supported, ignoring. Jan 23 23:58:18.224706 systemd-tmpfiles[1541]: ACLs are not supported, ignoring. 
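The (sd-merge) lines above show systemd-sysext overlaying the four extension images into /usr. Activation is controlled by the *.raw entries under /etc/extensions, one of which the Ignition files stage linked earlier; a small sketch that lists those entries and then asks systemd-sysext for its own view (assumes systemd 248 or newer, which ships the systemd-sysext tool):

import os
import subprocess

EXT_DIR = "/etc/extensions"

# Each *.raw entry here is a sysext image that sd-merge overlays onto /usr;
# the kubernetes link was created by the Ignition files stage earlier.
if os.path.isdir(EXT_DIR):
    for name in sorted(os.listdir(EXT_DIR)):
        path = os.path.join(EXT_DIR, name)
        target = os.readlink(path) if os.path.islink(path) else path
        print(f"{name} -> {target}")

# systemd's own view of the merged hierarchy (tool ships with systemd >= 248).
subprocess.run(["systemd-sysext", "status"], check=False)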
Jan 23 23:58:18.249152 systemd-tmpfiles[1541]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:58:18.249289 systemd-tmpfiles[1541]: Skipping /boot Jan 23 23:58:18.256216 systemd-tmpfiles[1541]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:58:18.256337 systemd-tmpfiles[1541]: Skipping /boot Jan 23 23:58:18.267411 zram_generator::config[1572]: No configuration found. Jan 23 23:58:18.368934 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:58:18.444036 systemd[1]: Reloading finished in 248 ms. Jan 23 23:58:18.466802 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 23 23:58:18.472710 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:58:18.491719 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:58:18.499472 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 23:58:18.506778 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 23:58:18.516129 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 23:58:18.528636 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 23:58:18.537688 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:58:18.538788 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:58:18.547058 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:58:18.558103 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:58:18.563109 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:58:18.563898 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:58:18.564310 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:58:18.571837 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:58:18.571991 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:58:18.579479 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:58:18.579601 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:58:18.593719 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:58:18.597717 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:58:18.604756 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:58:18.613664 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:58:18.621820 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:58:18.623145 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 23:58:18.629248 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 23 23:58:18.634815 systemd-resolved[1635]: Positive Trust Anchors: Jan 23 23:58:18.634827 systemd-resolved[1635]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 23:58:18.634858 systemd-resolved[1635]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 23:58:18.635365 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:58:18.635507 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:58:18.640914 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:58:18.641044 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:58:18.647900 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:58:18.648015 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:58:18.659231 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:58:18.664618 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:58:18.668174 systemd-resolved[1635]: Using system hostname 'ci-4081.3.6-n-2a642b76b3'. Jan 23 23:58:18.671643 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:58:18.676828 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:58:18.685502 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:58:18.692651 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:58:18.692952 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 23:58:18.698622 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 23:58:18.703932 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:58:18.704082 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:58:18.709307 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:58:18.709447 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:58:18.714539 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:58:18.714659 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:58:18.721279 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:58:18.721475 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:58:18.730451 systemd[1]: Finished ensure-sysext.service. Jan 23 23:58:18.736220 systemd[1]: Reached target network.target - Network. Jan 23 23:58:18.737688 augenrules[1669]: No rules Jan 23 23:58:18.741158 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:58:18.746445 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 23 23:58:18.746507 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:58:18.746777 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:58:19.064645 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 23:58:19.070638 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 23:58:19.401656 systemd-networkd[1363]: eth0: Gained IPv6LL Jan 23 23:58:19.403905 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 23:58:19.410018 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 23:58:22.156107 ldconfig[1323]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 23:58:22.169570 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 23:58:22.179527 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 23:58:22.191555 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 23:58:22.197812 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 23:58:22.202449 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 23:58:22.207840 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 23:58:22.213299 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 23:58:22.217841 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 23:58:22.223164 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 23:58:22.228527 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 23:58:22.228556 systemd[1]: Reached target paths.target - Path Units. Jan 23 23:58:22.232399 systemd[1]: Reached target timers.target - Timer Units. Jan 23 23:58:22.237138 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 23:58:22.243137 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 23:58:22.251876 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 23:58:22.256662 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 23:58:22.261263 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 23:58:22.265335 systemd[1]: Reached target basic.target - Basic System. Jan 23 23:58:22.269299 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:58:22.269320 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:58:22.289470 systemd[1]: Starting chronyd.service - NTP client/server... Jan 23 23:58:22.294498 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 23:58:22.300542 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 23:58:22.309525 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jan 23 23:58:22.321227 (chronyd)[1687]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 23 23:58:22.322763 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 23:58:22.325972 jq[1691]: false Jan 23 23:58:22.328159 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 23:58:22.332467 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 23:58:22.332499 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 23 23:58:22.333548 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 23 23:58:22.338464 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 23 23:58:22.341066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:58:22.348057 chronyd[1699]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 23 23:58:22.357505 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 23:58:22.365618 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 23:58:22.365978 KVP[1695]: KVP starting; pid is:1695 Jan 23 23:58:22.373505 chronyd[1699]: Timezone right/UTC failed leap second check, ignoring Jan 23 23:58:22.373879 chronyd[1699]: Loaded seccomp filter (level 2) Jan 23 23:58:22.376516 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 23:58:22.384769 KVP[1695]: KVP LIC Version: 3.1 Jan 23 23:58:22.385942 kernel: hv_utils: KVP IC version 4.0 Jan 23 23:58:22.385954 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 23:58:22.394544 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 23:58:22.397469 extend-filesystems[1694]: Found loop4 Jan 23 23:58:22.405486 extend-filesystems[1694]: Found loop5 Jan 23 23:58:22.405486 extend-filesystems[1694]: Found loop6 Jan 23 23:58:22.405486 extend-filesystems[1694]: Found loop7 Jan 23 23:58:22.405486 extend-filesystems[1694]: Found sda Jan 23 23:58:22.405486 extend-filesystems[1694]: Found sda1 Jan 23 23:58:22.405486 extend-filesystems[1694]: Found sda2 Jan 23 23:58:22.405486 extend-filesystems[1694]: Found sda3 Jan 23 23:58:22.405486 extend-filesystems[1694]: Found usr Jan 23 23:58:22.405486 extend-filesystems[1694]: Found sda4 Jan 23 23:58:22.405486 extend-filesystems[1694]: Found sda6 Jan 23 23:58:22.405486 extend-filesystems[1694]: Found sda7 Jan 23 23:58:22.405486 extend-filesystems[1694]: Found sda9 Jan 23 23:58:22.405486 extend-filesystems[1694]: Checking size of /dev/sda9 Jan 23 23:58:22.421727 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 23:58:22.520467 extend-filesystems[1694]: Old size kept for /dev/sda9 Jan 23 23:58:22.520467 extend-filesystems[1694]: Found sr0 Jan 23 23:58:22.429793 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 23:58:22.540777 dbus-daemon[1690]: [system] SELinux support is enabled Jan 23 23:58:22.430244 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Jan 23 23:58:22.435261 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 23:58:22.573093 update_engine[1717]: I20260123 23:58:22.554414 1717 main.cc:92] Flatcar Update Engine starting Jan 23 23:58:22.573093 update_engine[1717]: I20260123 23:58:22.567360 1717 update_check_scheduler.cc:74] Next update check in 2m8s Jan 23 23:58:22.445654 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 23:58:22.579607 jq[1722]: true Jan 23 23:58:22.457303 systemd[1]: Started chronyd.service - NTP client/server. Jan 23 23:58:22.463962 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 23:58:22.464769 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 23:58:22.469952 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 23:58:22.580058 jq[1735]: true Jan 23 23:58:22.470098 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 23:58:22.481700 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 23:58:22.481847 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 23:58:22.497440 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 23:58:22.519721 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 23:58:22.519898 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 23:58:22.544737 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 23:58:22.571049 (ntainerd)[1736]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 23:58:22.585240 systemd-logind[1714]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 23:58:22.585721 systemd-logind[1714]: New seat seat0. Jan 23 23:58:22.587469 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 23:58:22.601439 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 23:58:22.607139 tar[1731]: linux-arm64/LICENSE Jan 23 23:58:22.607139 tar[1731]: linux-arm64/helm Jan 23 23:58:22.601470 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 23:58:22.608734 dbus-daemon[1690]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 23:58:22.610761 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 23:58:22.610781 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 23:58:22.617179 systemd[1]: Started update-engine.service - Update Engine. Jan 23 23:58:22.633610 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
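"Old size kept for /dev/sda9" in the extend-filesystems output above means the step found nothing to grow: the root filesystem already spans its partition. A rough sketch of that kind of check on Linux (an illustration of the idea, not Flatcar's actual implementation; it assumes /dev/sda9 is the filesystem mounted at /, as on this image, and that the sysfs size file counts 512-byte sectors, which is standard):

    import os

    # Compare a partition's size with the size of the filesystem on it.
    # /sys/class/block/<dev>/size is in 512-byte sectors.
    def needs_growing(dev="sda9", mountpoint="/"):
        with open(f"/sys/class/block/{dev}/size") as f:
            part_bytes = int(f.read()) * 512
        st = os.statvfs(mountpoint)
        fs_bytes = st.f_blocks * st.f_frsize
        # Allow some slack for filesystem metadata; grow only on a clear gap.
        return part_bytes - fs_bytes > 64 * 1024 * 1024

    print("grow" if needs_growing() else "old size kept")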
Jan 23 23:58:22.678465 coreos-metadata[1689]: Jan 23 23:58:22.678 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 23 23:58:22.680533 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1742) Jan 23 23:58:22.682597 coreos-metadata[1689]: Jan 23 23:58:22.682 INFO Fetch successful Jan 23 23:58:22.682597 coreos-metadata[1689]: Jan 23 23:58:22.682 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 23 23:58:22.687707 coreos-metadata[1689]: Jan 23 23:58:22.687 INFO Fetch successful Jan 23 23:58:22.687707 coreos-metadata[1689]: Jan 23 23:58:22.687 INFO Fetching http://168.63.129.16/machine/81191515-c2cd-4a20-ba8f-dc1fbd9a6f0c/a01c83c3%2Dfeea%2D47e8%2Da5ac%2D9f6391e8748a.%5Fci%2D4081.3.6%2Dn%2D2a642b76b3?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 23 23:58:22.698276 coreos-metadata[1689]: Jan 23 23:58:22.698 INFO Fetch successful Jan 23 23:58:22.698276 coreos-metadata[1689]: Jan 23 23:58:22.698 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 23 23:58:22.711006 coreos-metadata[1689]: Jan 23 23:58:22.710 INFO Fetch successful Jan 23 23:58:22.734017 bash[1780]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:58:22.735555 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 23:58:22.770713 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 23:58:22.772536 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 23:58:22.787697 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 23:58:22.793242 locksmithd[1774]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 23:58:23.320921 containerd[1736]: time="2026-01-23T23:58:23.319335800Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 23 23:58:23.336375 tar[1731]: linux-arm64/README.md Jan 23 23:58:23.348693 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 23:58:23.373448 containerd[1736]: time="2026-01-23T23:58:23.373376160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:58:23.374723 containerd[1736]: time="2026-01-23T23:58:23.374683600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:58:23.374723 containerd[1736]: time="2026-01-23T23:58:23.374720120Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 23 23:58:23.374823 containerd[1736]: time="2026-01-23T23:58:23.374738440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 23 23:58:23.374900 containerd[1736]: time="2026-01-23T23:58:23.374883120Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 23 23:58:23.374930 containerd[1736]: time="2026-01-23T23:58:23.374901880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 23 23:58:23.374976 containerd[1736]: time="2026-01-23T23:58:23.374961480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:58:23.375008 containerd[1736]: time="2026-01-23T23:58:23.374975120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:58:23.375163 containerd[1736]: time="2026-01-23T23:58:23.375138080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:58:23.375192 containerd[1736]: time="2026-01-23T23:58:23.375161760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 23 23:58:23.375192 containerd[1736]: time="2026-01-23T23:58:23.375175160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:58:23.375192 containerd[1736]: time="2026-01-23T23:58:23.375184840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 23 23:58:23.375268 containerd[1736]: time="2026-01-23T23:58:23.375253520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:58:23.375466 containerd[1736]: time="2026-01-23T23:58:23.375449160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:58:23.375562 containerd[1736]: time="2026-01-23T23:58:23.375545000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:58:23.375586 containerd[1736]: time="2026-01-23T23:58:23.375560360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 23 23:58:23.375650 containerd[1736]: time="2026-01-23T23:58:23.375635040Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 23 23:58:23.375692 containerd[1736]: time="2026-01-23T23:58:23.375680400Z" level=info msg="metadata content store policy set" policy=shared Jan 23 23:58:23.389821 containerd[1736]: time="2026-01-23T23:58:23.389788920Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 23 23:58:23.389883 containerd[1736]: time="2026-01-23T23:58:23.389843280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 23 23:58:23.389883 containerd[1736]: time="2026-01-23T23:58:23.389863880Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 23 23:58:23.389883 containerd[1736]: time="2026-01-23T23:58:23.389879640Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 23 23:58:23.389953 containerd[1736]: time="2026-01-23T23:58:23.389928040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 23 23:58:23.390080 containerd[1736]: time="2026-01-23T23:58:23.390061240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 23 23:58:23.391188 containerd[1736]: time="2026-01-23T23:58:23.391166360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 23 23:58:23.391314 containerd[1736]: time="2026-01-23T23:58:23.391297560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 23 23:58:23.391339 containerd[1736]: time="2026-01-23T23:58:23.391318640Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 23 23:58:23.391339 containerd[1736]: time="2026-01-23T23:58:23.391332760Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 23 23:58:23.391396 containerd[1736]: time="2026-01-23T23:58:23.391346840Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 23 23:58:23.391396 containerd[1736]: time="2026-01-23T23:58:23.391360240Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 23 23:58:23.391396 containerd[1736]: time="2026-01-23T23:58:23.391372360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 23 23:58:23.391463 containerd[1736]: time="2026-01-23T23:58:23.391401240Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 23 23:58:23.391463 containerd[1736]: time="2026-01-23T23:58:23.391417480Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 23 23:58:23.391463 containerd[1736]: time="2026-01-23T23:58:23.391429680Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 23 23:58:23.391463 containerd[1736]: time="2026-01-23T23:58:23.391443120Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 23 23:58:23.391463 containerd[1736]: time="2026-01-23T23:58:23.391454640Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 23 23:58:23.391548 containerd[1736]: time="2026-01-23T23:58:23.391474800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 23 23:58:23.391548 containerd[1736]: time="2026-01-23T23:58:23.391494760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 23 23:58:23.391548 containerd[1736]: time="2026-01-23T23:58:23.391509520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 23 23:58:23.391548 containerd[1736]: time="2026-01-23T23:58:23.391521720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 23 23:58:23.391548 containerd[1736]: time="2026-01-23T23:58:23.391533360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 23 23:58:23.391548 containerd[1736]: time="2026-01-23T23:58:23.391545720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jan 23 23:58:23.391688 containerd[1736]: time="2026-01-23T23:58:23.391557000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 23 23:58:23.391688 containerd[1736]: time="2026-01-23T23:58:23.391568920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 23 23:58:23.391688 containerd[1736]: time="2026-01-23T23:58:23.391580360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 23 23:58:23.391688 containerd[1736]: time="2026-01-23T23:58:23.391595600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 23 23:58:23.391688 containerd[1736]: time="2026-01-23T23:58:23.391606720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 23 23:58:23.391688 containerd[1736]: time="2026-01-23T23:58:23.391618320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 23 23:58:23.391688 containerd[1736]: time="2026-01-23T23:58:23.391630680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 23 23:58:23.391688 containerd[1736]: time="2026-01-23T23:58:23.391649840Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 23 23:58:23.391688 containerd[1736]: time="2026-01-23T23:58:23.391670680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 23 23:58:23.391688 containerd[1736]: time="2026-01-23T23:58:23.391681880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 23 23:58:23.391852 containerd[1736]: time="2026-01-23T23:58:23.391692160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 23 23:58:23.393397 containerd[1736]: time="2026-01-23T23:58:23.392216480Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 23 23:58:23.393397 containerd[1736]: time="2026-01-23T23:58:23.392248960Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 23 23:58:23.393397 containerd[1736]: time="2026-01-23T23:58:23.392323200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 23 23:58:23.393397 containerd[1736]: time="2026-01-23T23:58:23.392337280Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 23 23:58:23.393397 containerd[1736]: time="2026-01-23T23:58:23.392346480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 23 23:58:23.393397 containerd[1736]: time="2026-01-23T23:58:23.392359800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 23 23:58:23.393397 containerd[1736]: time="2026-01-23T23:58:23.392369400Z" level=info msg="NRI interface is disabled by configuration." Jan 23 23:58:23.393397 containerd[1736]: time="2026-01-23T23:58:23.392379360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 23 23:58:23.393554 containerd[1736]: time="2026-01-23T23:58:23.392696160Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 23 23:58:23.393554 containerd[1736]: time="2026-01-23T23:58:23.392754600Z" level=info msg="Connect containerd service" Jan 23 23:58:23.393554 containerd[1736]: time="2026-01-23T23:58:23.392778960Z" level=info msg="using legacy CRI server" Jan 23 23:58:23.393554 containerd[1736]: time="2026-01-23T23:58:23.392785200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 23:58:23.393554 containerd[1736]: time="2026-01-23T23:58:23.392860120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 23 23:58:23.395803 containerd[1736]: time="2026-01-23T23:58:23.395769120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 23:58:23.396060 
containerd[1736]: time="2026-01-23T23:58:23.396042680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 23:58:23.396093 containerd[1736]: time="2026-01-23T23:58:23.396084080Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 23:58:23.396369 containerd[1736]: time="2026-01-23T23:58:23.396341560Z" level=info msg="Start subscribing containerd event" Jan 23 23:58:23.396411 containerd[1736]: time="2026-01-23T23:58:23.396386480Z" level=info msg="Start recovering state" Jan 23 23:58:23.396476 containerd[1736]: time="2026-01-23T23:58:23.396461520Z" level=info msg="Start event monitor" Jan 23 23:58:23.396512 containerd[1736]: time="2026-01-23T23:58:23.396475960Z" level=info msg="Start snapshots syncer" Jan 23 23:58:23.396512 containerd[1736]: time="2026-01-23T23:58:23.396488120Z" level=info msg="Start cni network conf syncer for default" Jan 23 23:58:23.396512 containerd[1736]: time="2026-01-23T23:58:23.396499120Z" level=info msg="Start streaming server" Jan 23 23:58:23.396632 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 23:58:23.402141 containerd[1736]: time="2026-01-23T23:58:23.402108200Z" level=info msg="containerd successfully booted in 0.085807s" Jan 23 23:58:23.577553 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:58:23.586498 (kubelet)[1830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:58:23.815548 sshd_keygen[1720]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 23:58:23.839461 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 23:58:23.849703 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 23:58:23.857469 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 23 23:58:23.864851 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 23:58:23.865477 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 23:58:23.875626 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 23:58:23.889963 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 23:58:23.897564 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 23:58:23.910773 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 23 23:58:23.916271 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 23:58:23.926636 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 23 23:58:23.933372 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 23:58:23.940178 systemd[1]: Startup finished in 594ms (kernel) + 11.581s (initrd) + 11.401s (userspace) = 23.577s. Jan 23 23:58:24.042011 kubelet[1830]: E0123 23:58:24.041960 1830 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:58:24.044641 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:58:24.044781 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
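The kubelet exit above is the normal state of a node that has not yet been joined to a cluster: the unit starts at boot, but /var/lib/kubelet/config.yaml only appears later (on kubeadm-style setups it is written by kubeadm init/join), so the service fails and systemd keeps retrying. The earlier containerd warning about /etc/cni/net.d is the same pattern for pod networking. A small preflight sketch checking both conditions; the paths come from the log, the helper name is made up:

    import glob
    import os

    # Hypothetical preflight: report why kubelet/containerd are not ready yet.
    def node_preflight():
        problems = []
        if not os.path.exists("/var/lib/kubelet/config.yaml"):
            problems.append("kubelet config missing (written by kubeadm init/join)")
        # containerd's CRI plugin looks for CNI configs here;
        # any .conf or .conflist file clears the warning.
        if not (glob.glob("/etc/cni/net.d/*.conf") +
                glob.glob("/etc/cni/net.d/*.conflist")):
            problems.append("no CNI network config in /etc/cni/net.d")
        return problems

    for p in node_preflight():
        print("not ready:", p)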
Jan 23 23:58:24.272324 login[1855]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 23 23:58:24.272821 login[1854]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:24.279876 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 23:58:24.286678 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 23:58:24.288595 systemd-logind[1714]: New session 1 of user core. Jan 23 23:58:24.310859 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 23:58:24.315637 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 23:58:24.333796 (systemd)[1867]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 23:58:24.479915 systemd[1867]: Queued start job for default target default.target. Jan 23 23:58:24.485245 systemd[1867]: Created slice app.slice - User Application Slice. Jan 23 23:58:24.485271 systemd[1867]: Reached target paths.target - Paths. Jan 23 23:58:24.485283 systemd[1867]: Reached target timers.target - Timers. Jan 23 23:58:24.486398 systemd[1867]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 23:58:24.495511 systemd[1867]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 23:58:24.495559 systemd[1867]: Reached target sockets.target - Sockets. Jan 23 23:58:24.495570 systemd[1867]: Reached target basic.target - Basic System. Jan 23 23:58:24.495610 systemd[1867]: Reached target default.target - Main User Target. Jan 23 23:58:24.495633 systemd[1867]: Startup finished in 156ms. Jan 23 23:58:24.495881 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 23:58:24.499564 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 23:58:25.273816 login[1855]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:25.278117 systemd-logind[1714]: New session 2 of user core. Jan 23 23:58:25.284505 systemd[1]: Started session-2.scope - Session 2 of User core. 
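The two logins above (the tty1 and serial getty autologins) race on /var/log/lastlog: one session takes the write lock, the other logs "locked/write, retrying" and succeeds about a second later (23:58:24 vs 23:58:25). The same retry-on-lock pattern in miniature, with an advisory flock (illustrative only, not PAM's actual code):

    import fcntl
    import time

    # Try to take an exclusive lock on a file, retrying briefly while it is
    # held by another process (e.g. a concurrent login), as pam_lastlog does above.
    def lock_with_retry(path, attempts=5, delay=1.0):
        f = open(path, "a")
        for _ in range(attempts):
            try:
                fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
                return f
            except BlockingIOError:
                time.sleep(delay)  # locked by someone else; retry
        raise TimeoutError(f"{path} is locked")

    lock_with_retry("/tmp/lastlog.demo").close()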
Jan 23 23:58:25.646263 waagent[1857]: 2026-01-23T23:58:25.646180Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 23 23:58:25.650808 waagent[1857]: 2026-01-23T23:58:25.650758Z INFO Daemon Daemon OS: flatcar 4081.3.6 Jan 23 23:58:25.654341 waagent[1857]: 2026-01-23T23:58:25.654303Z INFO Daemon Daemon Python: 3.11.9 Jan 23 23:58:25.657762 waagent[1857]: 2026-01-23T23:58:25.657718Z INFO Daemon Daemon Run daemon Jan 23 23:58:25.660905 waagent[1857]: 2026-01-23T23:58:25.660866Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Jan 23 23:58:25.668003 waagent[1857]: 2026-01-23T23:58:25.667952Z INFO Daemon Daemon Using waagent for provisioning Jan 23 23:58:25.672162 waagent[1857]: 2026-01-23T23:58:25.672125Z INFO Daemon Daemon Activate resource disk Jan 23 23:58:25.675800 waagent[1857]: 2026-01-23T23:58:25.675760Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 23 23:58:25.685431 waagent[1857]: 2026-01-23T23:58:25.685371Z INFO Daemon Daemon Found device: None Jan 23 23:58:25.689074 waagent[1857]: 2026-01-23T23:58:25.689035Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 23 23:58:25.695565 waagent[1857]: 2026-01-23T23:58:25.695521Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 23 23:58:25.706002 waagent[1857]: 2026-01-23T23:58:25.705953Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 23:58:25.710564 waagent[1857]: 2026-01-23T23:58:25.710526Z INFO Daemon Daemon Running default provisioning handler Jan 23 23:58:25.721373 waagent[1857]: 2026-01-23T23:58:25.721322Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 23 23:58:25.732598 waagent[1857]: 2026-01-23T23:58:25.732550Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 23 23:58:25.740163 waagent[1857]: 2026-01-23T23:58:25.740126Z INFO Daemon Daemon cloud-init is enabled: False Jan 23 23:58:25.744019 waagent[1857]: 2026-01-23T23:58:25.743987Z INFO Daemon Daemon Copying ovf-env.xml Jan 23 23:58:25.875717 waagent[1857]: 2026-01-23T23:58:25.875622Z INFO Daemon Daemon Successfully mounted dvd Jan 23 23:58:25.906040 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 23 23:58:25.909452 waagent[1857]: 2026-01-23T23:58:25.908270Z INFO Daemon Daemon Detect protocol endpoint Jan 23 23:58:25.912437 waagent[1857]: 2026-01-23T23:58:25.912394Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 23:58:25.916871 waagent[1857]: 2026-01-23T23:58:25.916835Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 23 23:58:25.922224 waagent[1857]: 2026-01-23T23:58:25.922190Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 23 23:58:25.926381 waagent[1857]: 2026-01-23T23:58:25.926339Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 23 23:58:25.930463 waagent[1857]: 2026-01-23T23:58:25.930421Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 23 23:58:25.984950 waagent[1857]: 2026-01-23T23:58:25.984906Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 23 23:58:25.990235 waagent[1857]: 2026-01-23T23:58:25.990211Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 23 23:58:25.994376 waagent[1857]: 2026-01-23T23:58:25.994338Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 23 23:58:26.199133 waagent[1857]: 2026-01-23T23:58:26.198986Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 23 23:58:26.204267 waagent[1857]: 2026-01-23T23:58:26.204223Z INFO Daemon Daemon Forcing an update of the goal state. Jan 23 23:58:26.212091 waagent[1857]: 2026-01-23T23:58:26.212048Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 23:58:26.231302 waagent[1857]: 2026-01-23T23:58:26.231263Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 23 23:58:26.235995 waagent[1857]: 2026-01-23T23:58:26.235956Z INFO Daemon Jan 23 23:58:26.238270 waagent[1857]: 2026-01-23T23:58:26.238227Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 3ec86a3e-88f9-4410-9ec4-db17324a3757 eTag: 5199866962538176 source: Fabric] Jan 23 23:58:26.247024 waagent[1857]: 2026-01-23T23:58:26.246986Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 23 23:58:26.252464 waagent[1857]: 2026-01-23T23:58:26.252424Z INFO Daemon Jan 23 23:58:26.254630 waagent[1857]: 2026-01-23T23:58:26.254597Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 23 23:58:26.263485 waagent[1857]: 2026-01-23T23:58:26.263453Z INFO Daemon Daemon Downloading artifacts profile blob Jan 23 23:58:26.403436 waagent[1857]: 2026-01-23T23:58:26.402902Z INFO Daemon Downloaded certificate {'thumbprint': '0F7584AD90050436922E9E1CCE76AC317F443CF2', 'hasPrivateKey': True} Jan 23 23:58:26.411285 waagent[1857]: 2026-01-23T23:58:26.411242Z INFO Daemon Fetch goal state completed Jan 23 23:58:26.452105 waagent[1857]: 2026-01-23T23:58:26.452026Z INFO Daemon Daemon Starting provisioning Jan 23 23:58:26.456087 waagent[1857]: 2026-01-23T23:58:26.456042Z INFO Daemon Daemon Handle ovf-env.xml. Jan 23 23:58:26.459786 waagent[1857]: 2026-01-23T23:58:26.459745Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-2a642b76b3] Jan 23 23:58:26.484421 waagent[1857]: 2026-01-23T23:58:26.483592Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-2a642b76b3] Jan 23 23:58:26.488856 waagent[1857]: 2026-01-23T23:58:26.488807Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 23 23:58:26.493780 waagent[1857]: 2026-01-23T23:58:26.493740Z INFO Daemon Daemon Primary interface is [eth0] Jan 23 23:58:26.537962 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:58:26.537968 systemd-networkd[1363]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
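The agent above talks to the two well-known Azure guest endpoints: the WireServer at 168.63.129.16 (goal state and provisioning; note it advertises protocol version 2015-04-05 but the agent settles on 2012-11-30) and the Instance Metadata Service at 169.254.169.254, which the metadata fetcher queried earlier for vmSize. IMDS requires a "Metadata: true" header and must be reached without a proxy. A minimal sketch of that IMDS call, using the exact URL from the log:

    import urllib.request

    # Query the Azure Instance Metadata Service for the VM size.
    # The Metadata header is mandatory; the request must not go via a proxy.
    url = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")
    req = urllib.request.Request(url, headers={"Metadata": "true"})
    opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))  # no proxy
    print(opener.open(req, timeout=5).read().decode())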
Jan 23 23:58:26.538010 systemd-networkd[1363]: eth0: DHCP lease lost Jan 23 23:58:26.539027 waagent[1857]: 2026-01-23T23:58:26.538923Z INFO Daemon Daemon Create user account if not exists Jan 23 23:58:26.543438 systemd-networkd[1363]: eth0: DHCPv6 lease lost Jan 23 23:58:26.547786 waagent[1857]: 2026-01-23T23:58:26.543487Z INFO Daemon Daemon User core already exists, skip useradd Jan 23 23:58:26.547941 waagent[1857]: 2026-01-23T23:58:26.547860Z INFO Daemon Daemon Configure sudoer Jan 23 23:58:26.551645 waagent[1857]: 2026-01-23T23:58:26.551600Z INFO Daemon Daemon Configure sshd Jan 23 23:58:26.555088 waagent[1857]: 2026-01-23T23:58:26.555040Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 23 23:58:26.564651 waagent[1857]: 2026-01-23T23:58:26.564613Z INFO Daemon Daemon Deploy ssh public key. Jan 23 23:58:26.577505 systemd-networkd[1363]: eth0: DHCPv4 address 10.200.20.20/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 23:58:27.678631 waagent[1857]: 2026-01-23T23:58:27.678583Z INFO Daemon Daemon Provisioning complete Jan 23 23:58:27.694775 waagent[1857]: 2026-01-23T23:58:27.694730Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 23 23:58:27.699531 waagent[1857]: 2026-01-23T23:58:27.699488Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 23 23:58:27.706919 waagent[1857]: 2026-01-23T23:58:27.706885Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 23 23:58:27.832028 waagent[1917]: 2026-01-23T23:58:27.831376Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 23 23:58:27.832028 waagent[1917]: 2026-01-23T23:58:27.831536Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Jan 23 23:58:27.832028 waagent[1917]: 2026-01-23T23:58:27.831590Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 23 23:58:28.290422 waagent[1917]: 2026-01-23T23:58:28.289949Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 23 23:58:28.290422 waagent[1917]: 2026-01-23T23:58:28.290197Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 23:58:28.290422 waagent[1917]: 2026-01-23T23:58:28.290261Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 23:58:28.298004 waagent[1917]: 2026-01-23T23:58:28.297948Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 23:58:28.303112 waagent[1917]: 2026-01-23T23:58:28.303076Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 23 23:58:28.303572 waagent[1917]: 2026-01-23T23:58:28.303531Z INFO ExtHandler Jan 23 23:58:28.303642 waagent[1917]: 2026-01-23T23:58:28.303616Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 68f73190-11e7-4f05-a8d3-5ccef11731dd eTag: 5199866962538176 source: Fabric] Jan 23 23:58:28.303937 waagent[1917]: 2026-01-23T23:58:28.303899Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 23 23:58:28.322014 waagent[1917]: 2026-01-23T23:58:28.321835Z INFO ExtHandler Jan 23 23:58:28.322108 waagent[1917]: 2026-01-23T23:58:28.322017Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 23 23:58:28.326158 waagent[1917]: 2026-01-23T23:58:28.326122Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 23:58:28.434901 waagent[1917]: 2026-01-23T23:58:28.433482Z INFO ExtHandler Downloaded certificate {'thumbprint': '0F7584AD90050436922E9E1CCE76AC317F443CF2', 'hasPrivateKey': True} Jan 23 23:58:28.434901 waagent[1917]: 2026-01-23T23:58:28.434026Z INFO ExtHandler Fetch goal state completed Jan 23 23:58:28.449413 waagent[1917]: 2026-01-23T23:58:28.448442Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1917 Jan 23 23:58:28.449413 waagent[1917]: 2026-01-23T23:58:28.448591Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 23 23:58:28.450331 waagent[1917]: 2026-01-23T23:58:28.450290Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Jan 23 23:58:28.450776 waagent[1917]: 2026-01-23T23:58:28.450738Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 23 23:58:28.508137 waagent[1917]: 2026-01-23T23:58:28.508101Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 23 23:58:28.508463 waagent[1917]: 2026-01-23T23:58:28.508420Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 23 23:58:28.514892 waagent[1917]: 2026-01-23T23:58:28.514862Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 23 23:58:28.520784 systemd[1]: Reloading requested from client PID 1932 ('systemctl') (unit waagent.service)... Jan 23 23:58:28.520798 systemd[1]: Reloading... Jan 23 23:58:28.597435 zram_generator::config[1966]: No configuration found. Jan 23 23:58:28.689285 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:58:28.763288 systemd[1]: Reloading finished in 242 ms. Jan 23 23:58:28.790451 waagent[1917]: 2026-01-23T23:58:28.789402Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 23 23:58:28.794908 systemd[1]: Reloading requested from client PID 2020 ('systemctl') (unit waagent.service)... Jan 23 23:58:28.794920 systemd[1]: Reloading... Jan 23 23:58:28.869605 zram_generator::config[2063]: No configuration found. Jan 23 23:58:28.959958 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:58:29.034287 systemd[1]: Reloading finished in 239 ms. Jan 23 23:58:29.053434 waagent[1917]: 2026-01-23T23:58:29.052663Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 23 23:58:29.053434 waagent[1917]: 2026-01-23T23:58:29.052825Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 23 23:58:29.416417 waagent[1917]: 2026-01-23T23:58:29.415835Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Jan 23 23:58:29.416516 waagent[1917]: 2026-01-23T23:58:29.416435Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 23 23:58:29.417224 waagent[1917]: 2026-01-23T23:58:29.417150Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 23 23:58:29.417657 waagent[1917]: 2026-01-23T23:58:29.417512Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 23 23:58:29.417950 waagent[1917]: 2026-01-23T23:58:29.417907Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 23:58:29.418835 waagent[1917]: 2026-01-23T23:58:29.418036Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 23:58:29.418835 waagent[1917]: 2026-01-23T23:58:29.418119Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 23:58:29.418835 waagent[1917]: 2026-01-23T23:58:29.418255Z INFO EnvHandler ExtHandler Configure routes Jan 23 23:58:29.418835 waagent[1917]: 2026-01-23T23:58:29.418312Z INFO EnvHandler ExtHandler Gateway:None Jan 23 23:58:29.418835 waagent[1917]: 2026-01-23T23:58:29.418352Z INFO EnvHandler ExtHandler Routes:None Jan 23 23:58:29.419115 waagent[1917]: 2026-01-23T23:58:29.419070Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 23:58:29.419410 waagent[1917]: 2026-01-23T23:58:29.419356Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 23 23:58:29.419694 waagent[1917]: 2026-01-23T23:58:29.419648Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 23 23:58:29.419694 waagent[1917]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 23 23:58:29.419694 waagent[1917]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 23 23:58:29.419694 waagent[1917]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 23 23:58:29.419694 waagent[1917]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 23 23:58:29.419694 waagent[1917]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 23:58:29.419694 waagent[1917]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 23:58:29.420536 waagent[1917]: 2026-01-23T23:58:29.420480Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 23 23:58:29.420742 waagent[1917]: 2026-01-23T23:58:29.420704Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 23 23:58:29.421075 waagent[1917]: 2026-01-23T23:58:29.421031Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 23 23:58:29.421196 waagent[1917]: 2026-01-23T23:58:29.421159Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
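The routing table the agent dumps above comes straight from /proc/net/route, where destinations, gateways and masks are little-endian hex. Decoded, the first row is the default route via 10.200.20.1, and the 10813FA8 row is a host route to 168.63.129.16, the WireServer. A short decoder:

    import socket
    import struct

    def hex_ip(h):
        # /proc/net/route stores IPv4 addresses as little-endian hex,
        # so 0114C80A decodes to 10.200.20.1.
        return socket.inet_ntoa(struct.pack("<L", int(h, 16)))

    with open("/proc/net/route") as f:
        next(f)  # skip the header row
        for line in f:
            iface, dest, gateway, *_ = line.split()
            print(iface, hex_ip(dest), "via", hex_ip(gateway))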
Jan 23 23:58:29.421750 waagent[1917]: 2026-01-23T23:58:29.421711Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 23 23:58:29.430268 waagent[1917]: 2026-01-23T23:58:29.430224Z INFO ExtHandler ExtHandler Jan 23 23:58:29.430654 waagent[1917]: 2026-01-23T23:58:29.430616Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 7dad70c2-4188-468d-990e-e1f11ed56f6a correlation ee4d4028-1063-4040-9988-23590b4ea227 created: 2026-01-23T23:57:32.119064Z] Jan 23 23:58:29.431073 waagent[1917]: 2026-01-23T23:58:29.431034Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 23 23:58:29.431747 waagent[1917]: 2026-01-23T23:58:29.431708Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 23 23:58:29.463228 waagent[1917]: 2026-01-23T23:58:29.463170Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: C49D361A-8F4D-467B-B3D0-70E4993D1B48;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 23 23:58:29.493581 waagent[1917]: 2026-01-23T23:58:29.493127Z INFO MonitorHandler ExtHandler Network interfaces: Jan 23 23:58:29.493581 waagent[1917]: Executing ['ip', '-a', '-o', 'link']: Jan 23 23:58:29.493581 waagent[1917]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 23 23:58:29.493581 waagent[1917]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:be:51:da brd ff:ff:ff:ff:ff:ff Jan 23 23:58:29.493581 waagent[1917]: 3: enP23541s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:be:51:da brd ff:ff:ff:ff:ff:ff\ altname enP23541p0s2 Jan 23 23:58:29.493581 waagent[1917]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 23 23:58:29.493581 waagent[1917]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 23 23:58:29.493581 waagent[1917]: 2: eth0 inet 10.200.20.20/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 23 23:58:29.493581 waagent[1917]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 23 23:58:29.493581 waagent[1917]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 23 23:58:29.493581 waagent[1917]: 2: eth0 inet6 fe80::222:48ff:febe:51da/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 23 23:58:29.649948 waagent[1917]: 2026-01-23T23:58:29.649881Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 23 23:58:29.649948 waagent[1917]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 23:58:29.649948 waagent[1917]: pkts bytes target prot opt in out source destination Jan 23 23:58:29.649948 waagent[1917]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 23:58:29.649948 waagent[1917]: pkts bytes target prot opt in out source destination Jan 23 23:58:29.649948 waagent[1917]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 23:58:29.649948 waagent[1917]: pkts bytes target prot opt in out source destination Jan 23 23:58:29.649948 waagent[1917]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 23:58:29.649948 waagent[1917]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 23:58:29.649948 waagent[1917]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 23:58:29.652729 waagent[1917]: 2026-01-23T23:58:29.652676Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 23 23:58:29.652729 waagent[1917]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 23:58:29.652729 waagent[1917]: pkts bytes target prot opt in out source destination Jan 23 23:58:29.652729 waagent[1917]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 23:58:29.652729 waagent[1917]: pkts bytes target prot opt in out source destination Jan 23 23:58:29.652729 waagent[1917]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 23:58:29.652729 waagent[1917]: pkts bytes target prot opt in out source destination Jan 23 23:58:29.652729 waagent[1917]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 23:58:29.652729 waagent[1917]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 23:58:29.652729 waagent[1917]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 23:58:29.652958 waagent[1917]: 2026-01-23T23:58:29.652925Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 23 23:58:34.081985 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 23:58:34.090544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:58:34.192838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:58:34.196268 (kubelet)[2147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:58:34.277502 kubelet[2147]: E0123 23:58:34.277461 2147 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:58:34.281050 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:58:34.281186 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:58:44.332060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 23:58:44.341627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:58:44.433053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
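The three OUTPUT rules the agent installed above are the standard Azure guest firewall for the WireServer: allow TCP to 168.63.129.16 port 53, allow root-owned (UID 0) traffic so the agent itself can reach the endpoint, and drop new connections from everything else. A sketch of equivalent iptables invocations (illustrative, and it needs root; waagent manages these rules itself):

    import subprocess

    WIRESERVER = "168.63.129.16"
    rules = [
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "--dport", "53", "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for rule in rules:
        # -w makes iptables wait for the xtables lock instead of failing.
        subprocess.run(["iptables", "-w"] + rule, check=True)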
Jan 23 23:58:44.436319 (kubelet)[2162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:58:44.544115 kubelet[2162]: E0123 23:58:44.544050 2162 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:58:44.546994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:58:44.547123 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:58:46.159742 chronyd[1699]: Selected source PHC0 Jan 23 23:58:48.269875 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 23:58:48.271324 systemd[1]: Started sshd@0-10.200.20.20:22-10.200.16.10:52940.service - OpenSSH per-connection server daemon (10.200.16.10:52940). Jan 23 23:58:48.760955 sshd[2169]: Accepted publickey for core from 10.200.16.10 port 52940 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:58:48.762228 sshd[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:48.765816 systemd-logind[1714]: New session 3 of user core. Jan 23 23:58:48.775567 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 23:58:49.176646 systemd[1]: Started sshd@1-10.200.20.20:22-10.200.16.10:52946.service - OpenSSH per-connection server daemon (10.200.16.10:52946). Jan 23 23:58:49.668823 sshd[2174]: Accepted publickey for core from 10.200.16.10 port 52946 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:58:49.670074 sshd[2174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:49.674574 systemd-logind[1714]: New session 4 of user core. Jan 23 23:58:49.680547 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 23:58:50.020464 sshd[2174]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:50.023459 systemd[1]: sshd@1-10.200.20.20:22-10.200.16.10:52946.service: Deactivated successfully. Jan 23 23:58:50.024920 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 23:58:50.026589 systemd-logind[1714]: Session 4 logged out. Waiting for processes to exit. Jan 23 23:58:50.027350 systemd-logind[1714]: Removed session 4. Jan 23 23:58:50.106081 systemd[1]: Started sshd@2-10.200.20.20:22-10.200.16.10:42340.service - OpenSSH per-connection server daemon (10.200.16.10:42340). Jan 23 23:58:50.554212 sshd[2181]: Accepted publickey for core from 10.200.16.10 port 42340 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:58:50.555499 sshd[2181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:50.558892 systemd-logind[1714]: New session 5 of user core. Jan 23 23:58:50.568585 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 23:58:50.883473 sshd[2181]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:50.886592 systemd-logind[1714]: Session 5 logged out. Waiting for processes to exit. Jan 23 23:58:50.887520 systemd[1]: sshd@2-10.200.20.20:22-10.200.16.10:42340.service: Deactivated successfully. Jan 23 23:58:50.888971 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 23:58:50.891005 systemd-logind[1714]: Removed session 5. 
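"Selected source PHC0" above means chronyd is now steering the clock from a PTP hardware clock rather than a network NTP source; on Hyper-V/Azure guests that is the host-backed PTP device. A quick sketch to identify such devices (the clock_name attribute is a standard sysfs file for the ptp class; the Hyper-V driver is expected to report "hyperv", though that name is an assumption here):

    import glob

    # List PTP hardware clocks and their driver-reported names.
    for path in sorted(glob.glob("/sys/class/ptp/ptp*/clock_name")):
        with open(path) as f:
            print(path.split("/")[4], f.read().strip())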
Jan 23 23:58:50.963545 systemd[1]: Started sshd@3-10.200.20.20:22-10.200.16.10:42344.service - OpenSSH per-connection server daemon (10.200.16.10:42344). Jan 23 23:58:51.408575 sshd[2188]: Accepted publickey for core from 10.200.16.10 port 42344 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:58:51.409827 sshd[2188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:51.413374 systemd-logind[1714]: New session 6 of user core. Jan 23 23:58:51.421528 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 23:58:51.738765 sshd[2188]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:51.741260 systemd-logind[1714]: Session 6 logged out. Waiting for processes to exit. Jan 23 23:58:51.741849 systemd[1]: sshd@3-10.200.20.20:22-10.200.16.10:42344.service: Deactivated successfully. Jan 23 23:58:51.743470 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 23:58:51.744888 systemd-logind[1714]: Removed session 6. Jan 23 23:58:51.815308 systemd[1]: Started sshd@4-10.200.20.20:22-10.200.16.10:42360.service - OpenSSH per-connection server daemon (10.200.16.10:42360). Jan 23 23:58:52.268329 sshd[2195]: Accepted publickey for core from 10.200.16.10 port 42360 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:58:52.269599 sshd[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:52.273159 systemd-logind[1714]: New session 7 of user core. Jan 23 23:58:52.284512 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 23:58:52.652792 sudo[2198]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 23:58:52.653051 sudo[2198]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:58:52.664033 sudo[2198]: pam_unix(sudo:session): session closed for user root Jan 23 23:58:52.735699 sshd[2195]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:52.739288 systemd[1]: sshd@4-10.200.20.20:22-10.200.16.10:42360.service: Deactivated successfully. Jan 23 23:58:52.740933 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 23:58:52.741739 systemd-logind[1714]: Session 7 logged out. Waiting for processes to exit. Jan 23 23:58:52.742747 systemd-logind[1714]: Removed session 7. Jan 23 23:58:52.829597 systemd[1]: Started sshd@5-10.200.20.20:22-10.200.16.10:42364.service - OpenSSH per-connection server daemon (10.200.16.10:42364). Jan 23 23:58:53.316590 sshd[2203]: Accepted publickey for core from 10.200.16.10 port 42364 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:58:53.317904 sshd[2203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:53.322402 systemd-logind[1714]: New session 8 of user core. Jan 23 23:58:53.328537 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 23 23:58:53.591107 sudo[2207]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 23:58:53.591675 sudo[2207]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:58:53.594650 sudo[2207]: pam_unix(sudo:session): session closed for user root Jan 23 23:58:53.598685 sudo[2206]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 23 23:58:53.598924 sudo[2206]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:58:53.610938 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 23 23:58:53.611752 auditctl[2210]: No rules Jan 23 23:58:53.612178 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 23:58:53.612432 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 23 23:58:53.614879 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:58:53.635069 augenrules[2228]: No rules Jan 23 23:58:53.636291 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:58:53.637545 sudo[2206]: pam_unix(sudo:session): session closed for user root Jan 23 23:58:53.715424 sshd[2203]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:53.718660 systemd[1]: sshd@5-10.200.20.20:22-10.200.16.10:42364.service: Deactivated successfully. Jan 23 23:58:53.720018 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 23:58:53.720640 systemd-logind[1714]: Session 8 logged out. Waiting for processes to exit. Jan 23 23:58:53.721518 systemd-logind[1714]: Removed session 8. Jan 23 23:58:53.796477 systemd[1]: Started sshd@6-10.200.20.20:22-10.200.16.10:42368.service - OpenSSH per-connection server daemon (10.200.16.10:42368). Jan 23 23:58:54.248093 sshd[2236]: Accepted publickey for core from 10.200.16.10 port 42368 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:58:54.249336 sshd[2236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:54.252745 systemd-logind[1714]: New session 9 of user core. Jan 23 23:58:54.259732 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 23:58:54.505660 sudo[2239]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 23:58:54.505910 sudo[2239]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:58:54.581887 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 23:58:54.590622 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:58:54.689580 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:58:54.692915 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:58:54.724039 kubelet[2252]: E0123 23:58:54.723990 2252 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:58:54.726298 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:58:54.726431 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
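The sudo sequence above empties the audit ruleset: the two files under /etc/audit/rules.d/ are removed, audit-rules is restarted, and both auditctl and augenrules report "No rules". A quick way to confirm the kernel-side state matches, assuming auditctl is installed and the script runs with sufficient privilege:

```python
import subprocess

# Queries the live kernel audit ruleset; given the log above, the expected
# output is the literal string "No rules".
result = subprocess.run(["auditctl", "-l"], capture_output=True, text=True, check=True)
print(result.stdout.strip())
```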
Jan 23 23:58:55.916610 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 23:58:55.916745 (dockerd)[2268]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 23:58:56.508824 dockerd[2268]: time="2026-01-23T23:58:56.508773372Z" level=info msg="Starting up" Jan 23 23:58:56.861597 dockerd[2268]: time="2026-01-23T23:58:56.861558856Z" level=info msg="Loading containers: start." Jan 23 23:58:56.998455 kernel: Initializing XFRM netlink socket Jan 23 23:58:57.493065 systemd-networkd[1363]: docker0: Link UP Jan 23 23:58:57.520410 dockerd[2268]: time="2026-01-23T23:58:57.520367693Z" level=info msg="Loading containers: done." Jan 23 23:58:57.545782 dockerd[2268]: time="2026-01-23T23:58:57.545740499Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 23:58:57.545923 dockerd[2268]: time="2026-01-23T23:58:57.545840619Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 23 23:58:57.545962 dockerd[2268]: time="2026-01-23T23:58:57.545944019Z" level=info msg="Daemon has completed initialization" Jan 23 23:58:57.602165 dockerd[2268]: time="2026-01-23T23:58:57.602104872Z" level=info msg="API listen on /run/docker.sock" Jan 23 23:58:57.603410 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 23:58:58.497288 containerd[1736]: time="2026-01-23T23:58:58.497250366Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 23 23:58:59.356252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount501304008.mount: Deactivated successfully. 
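dockerd's own timestamps bracket its initialization, so the startup cost can be read straight off the lines above; a quick diff (fractions truncated to microseconds, since Python's datetime carries no finer resolution):

```python
from datetime import datetime

# Stamps copied from the dockerd lines above; [:26] trims the 9-digit
# fractional seconds to the 6 digits datetime.fromisoformat accepts.
start = datetime.fromisoformat("2026-01-23T23:58:56.508773372"[:26])
ready = datetime.fromisoformat("2026-01-23T23:58:57.545944019"[:26])
print(ready - start)  # ~1.04s, most of it spent in "Loading containers"
```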
Jan 23 23:59:00.555421 containerd[1736]: time="2026-01-23T23:59:00.554588297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:00.558441 containerd[1736]: time="2026-01-23T23:59:00.558414258Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=27387281" Jan 23 23:59:00.561801 containerd[1736]: time="2026-01-23T23:59:00.561774979Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:00.567133 containerd[1736]: time="2026-01-23T23:59:00.567086700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:00.568722 containerd[1736]: time="2026-01-23T23:59:00.568223100Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 2.070934014s" Jan 23 23:59:00.568722 containerd[1736]: time="2026-01-23T23:59:00.568257460Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Jan 23 23:59:00.569652 containerd[1736]: time="2026-01-23T23:59:00.569619021Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 23 23:59:01.732436 containerd[1736]: time="2026-01-23T23:59:01.732001738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:01.737410 containerd[1736]: time="2026-01-23T23:59:01.737172299Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23553081" Jan 23 23:59:01.740639 containerd[1736]: time="2026-01-23T23:59:01.740591900Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:01.748204 containerd[1736]: time="2026-01-23T23:59:01.748142262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:01.749873 containerd[1736]: time="2026-01-23T23:59:01.749533582Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.179585361s" Jan 23 23:59:01.749873 containerd[1736]: time="2026-01-23T23:59:01.749571622Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Jan 23 23:59:01.750589 
containerd[1736]: time="2026-01-23T23:59:01.750472463Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 23 23:59:02.824428 containerd[1736]: time="2026-01-23T23:59:02.823941872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:02.827576 containerd[1736]: time="2026-01-23T23:59:02.827537996Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18298067" Jan 23 23:59:02.830813 containerd[1736]: time="2026-01-23T23:59:02.830772240Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:02.835245 containerd[1736]: time="2026-01-23T23:59:02.835202725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:02.836367 containerd[1736]: time="2026-01-23T23:59:02.836259486Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.083153303s" Jan 23 23:59:02.836367 containerd[1736]: time="2026-01-23T23:59:02.836292326Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Jan 23 23:59:02.837472 containerd[1736]: time="2026-01-23T23:59:02.837448967Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 23 23:59:03.776747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount659158806.mount: Deactivated successfully. 
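containerd reports each pull's wall time inline ("in 2.070934014s" for kube-apiserver earlier above), and that figure can be cross-checked against the gap between the PullImage request and the Pulled event. For kube-apiserver the two agree to within about 40µs:

```python
from datetime import datetime

# Both stamps copied from the containerd kube-apiserver lines above,
# truncated to microseconds.
requested = datetime.fromisoformat("2026-01-23T23:58:58.497250366"[:26])
pulled    = datetime.fromisoformat("2026-01-23T23:59:00.568223100"[:26])
print(f"{(pulled - requested).total_seconds():.6f}s vs. reported 2.070934014s")
```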
Jan 23 23:59:04.117730 containerd[1736]: time="2026-01-23T23:59:04.117688451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:04.120442 containerd[1736]: time="2026-01-23T23:59:04.120417095Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258673" Jan 23 23:59:04.122984 containerd[1736]: time="2026-01-23T23:59:04.122960578Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:04.126572 containerd[1736]: time="2026-01-23T23:59:04.126526862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:04.127329 containerd[1736]: time="2026-01-23T23:59:04.127001982Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.289523135s" Jan 23 23:59:04.127329 containerd[1736]: time="2026-01-23T23:59:04.127033502Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Jan 23 23:59:04.127824 containerd[1736]: time="2026-01-23T23:59:04.127658343Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 23 23:59:04.828938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 23:59:04.835565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:59:04.866265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3049449337.mount: Deactivated successfully. Jan 23 23:59:04.942095 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:59:04.945363 (kubelet)[2490]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:59:05.028101 kubelet[2490]: E0123 23:59:05.028049 2490 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:59:05.031002 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:59:05.031154 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:59:05.596217 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Jan 23 23:59:06.916092 containerd[1736]: time="2026-01-23T23:59:06.916049415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:06.918718 containerd[1736]: time="2026-01-23T23:59:06.918676818Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Jan 23 23:59:06.921733 containerd[1736]: time="2026-01-23T23:59:06.921694262Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:06.927748 containerd[1736]: time="2026-01-23T23:59:06.927718749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:06.929269 containerd[1736]: time="2026-01-23T23:59:06.928446070Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.800753847s" Jan 23 23:59:06.929269 containerd[1736]: time="2026-01-23T23:59:06.928474070Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jan 23 23:59:06.929694 containerd[1736]: time="2026-01-23T23:59:06.929676391Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 23:59:07.372422 update_engine[1717]: I20260123 23:59:07.372082 1717 update_attempter.cc:509] Updating boot flags... Jan 23 23:59:07.413418 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2557) Jan 23 23:59:07.485977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1017184639.mount: Deactivated successfully. 
Jan 23 23:59:07.499621 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2556) Jan 23 23:59:07.512623 containerd[1736]: time="2026-01-23T23:59:07.512581267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:07.516778 containerd[1736]: time="2026-01-23T23:59:07.516740031Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 23 23:59:07.518956 containerd[1736]: time="2026-01-23T23:59:07.518905834Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:07.525796 containerd[1736]: time="2026-01-23T23:59:07.525746162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:07.527070 containerd[1736]: time="2026-01-23T23:59:07.527041803Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 597.280652ms" Jan 23 23:59:07.527185 containerd[1736]: time="2026-01-23T23:59:07.527168524Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 23 23:59:07.530048 containerd[1736]: time="2026-01-23T23:59:07.530017607Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 23 23:59:08.272461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1974594566.mount: Deactivated successfully. 
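Pairing each image's "bytes read" figure with its reported pull time gives a rough effective throughput for the pulls completed so far (the etcd pull is still in flight at this point). Figures are copied from the containerd lines above; this ignores layer caching and decompression overhead:

```python
# (bytes read, reported seconds) pairs from the containerd log lines above.
pulls = {
    "kube-apiserver:v1.33.7":          (27_387_281, 2.070934014),
    "kube-controller-manager:v1.33.7": (23_553_081, 1.179585361),
    "kube-scheduler:v1.33.7":          (18_298_067, 1.083153303),
    "kube-proxy:v1.33.7":              (28_258_673, 1.289523135),
    "coredns/coredns:v1.12.0":         (19_152_117, 2.800753847),
    "pause:3.10":                      (   268_703, 0.597280652),
}
for image, (nbytes, secs) in pulls.items():
    print(f"{image:35s} {nbytes / secs / 1e6:6.1f} MB/s")
```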
Jan 23 23:59:11.538674 containerd[1736]: time="2026-01-23T23:59:11.538620847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:11.540911 containerd[1736]: time="2026-01-23T23:59:11.540883448Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013651" Jan 23 23:59:11.543929 containerd[1736]: time="2026-01-23T23:59:11.543885130Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:11.549733 containerd[1736]: time="2026-01-23T23:59:11.549692054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:11.551036 containerd[1736]: time="2026-01-23T23:59:11.550920335Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 4.020868288s" Jan 23 23:59:11.551036 containerd[1736]: time="2026-01-23T23:59:11.550952215Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jan 23 23:59:15.081934 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 23 23:59:15.089755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:59:15.188271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:59:15.190655 (kubelet)[2698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:59:15.296877 kubelet[2698]: E0123 23:59:15.294328 2698 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:59:15.297796 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:59:15.297924 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:59:16.848343 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:59:16.853590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:59:16.880534 systemd[1]: Reloading requested from client PID 2713 ('systemctl') (unit session-9.scope)... Jan 23 23:59:16.880551 systemd[1]: Reloading... Jan 23 23:59:16.974717 zram_generator::config[2750]: No configuration found. Jan 23 23:59:17.079932 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:59:17.156608 systemd[1]: Reloading finished in 275 ms. Jan 23 23:59:17.202716 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
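The three "Scheduled restart job" stamps for kubelet (counters 3, 4 and 5) are spaced almost exactly 10.25s apart, consistent with a restart delay on the order of 10 seconds plus scheduling overhead; the unit's actual RestartSec setting isn't shown in this log. Checking the cadence:

```python
from datetime import datetime

# "Scheduled restart job" timestamps for counters 3, 4 and 5, from the log.
stamps = ["23:58:54.581887", "23:59:04.828938", "23:59:15.081934"]
times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
for a, b in zip(times, times[1:]):
    print(f"{(b - a).total_seconds():.6f}s")  # ~10.25s between attempts
```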
Jan 23 23:59:17.205298 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:59:17.207727 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 23:59:17.207928 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:59:17.211590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:59:17.456542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:59:17.469883 (kubelet)[2822]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:59:17.499025 kubelet[2822]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:59:17.499025 kubelet[2822]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:59:17.499025 kubelet[2822]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:59:17.499025 kubelet[2822]: I0123 23:59:17.498528 2822 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:59:18.329759 kubelet[2822]: I0123 23:59:18.329724 2822 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 23:59:18.329759 kubelet[2822]: I0123 23:59:18.329750 2822 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:59:18.329997 kubelet[2822]: I0123 23:59:18.329978 2822 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 23:59:18.354953 kubelet[2822]: E0123 23:59:18.354913 2822 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 23:59:18.356655 kubelet[2822]: I0123 23:59:18.355561 2822 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:59:18.365743 kubelet[2822]: E0123 23:59:18.365718 2822 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:59:18.365850 kubelet[2822]: I0123 23:59:18.365837 2822 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 23 23:59:18.368615 kubelet[2822]: I0123 23:59:18.368600 2822 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 23:59:18.369880 kubelet[2822]: I0123 23:59:18.369850 2822 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:59:18.370108 kubelet[2822]: I0123 23:59:18.369960 2822 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-2a642b76b3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 23:59:18.370237 kubelet[2822]: I0123 23:59:18.370226 2822 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 23:59:18.370288 kubelet[2822]: I0123 23:59:18.370281 2822 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 23:59:18.370463 kubelet[2822]: I0123 23:59:18.370451 2822 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:59:18.373150 kubelet[2822]: I0123 23:59:18.373135 2822 kubelet.go:480] "Attempting to sync node with API server" Jan 23 23:59:18.373237 kubelet[2822]: I0123 23:59:18.373227 2822 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:59:18.373302 kubelet[2822]: I0123 23:59:18.373294 2822 kubelet.go:386] "Adding apiserver pod source" Jan 23 23:59:18.373364 kubelet[2822]: I0123 23:59:18.373354 2822 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:59:18.376127 kubelet[2822]: E0123 23:59:18.376088 2822 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-2a642b76b3&limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 23:59:18.376218 kubelet[2822]: I0123 23:59:18.376201 2822 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:59:18.376777 kubelet[2822]: I0123 23:59:18.376755 2822 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection 
featuregate is disabled" Jan 23 23:59:18.376830 kubelet[2822]: W0123 23:59:18.376811 2822 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 23:59:18.380270 kubelet[2822]: I0123 23:59:18.380249 2822 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:59:18.380341 kubelet[2822]: I0123 23:59:18.380287 2822 server.go:1289] "Started kubelet" Jan 23 23:59:18.382420 kubelet[2822]: E0123 23:59:18.381339 2822 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 23:59:18.382420 kubelet[2822]: I0123 23:59:18.381433 2822 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:59:18.382420 kubelet[2822]: I0123 23:59:18.382150 2822 server.go:317] "Adding debug handlers to kubelet server" Jan 23 23:59:18.382578 kubelet[2822]: I0123 23:59:18.382511 2822 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:59:18.382800 kubelet[2822]: I0123 23:59:18.382772 2822 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:59:18.384136 kubelet[2822]: I0123 23:59:18.384109 2822 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:59:18.387050 kubelet[2822]: E0123 23:59:18.385316 2822 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.20:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.20:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-2a642b76b3.188d81a14b583a28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-2a642b76b3,UID:ci-4081.3.6-n-2a642b76b3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-2a642b76b3,},FirstTimestamp:2026-01-23 23:59:18.380263976 +0000 UTC m=+0.907459958,LastTimestamp:2026-01-23 23:59:18.380263976 +0000 UTC m=+0.907459958,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-2a642b76b3,}" Jan 23 23:59:18.387753 kubelet[2822]: I0123 23:59:18.387735 2822 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:59:18.389383 kubelet[2822]: I0123 23:59:18.389352 2822 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:59:18.389697 kubelet[2822]: E0123 23:59:18.389671 2822 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-2a642b76b3\" not found" Jan 23 23:59:18.391358 kubelet[2822]: I0123 23:59:18.391326 2822 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:59:18.392936 kubelet[2822]: I0123 23:59:18.392008 2822 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:59:18.394677 kubelet[2822]: E0123 23:59:18.394651 2822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-2a642b76b3?timeout=10s\": dial tcp 10.200.20.20:6443: connect: connection refused" interval="200ms" Jan 23 23:59:18.396514 kubelet[2822]: E0123 23:59:18.396483 2822 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 23:59:18.397319 kubelet[2822]: E0123 23:59:18.396785 2822 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:59:18.398219 kubelet[2822]: I0123 23:59:18.398192 2822 factory.go:223] Registration of the containerd container factory successfully Jan 23 23:59:18.398219 kubelet[2822]: I0123 23:59:18.398210 2822 factory.go:223] Registration of the systemd container factory successfully Jan 23 23:59:18.398302 kubelet[2822]: I0123 23:59:18.398272 2822 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:59:18.446334 kubelet[2822]: I0123 23:59:18.446294 2822 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 23:59:18.447564 kubelet[2822]: I0123 23:59:18.447543 2822 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 23:59:18.447564 kubelet[2822]: I0123 23:59:18.447565 2822 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 23:59:18.447667 kubelet[2822]: I0123 23:59:18.447584 2822 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 23:59:18.447667 kubelet[2822]: I0123 23:59:18.447591 2822 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 23:59:18.447667 kubelet[2822]: E0123 23:59:18.447632 2822 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:59:18.448272 kubelet[2822]: E0123 23:59:18.448240 2822 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 23:59:18.455671 kubelet[2822]: I0123 23:59:18.455386 2822 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:59:18.455671 kubelet[2822]: I0123 23:59:18.455465 2822 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:59:18.455671 kubelet[2822]: I0123 23:59:18.455483 2822 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:59:18.461128 kubelet[2822]: I0123 23:59:18.460927 2822 policy_none.go:49] "None policy: Start" Jan 23 23:59:18.461128 kubelet[2822]: I0123 23:59:18.460950 2822 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 23:59:18.461128 kubelet[2822]: I0123 23:59:18.460959 2822 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:59:18.468498 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 23 23:59:18.478824 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 23:59:18.481940 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 23:59:18.490733 kubelet[2822]: E0123 23:59:18.490710 2822 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-2a642b76b3\" not found" Jan 23 23:59:18.492161 kubelet[2822]: E0123 23:59:18.492138 2822 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 23:59:18.492330 kubelet[2822]: I0123 23:59:18.492314 2822 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:59:18.492360 kubelet[2822]: I0123 23:59:18.492331 2822 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:59:18.492622 kubelet[2822]: I0123 23:59:18.492603 2822 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:59:18.495686 kubelet[2822]: E0123 23:59:18.495631 2822 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 23:59:18.495686 kubelet[2822]: E0123 23:59:18.495666 2822 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-2a642b76b3\" not found" Jan 23 23:59:18.560524 systemd[1]: Created slice kubepods-burstable-poda6deadf8e7a50ab62875360d1a6da5cc.slice - libcontainer container kubepods-burstable-poda6deadf8e7a50ab62875360d1a6da5cc.slice. Jan 23 23:59:18.569905 kubelet[2822]: E0123 23:59:18.569625 2822 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2a642b76b3\" not found" node="ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:18.574314 systemd[1]: Created slice kubepods-burstable-pod4c02962fff545eba07b4f49fd7a98922.slice - libcontainer container kubepods-burstable-pod4c02962fff545eba07b4f49fd7a98922.slice. Jan 23 23:59:18.576410 kubelet[2822]: E0123 23:59:18.576316 2822 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2a642b76b3\" not found" node="ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:18.579678 systemd[1]: Created slice kubepods-burstable-pod21a8cebb42e22dd844cb133b32868c60.slice - libcontainer container kubepods-burstable-pod21a8cebb42e22dd844cb133b32868c60.slice. 
Jan 23 23:59:18.582446 kubelet[2822]: E0123 23:59:18.580949 2822 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2a642b76b3\" not found" node="ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:18.592980 kubelet[2822]: I0123 23:59:18.592832 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a6deadf8e7a50ab62875360d1a6da5cc-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-2a642b76b3\" (UID: \"a6deadf8e7a50ab62875360d1a6da5cc\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:18.592980 kubelet[2822]: I0123 23:59:18.592866 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c02962fff545eba07b4f49fd7a98922-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-2a642b76b3\" (UID: \"4c02962fff545eba07b4f49fd7a98922\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:18.592980 kubelet[2822]: I0123 23:59:18.592885 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c02962fff545eba07b4f49fd7a98922-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-2a642b76b3\" (UID: \"4c02962fff545eba07b4f49fd7a98922\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:18.592980 kubelet[2822]: I0123 23:59:18.592902 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c02962fff545eba07b4f49fd7a98922-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-2a642b76b3\" (UID: \"4c02962fff545eba07b4f49fd7a98922\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:18.592980 kubelet[2822]: I0123 23:59:18.592917 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c02962fff545eba07b4f49fd7a98922-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-2a642b76b3\" (UID: \"4c02962fff545eba07b4f49fd7a98922\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:18.593160 kubelet[2822]: I0123 23:59:18.592939 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/21a8cebb42e22dd844cb133b32868c60-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-2a642b76b3\" (UID: \"21a8cebb42e22dd844cb133b32868c60\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:18.593160 kubelet[2822]: I0123 23:59:18.592961 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a6deadf8e7a50ab62875360d1a6da5cc-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-2a642b76b3\" (UID: \"a6deadf8e7a50ab62875360d1a6da5cc\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:18.593160 kubelet[2822]: I0123 23:59:18.592975 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a6deadf8e7a50ab62875360d1a6da5cc-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4081.3.6-n-2a642b76b3\" (UID: \"a6deadf8e7a50ab62875360d1a6da5cc\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:18.593160 kubelet[2822]: I0123 23:59:18.592990 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4c02962fff545eba07b4f49fd7a98922-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-2a642b76b3\" (UID: \"4c02962fff545eba07b4f49fd7a98922\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:18.594204 kubelet[2822]: I0123 23:59:18.594164 2822 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:18.594530 kubelet[2822]: E0123 23:59:18.594502 2822 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.20:6443/api/v1/nodes\": dial tcp 10.200.20.20:6443: connect: connection refused" node="ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:18.595741 kubelet[2822]: E0123 23:59:18.595717 2822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-2a642b76b3?timeout=10s\": dial tcp 10.200.20.20:6443: connect: connection refused" interval="400ms" Jan 23 23:59:18.796920 kubelet[2822]: I0123 23:59:18.796874 2822 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:18.797238 kubelet[2822]: E0123 23:59:18.797173 2822 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.20:6443/api/v1/nodes\": dial tcp 10.200.20.20:6443: connect: connection refused" node="ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:18.871321 containerd[1736]: time="2026-01-23T23:59:18.871226209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-2a642b76b3,Uid:a6deadf8e7a50ab62875360d1a6da5cc,Namespace:kube-system,Attempt:0,}" Jan 23 23:59:18.878028 containerd[1736]: time="2026-01-23T23:59:18.877819132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-2a642b76b3,Uid:4c02962fff545eba07b4f49fd7a98922,Namespace:kube-system,Attempt:0,}" Jan 23 23:59:18.882586 containerd[1736]: time="2026-01-23T23:59:18.882316775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-2a642b76b3,Uid:21a8cebb42e22dd844cb133b32868c60,Namespace:kube-system,Attempt:0,}" Jan 23 23:59:18.996288 kubelet[2822]: E0123 23:59:18.996251 2822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-2a642b76b3?timeout=10s\": dial tcp 10.200.20.20:6443: connect: connection refused" interval="800ms" Jan 23 23:59:19.198887 kubelet[2822]: I0123 23:59:19.198497 2822 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:19.198887 kubelet[2822]: E0123 23:59:19.198789 2822 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.20:6443/api/v1/nodes\": dial tcp 10.200.20.20:6443: connect: connection refused" node="ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:19.278774 kubelet[2822]: E0123 23:59:19.278744 2822 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.200.20.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-2a642b76b3&limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 23:59:19.428876 kubelet[2822]: E0123 23:59:19.428831 2822 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 23:59:19.493417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3320615146.mount: Deactivated successfully. Jan 23 23:59:19.520430 containerd[1736]: time="2026-01-23T23:59:19.520205369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:59:19.522907 containerd[1736]: time="2026-01-23T23:59:19.522871930Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 23 23:59:19.525856 containerd[1736]: time="2026-01-23T23:59:19.525824292Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:59:19.529105 containerd[1736]: time="2026-01-23T23:59:19.528414453Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:59:19.530733 containerd[1736]: time="2026-01-23T23:59:19.530687014Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:59:19.533606 containerd[1736]: time="2026-01-23T23:59:19.533571936Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:59:19.535810 containerd[1736]: time="2026-01-23T23:59:19.535486337Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:59:19.544351 containerd[1736]: time="2026-01-23T23:59:19.544280582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:59:19.545292 containerd[1736]: time="2026-01-23T23:59:19.545077462Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 662.696127ms" Jan 23 23:59:19.547374 containerd[1736]: time="2026-01-23T23:59:19.547339784Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 669.459652ms" Jan 23 23:59:19.548017 containerd[1736]: time="2026-01-23T23:59:19.547988184Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 676.687975ms" Jan 23 23:59:19.566757 kubelet[2822]: E0123 23:59:19.566722 2822 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 23:59:19.797680 kubelet[2822]: E0123 23:59:19.797248 2822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-2a642b76b3?timeout=10s\": dial tcp 10.200.20.20:6443: connect: connection refused" interval="1.6s" Jan 23 23:59:19.857784 kubelet[2822]: E0123 23:59:19.857740 2822 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 23:59:20.001377 kubelet[2822]: I0123 23:59:20.001347 2822 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:20.001678 kubelet[2822]: E0123 23:59:20.001653 2822 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.20:6443/api/v1/nodes\": dial tcp 10.200.20.20:6443: connect: connection refused" node="ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:20.148874 containerd[1736]: time="2026-01-23T23:59:20.148698397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:59:20.149436 containerd[1736]: time="2026-01-23T23:59:20.148741077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:59:20.149436 containerd[1736]: time="2026-01-23T23:59:20.149356958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:59:20.153483 containerd[1736]: time="2026-01-23T23:59:20.150926519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:59:20.157426 containerd[1736]: time="2026-01-23T23:59:20.156492722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:59:20.157426 containerd[1736]: time="2026-01-23T23:59:20.156535522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:59:20.157426 containerd[1736]: time="2026-01-23T23:59:20.156559802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:59:20.157426 containerd[1736]: time="2026-01-23T23:59:20.156628482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:59:20.158238 containerd[1736]: time="2026-01-23T23:59:20.158168643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:59:20.159136 containerd[1736]: time="2026-01-23T23:59:20.158262843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:59:20.159136 containerd[1736]: time="2026-01-23T23:59:20.158278483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:59:20.159136 containerd[1736]: time="2026-01-23T23:59:20.158843403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:59:20.170568 systemd[1]: Started cri-containerd-aa92311bb400dd03f751a45ff3f8f27eee7619cd641b040732ed6630b0ef2af4.scope - libcontainer container aa92311bb400dd03f751a45ff3f8f27eee7619cd641b040732ed6630b0ef2af4. Jan 23 23:59:20.180877 systemd[1]: Started cri-containerd-38aed7db581329025b844ab9248fb0bb029f8d4cd74bcec51eef7402afb83ca1.scope - libcontainer container 38aed7db581329025b844ab9248fb0bb029f8d4cd74bcec51eef7402afb83ca1. Jan 23 23:59:20.194472 systemd[1]: Started cri-containerd-2ff876a6be18091d829605781dff3ba297d915a7441ee56a606ca6cba4bcbcff.scope - libcontainer container 2ff876a6be18091d829605781dff3ba297d915a7441ee56a606ca6cba4bcbcff. Jan 23 23:59:20.229091 containerd[1736]: time="2026-01-23T23:59:20.228874162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-2a642b76b3,Uid:21a8cebb42e22dd844cb133b32868c60,Namespace:kube-system,Attempt:0,} returns sandbox id \"38aed7db581329025b844ab9248fb0bb029f8d4cd74bcec51eef7402afb83ca1\"" Jan 23 23:59:20.233916 containerd[1736]: time="2026-01-23T23:59:20.233880765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-2a642b76b3,Uid:4c02962fff545eba07b4f49fd7a98922,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa92311bb400dd03f751a45ff3f8f27eee7619cd641b040732ed6630b0ef2af4\"" Jan 23 23:59:20.244091 containerd[1736]: time="2026-01-23T23:59:20.244007810Z" level=info msg="CreateContainer within sandbox \"38aed7db581329025b844ab9248fb0bb029f8d4cd74bcec51eef7402afb83ca1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 23:59:20.249433 containerd[1736]: time="2026-01-23T23:59:20.249297413Z" level=info msg="CreateContainer within sandbox \"aa92311bb400dd03f751a45ff3f8f27eee7619cd641b040732ed6630b0ef2af4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 23:59:20.250044 containerd[1736]: time="2026-01-23T23:59:20.249822533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-2a642b76b3,Uid:a6deadf8e7a50ab62875360d1a6da5cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ff876a6be18091d829605781dff3ba297d915a7441ee56a606ca6cba4bcbcff\"" Jan 23 23:59:20.258289 containerd[1736]: time="2026-01-23T23:59:20.258166538Z" level=info msg="CreateContainer within sandbox \"2ff876a6be18091d829605781dff3ba297d915a7441ee56a606ca6cba4bcbcff\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 23:59:20.315258 containerd[1736]: time="2026-01-23T23:59:20.315218250Z" level=info msg="CreateContainer within sandbox \"38aed7db581329025b844ab9248fb0bb029f8d4cd74bcec51eef7402afb83ca1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"826b25be8139373262e21dad247af98afe3a973751829e60960b9e450dbdcc5f\"" Jan 23 23:59:20.316156 containerd[1736]: time="2026-01-23T23:59:20.316128450Z" level=info msg="StartContainer for \"826b25be8139373262e21dad247af98afe3a973751829e60960b9e450dbdcc5f\"" Jan 23 23:59:20.320082 containerd[1736]: time="2026-01-23T23:59:20.320052532Z" level=info msg="CreateContainer within sandbox \"aa92311bb400dd03f751a45ff3f8f27eee7619cd641b040732ed6630b0ef2af4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6f2bed6252ded4f367d10cfc74258a2f5d3482549711141dc21535d0263cc697\"" Jan 23 23:59:20.320537 containerd[1736]: time="2026-01-23T23:59:20.320466613Z" level=info msg="StartContainer for \"6f2bed6252ded4f367d10cfc74258a2f5d3482549711141dc21535d0263cc697\"" Jan 23 23:59:20.325319 containerd[1736]: time="2026-01-23T23:59:20.325284655Z" level=info msg="CreateContainer within sandbox \"2ff876a6be18091d829605781dff3ba297d915a7441ee56a606ca6cba4bcbcff\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2b13d7b0f9012eeba83e1ce5c52344121c1c26816e1033c12299272c7877bb8f\"" Jan 23 23:59:20.326005 containerd[1736]: time="2026-01-23T23:59:20.325922136Z" level=info msg="StartContainer for \"2b13d7b0f9012eeba83e1ce5c52344121c1c26816e1033c12299272c7877bb8f\"" Jan 23 23:59:20.345555 systemd[1]: Started cri-containerd-826b25be8139373262e21dad247af98afe3a973751829e60960b9e450dbdcc5f.scope - libcontainer container 826b25be8139373262e21dad247af98afe3a973751829e60960b9e450dbdcc5f. Jan 23 23:59:20.356581 systemd[1]: Started cri-containerd-6f2bed6252ded4f367d10cfc74258a2f5d3482549711141dc21535d0263cc697.scope - libcontainer container 6f2bed6252ded4f367d10cfc74258a2f5d3482549711141dc21535d0263cc697. Jan 23 23:59:20.366553 systemd[1]: Started cri-containerd-2b13d7b0f9012eeba83e1ce5c52344121c1c26816e1033c12299272c7877bb8f.scope - libcontainer container 2b13d7b0f9012eeba83e1ce5c52344121c1c26816e1033c12299272c7877bb8f. 
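For each static pod the log shows the same three CRI calls in order: RunPodSandbox returns a sandbox id, CreateContainer returns a container id inside that sandbox, and StartContainer launches it. The sequence can be driven by hand with crictl; a sketch assuming crictl is installed and pointed at this containerd socket, with pod.json and container.json as hypothetical CRI config files you supply:

```python
import subprocess

def crictl(*args: str) -> str:
    """Thin wrapper over the crictl CLI; returns its stdout, stripped."""
    return subprocess.run(["crictl", *args], capture_output=True,
                          text=True, check=True).stdout.strip()

# Same three CRI calls the log shows per static pod.
pod_id = crictl("runp", "pod.json")                              # RunPodSandbox
ctr_id = crictl("create", pod_id, "container.json", "pod.json")  # CreateContainer
crictl("start", ctr_id)                                          # StartContainer
print(f"sandbox={pod_id} container={ctr_id}")
```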
Jan 23 23:59:20.406626 containerd[1736]: time="2026-01-23T23:59:20.405990380Z" level=info msg="StartContainer for \"826b25be8139373262e21dad247af98afe3a973751829e60960b9e450dbdcc5f\" returns successfully" Jan 23 23:59:20.415333 containerd[1736]: time="2026-01-23T23:59:20.415273585Z" level=info msg="StartContainer for \"6f2bed6252ded4f367d10cfc74258a2f5d3482549711141dc21535d0263cc697\" returns successfully" Jan 23 23:59:20.420448 containerd[1736]: time="2026-01-23T23:59:20.420408148Z" level=info msg="StartContainer for \"2b13d7b0f9012eeba83e1ce5c52344121c1c26816e1033c12299272c7877bb8f\" returns successfully" Jan 23 23:59:20.458409 kubelet[2822]: E0123 23:59:20.457983 2822 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2a642b76b3\" not found" node="ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:20.461179 kubelet[2822]: E0123 23:59:20.461162 2822 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2a642b76b3\" not found" node="ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:20.463738 kubelet[2822]: E0123 23:59:20.463603 2822 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2a642b76b3\" not found" node="ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:21.467174 kubelet[2822]: E0123 23:59:21.466980 2822 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2a642b76b3\" not found" node="ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:21.467174 kubelet[2822]: E0123 23:59:21.467064 2822 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2a642b76b3\" not found" node="ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:21.605418 kubelet[2822]: I0123 23:59:21.603731 2822 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:22.467772 kubelet[2822]: E0123 23:59:22.467238 2822 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2a642b76b3\" not found" node="ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:22.467772 kubelet[2822]: E0123 23:59:22.467607 2822 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2a642b76b3\" not found" node="ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:22.837456 kubelet[2822]: E0123 23:59:22.837417 2822 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-2a642b76b3\" not found" node="ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:22.876052 kubelet[2822]: I0123 23:59:22.874850 2822 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:22.876052 kubelet[2822]: E0123 23:59:22.874888 2822 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-2a642b76b3\": node \"ci-4081.3.6-n-2a642b76b3\" not found" Jan 23 23:59:22.891660 kubelet[2822]: I0123 23:59:22.891629 2822 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:22.921030 kubelet[2822]: E0123 23:59:22.920959 2822 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-2a642b76b3\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:22.921030 kubelet[2822]: I0123 23:59:22.920984 2822 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:22.923188 kubelet[2822]: E0123 23:59:22.923028 2822 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-2a642b76b3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:22.923188 kubelet[2822]: I0123 23:59:22.923052 2822 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:22.924519 kubelet[2822]: E0123 23:59:22.924500 2822 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-2a642b76b3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:23.383179 kubelet[2822]: I0123 23:59:23.382886 2822 apiserver.go:52] "Watching apiserver" Jan 23 23:59:23.392161 kubelet[2822]: I0123 23:59:23.392129 2822 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:59:23.786549 waagent[1917]: 2026-01-23T23:59:23.786385Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 23 23:59:23.795529 waagent[1917]: 2026-01-23T23:59:23.795482Z INFO ExtHandler Jan 23 23:59:23.795632 waagent[1917]: 2026-01-23T23:59:23.795600Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 09d02d69-6b83-40c7-9413-2b063dc5998b eTag: 10669237229975621018 source: Fabric] Jan 23 23:59:23.796059 waagent[1917]: 2026-01-23T23:59:23.796014Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 23 23:59:23.796732 waagent[1917]: 2026-01-23T23:59:23.796686Z INFO ExtHandler Jan 23 23:59:23.796807 waagent[1917]: 2026-01-23T23:59:23.796776Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 23 23:59:23.857003 waagent[1917]: 2026-01-23T23:59:23.856961Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 23:59:23.923144 waagent[1917]: 2026-01-23T23:59:23.923066Z INFO ExtHandler Downloaded certificate {'thumbprint': '0F7584AD90050436922E9E1CCE76AC317F443CF2', 'hasPrivateKey': True} Jan 23 23:59:23.923634 waagent[1917]: 2026-01-23T23:59:23.923592Z INFO ExtHandler Fetch goal state completed Jan 23 23:59:23.923987 waagent[1917]: 2026-01-23T23:59:23.923949Z INFO ExtHandler ExtHandler Jan 23 23:59:23.924051 waagent[1917]: 2026-01-23T23:59:23.924024Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: f868ac0a-2b2b-4e3b-ab36-3ed2a5697600 correlation ee4d4028-1063-4040-9988-23590b4ea227 created: 2026-01-23T23:59:16.119728Z] Jan 23 23:59:23.924341 waagent[1917]: 2026-01-23T23:59:23.924305Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 23 23:59:23.924882 waagent[1917]: 2026-01-23T23:59:23.924846Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 23 23:59:25.014984 systemd[1]: Reloading requested from client PID 3114 ('systemctl') (unit session-9.scope)... Jan 23 23:59:25.015279 systemd[1]: Reloading... Jan 23 23:59:25.090462 zram_generator::config[3154]: No configuration found. 
Jan 23 23:59:25.191299 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:59:25.279846 systemd[1]: Reloading finished in 264 ms. Jan 23 23:59:25.316266 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:59:25.327947 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 23:59:25.328121 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:59:25.328159 systemd[1]: kubelet.service: Consumed 1.244s CPU time, 131.7M memory peak, 0B memory swap peak. Jan 23 23:59:25.334602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:59:25.830628 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:59:25.837462 (kubelet)[3217]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:59:25.872271 kubelet[3217]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:59:25.872271 kubelet[3217]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:59:25.872271 kubelet[3217]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:59:25.872631 kubelet[3217]: I0123 23:59:25.872320 3217 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:59:25.880184 kubelet[3217]: I0123 23:59:25.880154 3217 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 23:59:25.880184 kubelet[3217]: I0123 23:59:25.880180 3217 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:59:25.880397 kubelet[3217]: I0123 23:59:25.880378 3217 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 23:59:25.881607 kubelet[3217]: I0123 23:59:25.881587 3217 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 23:59:25.883824 kubelet[3217]: I0123 23:59:25.883802 3217 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:59:25.887694 kubelet[3217]: E0123 23:59:25.887662 3217 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:59:25.887694 kubelet[3217]: I0123 23:59:25.887694 3217 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 23 23:59:25.890595 kubelet[3217]: I0123 23:59:25.890575 3217 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 23:59:25.890806 kubelet[3217]: I0123 23:59:25.890777 3217 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:59:25.890939 kubelet[3217]: I0123 23:59:25.890804 3217 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-2a642b76b3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 23:59:25.891013 kubelet[3217]: I0123 23:59:25.890943 3217 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 23:59:25.891013 kubelet[3217]: I0123 23:59:25.890951 3217 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 23:59:25.891013 kubelet[3217]: I0123 23:59:25.890990 3217 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:59:25.891554 kubelet[3217]: I0123 23:59:25.891434 3217 kubelet.go:480] "Attempting to sync node with API server" Jan 23 23:59:25.891554 kubelet[3217]: I0123 23:59:25.891493 3217 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:59:25.891554 kubelet[3217]: I0123 23:59:25.891526 3217 kubelet.go:386] "Adding apiserver pod source" Jan 23 23:59:25.891554 kubelet[3217]: I0123 23:59:25.891540 3217 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:59:25.896029 kubelet[3217]: I0123 23:59:25.894729 3217 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:59:25.896029 kubelet[3217]: I0123 23:59:25.895491 3217 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 23:59:25.899418 kubelet[3217]: I0123 23:59:25.898473 3217 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:59:25.899418 kubelet[3217]: I0123 23:59:25.898510 3217 server.go:1289] "Started kubelet" Jan 23 23:59:25.905430 kubelet[3217]: I0123 23:59:25.903600 3217 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:59:25.906787 kubelet[3217]: I0123 
23:59:25.906733 3217 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:59:25.907416 kubelet[3217]: I0123 23:59:25.907087 3217 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:59:25.909405 kubelet[3217]: I0123 23:59:25.908366 3217 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:59:25.910403 kubelet[3217]: I0123 23:59:25.909862 3217 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:59:25.912282 kubelet[3217]: I0123 23:59:25.911090 3217 server.go:317] "Adding debug handlers to kubelet server" Jan 23 23:59:25.915699 kubelet[3217]: I0123 23:59:25.911253 3217 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:59:25.915857 kubelet[3217]: I0123 23:59:25.911266 3217 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:59:25.915911 kubelet[3217]: E0123 23:59:25.911374 3217 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-2a642b76b3\" not found" Jan 23 23:59:25.916066 kubelet[3217]: I0123 23:59:25.916054 3217 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:59:25.918873 kubelet[3217]: I0123 23:59:25.918852 3217 factory.go:223] Registration of the systemd container factory successfully Jan 23 23:59:25.919038 kubelet[3217]: I0123 23:59:25.919019 3217 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:59:25.919218 kubelet[3217]: I0123 23:59:25.919182 3217 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 23:59:25.922003 kubelet[3217]: I0123 23:59:25.920662 3217 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 23:59:25.922003 kubelet[3217]: I0123 23:59:25.920687 3217 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 23:59:25.922003 kubelet[3217]: I0123 23:59:25.920703 3217 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 23:59:25.922003 kubelet[3217]: I0123 23:59:25.920709 3217 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 23:59:25.922003 kubelet[3217]: E0123 23:59:25.920747 3217 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:59:25.922003 kubelet[3217]: I0123 23:59:25.920844 3217 factory.go:223] Registration of the containerd container factory successfully Jan 23 23:59:25.927463 kubelet[3217]: E0123 23:59:25.926735 3217 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:59:26.003081 kubelet[3217]: I0123 23:59:26.002685 3217 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:59:26.003081 kubelet[3217]: I0123 23:59:26.002703 3217 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:59:26.003081 kubelet[3217]: I0123 23:59:26.002723 3217 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:59:26.003081 kubelet[3217]: I0123 23:59:26.002855 3217 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 23:59:26.003081 kubelet[3217]: I0123 23:59:26.002866 3217 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 23:59:26.003081 kubelet[3217]: I0123 23:59:26.002887 3217 policy_none.go:49] "None policy: Start" Jan 23 23:59:26.003081 kubelet[3217]: I0123 23:59:26.002896 3217 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 23:59:26.003081 kubelet[3217]: I0123 23:59:26.002904 3217 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:59:26.003081 kubelet[3217]: I0123 23:59:26.002985 3217 state_mem.go:75] "Updated machine memory state" Jan 23 23:59:26.007373 kubelet[3217]: E0123 23:59:26.007346 3217 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 23:59:26.008061 kubelet[3217]: I0123 23:59:26.007510 3217 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:59:26.008061 kubelet[3217]: I0123 23:59:26.007521 3217 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:59:26.008061 kubelet[3217]: I0123 23:59:26.007796 3217 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:59:26.010129 kubelet[3217]: E0123 23:59:26.010103 3217 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 23:59:26.022755 kubelet[3217]: I0123 23:59:26.022006 3217 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:26.022755 kubelet[3217]: I0123 23:59:26.022371 3217 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:26.023730 kubelet[3217]: I0123 23:59:26.023621 3217 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:26.035284 kubelet[3217]: I0123 23:59:26.034583 3217 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 23:59:26.039286 kubelet[3217]: I0123 23:59:26.039269 3217 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 23:59:26.040511 kubelet[3217]: I0123 23:59:26.040495 3217 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 23:59:26.110742 kubelet[3217]: I0123 23:59:26.110718 3217 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:26.117492 kubelet[3217]: I0123 23:59:26.117473 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c02962fff545eba07b4f49fd7a98922-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-2a642b76b3\" (UID: \"4c02962fff545eba07b4f49fd7a98922\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:26.117620 kubelet[3217]: I0123 23:59:26.117607 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c02962fff545eba07b4f49fd7a98922-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-2a642b76b3\" (UID: \"4c02962fff545eba07b4f49fd7a98922\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:26.117712 kubelet[3217]: I0123 23:59:26.117702 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a6deadf8e7a50ab62875360d1a6da5cc-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-2a642b76b3\" (UID: \"a6deadf8e7a50ab62875360d1a6da5cc\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:26.117807 kubelet[3217]: I0123 23:59:26.117794 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a6deadf8e7a50ab62875360d1a6da5cc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-2a642b76b3\" (UID: \"a6deadf8e7a50ab62875360d1a6da5cc\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:26.117955 kubelet[3217]: I0123 23:59:26.117877 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c02962fff545eba07b4f49fd7a98922-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-2a642b76b3\" (UID: 
\"4c02962fff545eba07b4f49fd7a98922\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:26.117955 kubelet[3217]: I0123 23:59:26.117897 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/21a8cebb42e22dd844cb133b32868c60-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-2a642b76b3\" (UID: \"21a8cebb42e22dd844cb133b32868c60\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:26.117955 kubelet[3217]: I0123 23:59:26.117917 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a6deadf8e7a50ab62875360d1a6da5cc-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-2a642b76b3\" (UID: \"a6deadf8e7a50ab62875360d1a6da5cc\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:26.118123 kubelet[3217]: I0123 23:59:26.118065 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4c02962fff545eba07b4f49fd7a98922-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-2a642b76b3\" (UID: \"4c02962fff545eba07b4f49fd7a98922\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:26.118123 kubelet[3217]: I0123 23:59:26.118084 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c02962fff545eba07b4f49fd7a98922-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-2a642b76b3\" (UID: \"4c02962fff545eba07b4f49fd7a98922\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:26.129492 kubelet[3217]: I0123 23:59:26.129449 3217 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:26.129713 kubelet[3217]: I0123 23:59:26.129608 3217 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:26.893767 kubelet[3217]: I0123 23:59:26.893736 3217 apiserver.go:52] "Watching apiserver" Jan 23 23:59:26.916677 kubelet[3217]: I0123 23:59:26.916631 3217 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:59:26.976512 kubelet[3217]: I0123 23:59:26.976069 3217 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:26.985837 kubelet[3217]: I0123 23:59:26.985813 3217 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 23:59:26.985936 kubelet[3217]: E0123 23:59:26.985857 3217 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-2a642b76b3\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2a642b76b3" Jan 23 23:59:27.011840 kubelet[3217]: I0123 23:59:27.011626 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2a642b76b3" podStartSLOduration=1.011613651 podStartE2EDuration="1.011613651s" podCreationTimestamp="2026-01-23 23:59:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:59:26.999847121 +0000 UTC m=+1.159265247" 
watchObservedRunningTime="2026-01-23 23:59:27.011613651 +0000 UTC m=+1.171031777" Jan 23 23:59:27.012481 kubelet[3217]: I0123 23:59:27.012350 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2a642b76b3" podStartSLOduration=1.012335612 podStartE2EDuration="1.012335612s" podCreationTimestamp="2026-01-23 23:59:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:59:27.011600011 +0000 UTC m=+1.171018097" watchObservedRunningTime="2026-01-23 23:59:27.012335612 +0000 UTC m=+1.171753738" Jan 23 23:59:27.043121 kubelet[3217]: I0123 23:59:27.043072 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2a642b76b3" podStartSLOduration=1.043059999 podStartE2EDuration="1.043059999s" podCreationTimestamp="2026-01-23 23:59:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:59:27.027813985 +0000 UTC m=+1.187232111" watchObservedRunningTime="2026-01-23 23:59:27.043059999 +0000 UTC m=+1.202478125" Jan 23 23:59:30.355958 kubelet[3217]: I0123 23:59:30.355815 3217 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 23:59:30.356311 containerd[1736]: time="2026-01-23T23:59:30.356120219Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 23:59:30.356571 kubelet[3217]: I0123 23:59:30.356372 3217 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 23:59:31.501171 systemd[1]: Created slice kubepods-besteffort-pod11a5175c_ba9f_4054_86e0_4ca67691ff06.slice - libcontainer container kubepods-besteffort-pod11a5175c_ba9f_4054_86e0_4ca67691ff06.slice. 
Jan 23 23:59:31.547743 kubelet[3217]: I0123 23:59:31.547232 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/11a5175c-ba9f-4054-86e0-4ca67691ff06-kube-proxy\") pod \"kube-proxy-qptg6\" (UID: \"11a5175c-ba9f-4054-86e0-4ca67691ff06\") " pod="kube-system/kube-proxy-qptg6" Jan 23 23:59:31.547743 kubelet[3217]: I0123 23:59:31.547271 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11a5175c-ba9f-4054-86e0-4ca67691ff06-lib-modules\") pod \"kube-proxy-qptg6\" (UID: \"11a5175c-ba9f-4054-86e0-4ca67691ff06\") " pod="kube-system/kube-proxy-qptg6" Jan 23 23:59:31.547743 kubelet[3217]: I0123 23:59:31.547289 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11a5175c-ba9f-4054-86e0-4ca67691ff06-xtables-lock\") pod \"kube-proxy-qptg6\" (UID: \"11a5175c-ba9f-4054-86e0-4ca67691ff06\") " pod="kube-system/kube-proxy-qptg6" Jan 23 23:59:31.547743 kubelet[3217]: I0123 23:59:31.547303 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8l5h\" (UniqueName: \"kubernetes.io/projected/11a5175c-ba9f-4054-86e0-4ca67691ff06-kube-api-access-x8l5h\") pod \"kube-proxy-qptg6\" (UID: \"11a5175c-ba9f-4054-86e0-4ca67691ff06\") " pod="kube-system/kube-proxy-qptg6" Jan 23 23:59:31.588072 systemd[1]: Created slice kubepods-besteffort-podca712828_b993_4a23_b4da_02e77c4fe37c.slice - libcontainer container kubepods-besteffort-podca712828_b993_4a23_b4da_02e77c4fe37c.slice. Jan 23 23:59:31.647992 kubelet[3217]: I0123 23:59:31.647957 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm59k\" (UniqueName: \"kubernetes.io/projected/ca712828-b993-4a23-b4da-02e77c4fe37c-kube-api-access-nm59k\") pod \"tigera-operator-7dcd859c48-4k5lt\" (UID: \"ca712828-b993-4a23-b4da-02e77c4fe37c\") " pod="tigera-operator/tigera-operator-7dcd859c48-4k5lt" Jan 23 23:59:31.648177 kubelet[3217]: I0123 23:59:31.648163 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ca712828-b993-4a23-b4da-02e77c4fe37c-var-lib-calico\") pod \"tigera-operator-7dcd859c48-4k5lt\" (UID: \"ca712828-b993-4a23-b4da-02e77c4fe37c\") " pod="tigera-operator/tigera-operator-7dcd859c48-4k5lt" Jan 23 23:59:31.810633 containerd[1736]: time="2026-01-23T23:59:31.810538035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qptg6,Uid:11a5175c-ba9f-4054-86e0-4ca67691ff06,Namespace:kube-system,Attempt:0,}" Jan 23 23:59:31.849148 containerd[1736]: time="2026-01-23T23:59:31.848824388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:59:31.849148 containerd[1736]: time="2026-01-23T23:59:31.848908628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:59:31.849148 containerd[1736]: time="2026-01-23T23:59:31.848934668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:59:31.849148 containerd[1736]: time="2026-01-23T23:59:31.849026268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:59:31.866560 systemd[1]: Started cri-containerd-19a98f3866a53caafb15386729e6bb08555e804946aa44e1c0b300ba7eb01956.scope - libcontainer container 19a98f3866a53caafb15386729e6bb08555e804946aa44e1c0b300ba7eb01956. Jan 23 23:59:31.889822 containerd[1736]: time="2026-01-23T23:59:31.889786464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qptg6,Uid:11a5175c-ba9f-4054-86e0-4ca67691ff06,Namespace:kube-system,Attempt:0,} returns sandbox id \"19a98f3866a53caafb15386729e6bb08555e804946aa44e1c0b300ba7eb01956\"" Jan 23 23:59:31.892266 containerd[1736]: time="2026-01-23T23:59:31.892026106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-4k5lt,Uid:ca712828-b993-4a23-b4da-02e77c4fe37c,Namespace:tigera-operator,Attempt:0,}" Jan 23 23:59:31.901523 containerd[1736]: time="2026-01-23T23:59:31.901491794Z" level=info msg="CreateContainer within sandbox \"19a98f3866a53caafb15386729e6bb08555e804946aa44e1c0b300ba7eb01956\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 23:59:31.939835 containerd[1736]: time="2026-01-23T23:59:31.935644543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:59:31.939835 containerd[1736]: time="2026-01-23T23:59:31.935686943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:59:31.939835 containerd[1736]: time="2026-01-23T23:59:31.935697143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:59:31.939835 containerd[1736]: time="2026-01-23T23:59:31.935765823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:59:31.948604 containerd[1736]: time="2026-01-23T23:59:31.948553274Z" level=info msg="CreateContainer within sandbox \"19a98f3866a53caafb15386729e6bb08555e804946aa44e1c0b300ba7eb01956\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5d6e61fc13969508dda475c74d37bc0529f27ed874935260eb8d60777c9b15cf\"" Jan 23 23:59:31.949634 containerd[1736]: time="2026-01-23T23:59:31.949526955Z" level=info msg="StartContainer for \"5d6e61fc13969508dda475c74d37bc0529f27ed874935260eb8d60777c9b15cf\"" Jan 23 23:59:31.958532 systemd[1]: Started cri-containerd-5164a537d6771cf206b6b3e739b59f5a8f7c108a564866bd48641000c2efe097.scope - libcontainer container 5164a537d6771cf206b6b3e739b59f5a8f7c108a564866bd48641000c2efe097. Jan 23 23:59:31.981120 systemd[1]: Started cri-containerd-5d6e61fc13969508dda475c74d37bc0529f27ed874935260eb8d60777c9b15cf.scope - libcontainer container 5d6e61fc13969508dda475c74d37bc0529f27ed874935260eb8d60777c9b15cf. 
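[Editor's note] The VerifyControllerAttachedVolume entries at 23:59:31 describe the kube-proxy pod's four volumes: a ConfigMap, two host paths, and a projected service-account token. A rough sketch of the pod spec behind them (mount paths and ConfigMap name are the usual kubeadm defaults, assumed rather than read from this host; the token volume name is from the log):

    volumes:
      - name: kube-proxy
        configMap:
          name: kube-proxy            # assumed default ConfigMap name
      - name: lib-modules
        hostPath:
          path: /lib/modules          # assumed default path
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock     # assumed default path
          type: FileOrCreate
      - name: kube-api-access-x8l5h
        projected:                    # simplified; real kube-api-access volumes
          sources:                    # also project the CA bundle and namespace
            - serviceAccountToken:
                path: token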
Jan 23 23:59:32.007464 containerd[1736]: time="2026-01-23T23:59:32.007185965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-4k5lt,Uid:ca712828-b993-4a23-b4da-02e77c4fe37c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5164a537d6771cf206b6b3e739b59f5a8f7c108a564866bd48641000c2efe097\"" Jan 23 23:59:32.009208 containerd[1736]: time="2026-01-23T23:59:32.009023287Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 23:59:32.018259 containerd[1736]: time="2026-01-23T23:59:32.018231655Z" level=info msg="StartContainer for \"5d6e61fc13969508dda475c74d37bc0529f27ed874935260eb8d60777c9b15cf\" returns successfully" Jan 23 23:59:33.011193 kubelet[3217]: I0123 23:59:33.010865 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qptg6" podStartSLOduration=2.010849632 podStartE2EDuration="2.010849632s" podCreationTimestamp="2026-01-23 23:59:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:59:33.010696792 +0000 UTC m=+7.170114918" watchObservedRunningTime="2026-01-23 23:59:33.010849632 +0000 UTC m=+7.170267758" Jan 23 23:59:33.615439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount451727387.mount: Deactivated successfully. Jan 23 23:59:34.015441 containerd[1736]: time="2026-01-23T23:59:34.014715699Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:34.017624 containerd[1736]: time="2026-01-23T23:59:34.017420781Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 23 23:59:34.020607 containerd[1736]: time="2026-01-23T23:59:34.020575264Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:34.024819 containerd[1736]: time="2026-01-23T23:59:34.024767427Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:34.025664 containerd[1736]: time="2026-01-23T23:59:34.025493788Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.016435981s" Jan 23 23:59:34.025664 containerd[1736]: time="2026-01-23T23:59:34.025523868Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 23 23:59:34.033332 containerd[1736]: time="2026-01-23T23:59:34.033301755Z" level=info msg="CreateContainer within sandbox \"5164a537d6771cf206b6b3e739b59f5a8f7c108a564866bd48641000c2efe097\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 23:59:34.060656 containerd[1736]: time="2026-01-23T23:59:34.060508855Z" level=info msg="CreateContainer within sandbox \"5164a537d6771cf206b6b3e739b59f5a8f7c108a564866bd48641000c2efe097\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"6e40fa3c12d82bc8d5fd94dd6a9424d2dcfc7227bfe37239b62231094b62d924\"" Jan 23 23:59:34.061032 containerd[1736]: time="2026-01-23T23:59:34.060848135Z" level=info msg="StartContainer for \"6e40fa3c12d82bc8d5fd94dd6a9424d2dcfc7227bfe37239b62231094b62d924\"" Jan 23 23:59:34.086599 systemd[1]: Started cri-containerd-6e40fa3c12d82bc8d5fd94dd6a9424d2dcfc7227bfe37239b62231094b62d924.scope - libcontainer container 6e40fa3c12d82bc8d5fd94dd6a9424d2dcfc7227bfe37239b62231094b62d924. Jan 23 23:59:34.110687 containerd[1736]: time="2026-01-23T23:59:34.110595480Z" level=info msg="StartContainer for \"6e40fa3c12d82bc8d5fd94dd6a9424d2dcfc7227bfe37239b62231094b62d924\" returns successfully" Jan 23 23:59:36.617506 kubelet[3217]: I0123 23:59:36.617443 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-4k5lt" podStartSLOduration=3.5991517589999997 podStartE2EDuration="5.617385182s" podCreationTimestamp="2026-01-23 23:59:31 +0000 UTC" firstStartedPulling="2026-01-23 23:59:32.008446606 +0000 UTC m=+6.167864732" lastFinishedPulling="2026-01-23 23:59:34.026680069 +0000 UTC m=+8.186098155" observedRunningTime="2026-01-23 23:59:35.017219242 +0000 UTC m=+9.176637368" watchObservedRunningTime="2026-01-23 23:59:36.617385182 +0000 UTC m=+10.776803308" Jan 23 23:59:39.994599 sudo[2239]: pam_unix(sudo:session): session closed for user root Jan 23 23:59:40.073816 sshd[2236]: pam_unix(sshd:session): session closed for user core Jan 23 23:59:40.076866 systemd[1]: sshd@6-10.200.20.20:22-10.200.16.10:42368.service: Deactivated successfully. Jan 23 23:59:40.078306 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 23:59:40.080038 systemd[1]: session-9.scope: Consumed 6.799s CPU time, 153.9M memory peak, 0B memory swap peak. Jan 23 23:59:40.082885 systemd-logind[1714]: Session 9 logged out. Waiting for processes to exit. Jan 23 23:59:40.083785 systemd-logind[1714]: Removed session 9. Jan 23 23:59:52.212625 systemd[1]: Created slice kubepods-besteffort-podb3b92bf5_d6c9_4b68_be60_635f76ccd48a.slice - libcontainer container kubepods-besteffort-podb3b92bf5_d6c9_4b68_be60_635f76ccd48a.slice. 
Jan 23 23:59:52.271283 kubelet[3217]: I0123 23:59:52.271150 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3b92bf5-d6c9-4b68-be60-635f76ccd48a-tigera-ca-bundle\") pod \"calico-typha-c5c564976-57psl\" (UID: \"b3b92bf5-d6c9-4b68-be60-635f76ccd48a\") " pod="calico-system/calico-typha-c5c564976-57psl" Jan 23 23:59:52.271283 kubelet[3217]: I0123 23:59:52.271193 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b3b92bf5-d6c9-4b68-be60-635f76ccd48a-typha-certs\") pod \"calico-typha-c5c564976-57psl\" (UID: \"b3b92bf5-d6c9-4b68-be60-635f76ccd48a\") " pod="calico-system/calico-typha-c5c564976-57psl" Jan 23 23:59:52.271283 kubelet[3217]: I0123 23:59:52.271217 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj9hd\" (UniqueName: \"kubernetes.io/projected/b3b92bf5-d6c9-4b68-be60-635f76ccd48a-kube-api-access-hj9hd\") pod \"calico-typha-c5c564976-57psl\" (UID: \"b3b92bf5-d6c9-4b68-be60-635f76ccd48a\") " pod="calico-system/calico-typha-c5c564976-57psl" Jan 23 23:59:52.456536 systemd[1]: Created slice kubepods-besteffort-podd67edd48_3871_4239_879d_04e2efb4e496.slice - libcontainer container kubepods-besteffort-podd67edd48_3871_4239_879d_04e2efb4e496.slice. Jan 23 23:59:52.473340 kubelet[3217]: I0123 23:59:52.472934 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d67edd48-3871-4239-879d-04e2efb4e496-tigera-ca-bundle\") pod \"calico-node-l82sn\" (UID: \"d67edd48-3871-4239-879d-04e2efb4e496\") " pod="calico-system/calico-node-l82sn" Jan 23 23:59:52.473340 kubelet[3217]: I0123 23:59:52.473044 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fz2h\" (UniqueName: \"kubernetes.io/projected/d67edd48-3871-4239-879d-04e2efb4e496-kube-api-access-8fz2h\") pod \"calico-node-l82sn\" (UID: \"d67edd48-3871-4239-879d-04e2efb4e496\") " pod="calico-system/calico-node-l82sn" Jan 23 23:59:52.473340 kubelet[3217]: I0123 23:59:52.473081 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d67edd48-3871-4239-879d-04e2efb4e496-lib-modules\") pod \"calico-node-l82sn\" (UID: \"d67edd48-3871-4239-879d-04e2efb4e496\") " pod="calico-system/calico-node-l82sn" Jan 23 23:59:52.473340 kubelet[3217]: I0123 23:59:52.473099 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d67edd48-3871-4239-879d-04e2efb4e496-policysync\") pod \"calico-node-l82sn\" (UID: \"d67edd48-3871-4239-879d-04e2efb4e496\") " pod="calico-system/calico-node-l82sn" Jan 23 23:59:52.473340 kubelet[3217]: I0123 23:59:52.473119 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d67edd48-3871-4239-879d-04e2efb4e496-cni-bin-dir\") pod \"calico-node-l82sn\" (UID: \"d67edd48-3871-4239-879d-04e2efb4e496\") " pod="calico-system/calico-node-l82sn" Jan 23 23:59:52.473557 kubelet[3217]: I0123 23:59:52.473134 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d67edd48-3871-4239-879d-04e2efb4e496-flexvol-driver-host\") pod \"calico-node-l82sn\" (UID: \"d67edd48-3871-4239-879d-04e2efb4e496\") " pod="calico-system/calico-node-l82sn" Jan 23 23:59:52.473557 kubelet[3217]: I0123 23:59:52.473150 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d67edd48-3871-4239-879d-04e2efb4e496-var-lib-calico\") pod \"calico-node-l82sn\" (UID: \"d67edd48-3871-4239-879d-04e2efb4e496\") " pod="calico-system/calico-node-l82sn" Jan 23 23:59:52.473557 kubelet[3217]: I0123 23:59:52.473164 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d67edd48-3871-4239-879d-04e2efb4e496-var-run-calico\") pod \"calico-node-l82sn\" (UID: \"d67edd48-3871-4239-879d-04e2efb4e496\") " pod="calico-system/calico-node-l82sn" Jan 23 23:59:52.473557 kubelet[3217]: I0123 23:59:52.473181 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d67edd48-3871-4239-879d-04e2efb4e496-cni-net-dir\") pod \"calico-node-l82sn\" (UID: \"d67edd48-3871-4239-879d-04e2efb4e496\") " pod="calico-system/calico-node-l82sn" Jan 23 23:59:52.473557 kubelet[3217]: I0123 23:59:52.473196 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d67edd48-3871-4239-879d-04e2efb4e496-node-certs\") pod \"calico-node-l82sn\" (UID: \"d67edd48-3871-4239-879d-04e2efb4e496\") " pod="calico-system/calico-node-l82sn" Jan 23 23:59:52.473665 kubelet[3217]: I0123 23:59:52.473225 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d67edd48-3871-4239-879d-04e2efb4e496-xtables-lock\") pod \"calico-node-l82sn\" (UID: \"d67edd48-3871-4239-879d-04e2efb4e496\") " pod="calico-system/calico-node-l82sn" Jan 23 23:59:52.473665 kubelet[3217]: I0123 23:59:52.473241 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d67edd48-3871-4239-879d-04e2efb4e496-cni-log-dir\") pod \"calico-node-l82sn\" (UID: \"d67edd48-3871-4239-879d-04e2efb4e496\") " pod="calico-system/calico-node-l82sn" Jan 23 23:59:52.517642 containerd[1736]: time="2026-01-23T23:59:52.517600454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c5c564976-57psl,Uid:b3b92bf5-d6c9-4b68-be60-635f76ccd48a,Namespace:calico-system,Attempt:0,}" Jan 23 23:59:52.555197 containerd[1736]: time="2026-01-23T23:59:52.555118514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:59:52.555497 containerd[1736]: time="2026-01-23T23:59:52.555343075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:59:52.555497 containerd[1736]: time="2026-01-23T23:59:52.555381955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:59:52.555644 containerd[1736]: time="2026-01-23T23:59:52.555613115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:59:52.576291 kubelet[3217]: E0123 23:59:52.576194 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:59:52.576291 kubelet[3217]: W0123 23:59:52.576223 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:59:52.576291 kubelet[3217]: E0123 23:59:52.576252 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:59:52.577564 systemd[1]: Started cri-containerd-7a15dbd6b22e54aedb3ade4dbedc4834797f65bd0e15596afebe24b5b881de5a.scope - libcontainer container 7a15dbd6b22e54aedb3ade4dbedc4834797f65bd0e15596afebe24b5b881de5a. Jan 23 23:59:52.580954 kubelet[3217]: E0123 23:59:52.580938 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:59:52.581084 kubelet[3217]: W0123 23:59:52.581071 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:59:52.581180 kubelet[3217]: E0123 23:59:52.581154 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:59:52.590805 kubelet[3217]: E0123 23:59:52.590767 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:59:52.590805 kubelet[3217]: W0123 23:59:52.590799 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:59:52.591042 kubelet[3217]: E0123 23:59:52.590816 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:59:52.620603 containerd[1736]: time="2026-01-23T23:59:52.620561269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c5c564976-57psl,Uid:b3b92bf5-d6c9-4b68-be60-635f76ccd48a,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a15dbd6b22e54aedb3ade4dbedc4834797f65bd0e15596afebe24b5b881de5a\"" Jan 23 23:59:52.622806 containerd[1736]: time="2026-01-23T23:59:52.622781350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 23:59:52.668946 kubelet[3217]: E0123 23:59:52.668662 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mmrrm" podUID="1900a277-348f-4eb2-aa7c-7d2406a64ec8" Jan 23 23:59:52.758918 kubelet[3217]: E0123 23:59:52.757959 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:59:52.758918 kubelet[3217]: W0123 23:59:52.757983 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:59:52.758918 kubelet[3217]: E0123 23:59:52.758002 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:59:52.759283 kubelet[3217]: E0123 23:59:52.759173 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:59:52.759283 kubelet[3217]: W0123 23:59:52.759193 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:59:52.759283 kubelet[3217]: E0123 23:59:52.759237 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:59:52.759642 kubelet[3217]: E0123 23:59:52.759601 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:59:52.759642 kubelet[3217]: W0123 23:59:52.759613 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:59:52.759642 kubelet[3217]: E0123 23:59:52.759624 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:59:52.760016 kubelet[3217]: E0123 23:59:52.759934 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:59:52.760016 kubelet[3217]: W0123 23:59:52.759945 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:59:52.760016 kubelet[3217]: E0123 23:59:52.759957 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:59:52.760264 kubelet[3217]: E0123 23:59:52.760253 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:59:52.760445 kubelet[3217]: W0123 23:59:52.760290 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:59:52.760445 kubelet[3217]: E0123 23:59:52.760305 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:59:52.760560 kubelet[3217]: E0123 23:59:52.760550 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:59:52.760674 kubelet[3217]: W0123 23:59:52.760594 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:59:52.760674 kubelet[3217]: E0123 23:59:52.760607 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:59:52.760966 kubelet[3217]: E0123 23:59:52.760900 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:59:52.760966 kubelet[3217]: W0123 23:59:52.760913 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:59:52.760966 kubelet[3217]: E0123 23:59:52.760922 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:59:52.761321 kubelet[3217]: E0123 23:59:52.761209 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:59:52.761321 kubelet[3217]: W0123 23:59:52.761225 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:59:52.761321 kubelet[3217]: E0123 23:59:52.761236 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:59:52.761529 kubelet[3217]: E0123 23:59:52.761491 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:59:52.761529 kubelet[3217]: W0123 23:59:52.761502 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:59:52.761529 kubelet[3217]: E0123 23:59:52.761512 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:59:52.761879 kubelet[3217]: E0123 23:59:52.761779 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:59:52.761879 kubelet[3217]: W0123 23:59:52.761790 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:59:52.761879 kubelet[3217]: E0123 23:59:52.761800 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:59:52.762142 kubelet[3217]: E0123 23:59:52.762034 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:59:52.762142 kubelet[3217]: W0123 23:59:52.762044 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:59:52.762142 kubelet[3217]: E0123 23:59:52.762054 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:59:52.762710 kubelet[3217]: E0123 23:59:52.762282 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:59:52.762710 kubelet[3217]: W0123 23:59:52.762292 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:59:52.762710 kubelet[3217]: E0123 23:59:52.762302 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:59:52.762710 kubelet[3217]: E0123 23:59:52.762570 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:59:52.762710 kubelet[3217]: W0123 23:59:52.762581 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:59:52.762710 kubelet[3217]: E0123 23:59:52.762594 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 23 23:59:52.762914 containerd[1736]: time="2026-01-23T23:59:52.762858345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l82sn,Uid:d67edd48-3871-4239-879d-04e2efb4e496,Namespace:calico-system,Attempt:0,}"
Jan 23 23:59:52.776105 kubelet[3217]: I0123 23:59:52.776030 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1900a277-348f-4eb2-aa7c-7d2406a64ec8-registration-dir\") pod \"csi-node-driver-mmrrm\" (UID: \"1900a277-348f-4eb2-aa7c-7d2406a64ec8\") " pod="calico-system/csi-node-driver-mmrrm"
Jan 23 23:59:52.776493 kubelet[3217]: I0123 23:59:52.776408 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1900a277-348f-4eb2-aa7c-7d2406a64ec8-kubelet-dir\") pod \"csi-node-driver-mmrrm\" (UID: \"1900a277-348f-4eb2-aa7c-7d2406a64ec8\") " pod="calico-system/csi-node-driver-mmrrm"
Jan 23 23:59:52.776925 kubelet[3217]: I0123 23:59:52.776829 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1900a277-348f-4eb2-aa7c-7d2406a64ec8-socket-dir\") pod \"csi-node-driver-mmrrm\" (UID: \"1900a277-348f-4eb2-aa7c-7d2406a64ec8\") " pod="calico-system/csi-node-driver-mmrrm"
Jan 23 23:59:52.777190 kubelet[3217]: I0123 23:59:52.777169 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1900a277-348f-4eb2-aa7c-7d2406a64ec8-varrun\") pod \"csi-node-driver-mmrrm\" (UID: \"1900a277-348f-4eb2-aa7c-7d2406a64ec8\") " pod="calico-system/csi-node-driver-mmrrm"
Jan 23 23:59:52.786800 kubelet[3217]: I0123 23:59:52.779861 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrdkg\" (UniqueName: \"kubernetes.io/projected/1900a277-348f-4eb2-aa7c-7d2406a64ec8-kube-api-access-xrdkg\") pod \"csi-node-driver-mmrrm\" (UID: \"1900a277-348f-4eb2-aa7c-7d2406a64ec8\") " pod="calico-system/csi-node-driver-mmrrm"
Jan 23 23:59:52.816133 containerd[1736]: time="2026-01-23T23:59:52.815943373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:59:52.816133 containerd[1736]: time="2026-01-23T23:59:52.815994013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:59:52.816133 containerd[1736]: time="2026-01-23T23:59:52.816017893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:59:52.816133 containerd[1736]: time="2026-01-23T23:59:52.816085733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:59:52.837557 systemd[1]: Started cri-containerd-f160b6bdda78b7d7f4ea1c6703c998016ef159f6601b55540cfeaded77e19185.scope - libcontainer container f160b6bdda78b7d7f4ea1c6703c998016ef159f6601b55540cfeaded77e19185.
Jan 23 23:59:52.859524 containerd[1736]: time="2026-01-23T23:59:52.859489037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l82sn,Uid:d67edd48-3871-4239-879d-04e2efb4e496,Namespace:calico-system,Attempt:0,} returns sandbox id \"f160b6bdda78b7d7f4ea1c6703c998016ef159f6601b55540cfeaded77e19185\""
Jan 23 23:59:52.881520 kubelet[3217]: E0123 23:59:52.881420 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:59:52.881520 kubelet[3217]: W0123 23:59:52.881441 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:59:52.881520 kubelet[3217]: E0123 23:59:52.881458 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:59:53.890333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2554203495.mount: Deactivated successfully.
Jan 23 23:59:54.305429 containerd[1736]: time="2026-01-23T23:59:54.305313207Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:59:54.309355 containerd[1736]: time="2026-01-23T23:59:54.309325369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Jan 23 23:59:54.311931 containerd[1736]: time="2026-01-23T23:59:54.311904930Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:59:54.316376 containerd[1736]: time="2026-01-23T23:59:54.316177853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:59:54.316812 containerd[1736]: time="2026-01-23T23:59:54.316785293Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.693954983s"
Jan 23 23:59:54.316855 containerd[1736]: time="2026-01-23T23:59:54.316812853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Jan 23 23:59:54.318632 containerd[1736]: time="2026-01-23T23:59:54.318563214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 23 23:59:54.333530 containerd[1736]: time="2026-01-23T23:59:54.333341262Z" level=info msg="CreateContainer within sandbox \"7a15dbd6b22e54aedb3ade4dbedc4834797f65bd0e15596afebe24b5b881de5a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 23 23:59:54.368459 containerd[1736]: time="2026-01-23T23:59:54.368381321Z" level=info msg="CreateContainer within sandbox \"7a15dbd6b22e54aedb3ade4dbedc4834797f65bd0e15596afebe24b5b881de5a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"69faf0a917dceadd54b630fda3ef1a7d7230a6127b3830264242164a8a4c8850\""
Jan 23 23:59:54.369149 containerd[1736]: time="2026-01-23T23:59:54.369128001Z" level=info msg="StartContainer for \"69faf0a917dceadd54b630fda3ef1a7d7230a6127b3830264242164a8a4c8850\""
Jan 23 23:59:54.411549 systemd[1]: Started cri-containerd-69faf0a917dceadd54b630fda3ef1a7d7230a6127b3830264242164a8a4c8850.scope - libcontainer container 69faf0a917dceadd54b630fda3ef1a7d7230a6127b3830264242164a8a4c8850.
Jan 23 23:59:54.441785 containerd[1736]: time="2026-01-23T23:59:54.441640840Z" level=info msg="StartContainer for \"69faf0a917dceadd54b630fda3ef1a7d7230a6127b3830264242164a8a4c8850\" returns successfully"
Jan 23 23:59:54.921965 kubelet[3217]: E0123 23:59:54.921926 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mmrrm" podUID="1900a277-348f-4eb2-aa7c-7d2406a64ec8"
Jan 23 23:59:55.077221 kubelet[3217]: E0123 23:59:55.077100 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:59:55.077221 kubelet[3217]: W0123 23:59:55.077122 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:59:55.077221 kubelet[3217]: E0123 23:59:55.077142 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:59:55.546260 containerd[1736]: time="2026-01-23T23:59:55.546125908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:59:55.549664 containerd[1736]: time="2026-01-23T23:59:55.549631270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741"
Jan 23 23:59:55.553761 containerd[1736]: time="2026-01-23T23:59:55.553730832Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:59:55.557431 containerd[1736]: time="2026-01-23T23:59:55.557327154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:59:55.558147 containerd[1736]: time="2026-01-23T23:59:55.557841674Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.23924866s"
Jan 23 23:59:55.558147 containerd[1736]: time="2026-01-23T23:59:55.557875834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\""
Jan 23 23:59:55.564968 containerd[1736]: time="2026-01-23T23:59:55.564937518Z" level=info msg="CreateContainer within sandbox \"f160b6bdda78b7d7f4ea1c6703c998016ef159f6601b55540cfeaded77e19185\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 23 23:59:55.608296 containerd[1736]: time="2026-01-23T23:59:55.608128101Z" level=info msg="CreateContainer within sandbox \"f160b6bdda78b7d7f4ea1c6703c998016ef159f6601b55540cfeaded77e19185\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1b9ace62d15833d5ed8aed81be6aa9330e8df69a7529fef6b99e60a40ecdf82c\""
Jan 23 23:59:55.609870 containerd[1736]: time="2026-01-23T23:59:55.609837982Z" level=info msg="StartContainer for \"1b9ace62d15833d5ed8aed81be6aa9330e8df69a7529fef6b99e60a40ecdf82c\""
Jan 23 23:59:55.643545 systemd[1]: Started cri-containerd-1b9ace62d15833d5ed8aed81be6aa9330e8df69a7529fef6b99e60a40ecdf82c.scope - libcontainer container 1b9ace62d15833d5ed8aed81be6aa9330e8df69a7529fef6b99e60a40ecdf82c.
Jan 23 23:59:55.670788 containerd[1736]: time="2026-01-23T23:59:55.670610574Z" level=info msg="StartContainer for \"1b9ace62d15833d5ed8aed81be6aa9330e8df69a7529fef6b99e60a40ecdf82c\" returns successfully"
Jan 23 23:59:55.678978 systemd[1]: cri-containerd-1b9ace62d15833d5ed8aed81be6aa9330e8df69a7529fef6b99e60a40ecdf82c.scope: Deactivated successfully.
Jan 23 23:59:55.701917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b9ace62d15833d5ed8aed81be6aa9330e8df69a7529fef6b99e60a40ecdf82c-rootfs.mount: Deactivated successfully.
Jan 23 23:59:56.114916 kubelet[3217]: I0123 23:59:56.038279 3217 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 23:59:56.114916 kubelet[3217]: I0123 23:59:56.058758 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-c5c564976-57psl" podStartSLOduration=2.363186597 podStartE2EDuration="4.058741501s" podCreationTimestamp="2026-01-23 23:59:52 +0000 UTC" firstStartedPulling="2026-01-23 23:59:52.62234531 +0000 UTC m=+26.781763436" lastFinishedPulling="2026-01-23 23:59:54.317900214 +0000 UTC m=+28.477318340" observedRunningTime="2026-01-23 23:59:55.055745927 +0000 UTC m=+29.215164053" watchObservedRunningTime="2026-01-23 23:59:56.058741501 +0000 UTC m=+30.218159627"
Jan 23 23:59:56.732130 containerd[1736]: time="2026-01-23T23:59:56.731931780Z" level=info msg="shim disconnected" id=1b9ace62d15833d5ed8aed81be6aa9330e8df69a7529fef6b99e60a40ecdf82c namespace=k8s.io
Jan 23 23:59:56.732130 containerd[1736]: time="2026-01-23T23:59:56.731984380Z" level=warning msg="cleaning up after shim disconnected" id=1b9ace62d15833d5ed8aed81be6aa9330e8df69a7529fef6b99e60a40ecdf82c namespace=k8s.io
Jan 23 23:59:56.732130 containerd[1736]: time="2026-01-23T23:59:56.731994500Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:59:56.921885 kubelet[3217]: E0123 23:59:56.921632 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mmrrm" podUID="1900a277-348f-4eb2-aa7c-7d2406a64ec8"
Jan 23 23:59:57.043124 containerd[1736]: time="2026-01-23T23:59:57.042982146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 23 23:59:58.921809 kubelet[3217]: E0123 23:59:58.921771 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mmrrm" podUID="1900a277-348f-4eb2-aa7c-7d2406a64ec8"
Jan 23 23:59:59.295259 containerd[1736]: time="2026-01-23T23:59:59.295145264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:59:59.297819 containerd[1736]: time="2026-01-23T23:59:59.297673105Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816"
Jan 23 23:59:59.300520 containerd[1736]: time="2026-01-23T23:59:59.300255426Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:59:59.304921 containerd[1736]: time="2026-01-23T23:59:59.304895789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:59:59.305747 containerd[1736]: time="2026-01-23T23:59:59.305711469Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.262680683s"
Jan 23 23:59:59.305905 containerd[1736]: time="2026-01-23T23:59:59.305877789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\""
Jan 23 23:59:59.321143 containerd[1736]: time="2026-01-23T23:59:59.321104278Z" level=info msg="CreateContainer within sandbox \"f160b6bdda78b7d7f4ea1c6703c998016ef159f6601b55540cfeaded77e19185\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 23 23:59:59.360285 containerd[1736]: time="2026-01-23T23:59:59.359761578Z" level=info msg="CreateContainer within sandbox \"f160b6bdda78b7d7f4ea1c6703c998016ef159f6601b55540cfeaded77e19185\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"526de214653b427bec03e8b627b560cb1871c04d35f508d1b74eca1349e971be\""
Jan 23 23:59:59.362038 containerd[1736]: time="2026-01-23T23:59:59.360595979Z" level=info msg="StartContainer for \"526de214653b427bec03e8b627b560cb1871c04d35f508d1b74eca1349e971be\""
Jan 23 23:59:59.385751 systemd[1]: run-containerd-runc-k8s.io-526de214653b427bec03e8b627b560cb1871c04d35f508d1b74eca1349e971be-runc.trZAIL.mount: Deactivated successfully.
Jan 23 23:59:59.399557 systemd[1]: Started cri-containerd-526de214653b427bec03e8b627b560cb1871c04d35f508d1b74eca1349e971be.scope - libcontainer container 526de214653b427bec03e8b627b560cb1871c04d35f508d1b74eca1349e971be.
Jan 23 23:59:59.428304 containerd[1736]: time="2026-01-23T23:59:59.428085894Z" level=info msg="StartContainer for \"526de214653b427bec03e8b627b560cb1871c04d35f508d1b74eca1349e971be\" returns successfully"
Jan 23 23:59:59.524133 kubelet[3217]: I0123 23:59:59.523851 3217 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 24 00:00:00.921029 kubelet[3217]: E0124 00:00:00.920966 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mmrrm" podUID="1900a277-348f-4eb2-aa7c-7d2406a64ec8"
Jan 24 00:00:00.947966 containerd[1736]: time="2026-01-24T00:00:00.947789461Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 24 00:00:00.950755 systemd[1]: cri-containerd-526de214653b427bec03e8b627b560cb1871c04d35f508d1b74eca1349e971be.scope: Deactivated successfully.
Jan 24 00:00:00.960216 systemd[1]: Started logrotate.service - Rotate and Compress System Logs.
Jan 24 00:00:00.970070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-526de214653b427bec03e8b627b560cb1871c04d35f508d1b74eca1349e971be-rootfs.mount: Deactivated successfully.
Jan 24 00:00:00.971445 systemd[1]: logrotate.service: Deactivated successfully.
Jan 24 00:00:00.984420 kubelet[3217]: I0124 00:00:00.983732 3217 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 24 00:00:02.047508 systemd[1]: Created slice kubepods-besteffort-pod92b832a2_d5a5_4983_bbcd_e800e9df0595.slice - libcontainer container kubepods-besteffort-pod92b832a2_d5a5_4983_bbcd_e800e9df0595.slice.
Jan 24 00:00:02.050086 containerd[1736]: time="2026-01-24T00:00:02.049599166Z" level=info msg="shim disconnected" id=526de214653b427bec03e8b627b560cb1871c04d35f508d1b74eca1349e971be namespace=k8s.io
Jan 24 00:00:02.050386 containerd[1736]: time="2026-01-24T00:00:02.050090407Z" level=warning msg="cleaning up after shim disconnected" id=526de214653b427bec03e8b627b560cb1871c04d35f508d1b74eca1349e971be namespace=k8s.io
Jan 24 00:00:02.050386 containerd[1736]: time="2026-01-24T00:00:02.050103207Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:00:02.073998 systemd[1]: Created slice kubepods-burstable-pod8cabee2a_2179_450d_babf_843d70721def.slice - libcontainer container kubepods-burstable-pod8cabee2a_2179_450d_babf_843d70721def.slice.
Jan 24 00:00:02.085568 systemd[1]: Created slice kubepods-burstable-pod2024f333_ad36_464d_817d_816658048dd9.slice - libcontainer container kubepods-burstable-pod2024f333_ad36_464d_817d_816658048dd9.slice.
Jan 24 00:00:02.090680 systemd[1]: Created slice kubepods-besteffort-pod9390c20d_0be8_4dfe_954e_634e25852cb2.slice - libcontainer container kubepods-besteffort-pod9390c20d_0be8_4dfe_954e_634e25852cb2.slice.
Jan 24 00:00:02.103199 systemd[1]: Created slice kubepods-besteffort-pod237e41c6_ec2d_4a8d_bb7d_ca837318e8f7.slice - libcontainer container kubepods-besteffort-pod237e41c6_ec2d_4a8d_bb7d_ca837318e8f7.slice.
Jan 24 00:00:02.112584 systemd[1]: Created slice kubepods-besteffort-pod37f635e6_9d73_41e3_ac25_e030d9b2101d.slice - libcontainer container kubepods-besteffort-pod37f635e6_9d73_41e3_ac25_e030d9b2101d.slice.
Jan 24 00:00:02.118129 systemd[1]: Created slice kubepods-besteffort-poda37d52d3_c228_4df6_b0fc_c5d23ff527d2.slice - libcontainer container kubepods-besteffort-poda37d52d3_c228_4df6_b0fc_c5d23ff527d2.slice.
Jan 24 00:00:02.148644 kubelet[3217]: I0124 00:00:02.148608 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/237e41c6-ec2d-4a8d-bb7d-ca837318e8f7-calico-apiserver-certs\") pod \"calico-apiserver-5dd9d484d4-bprcl\" (UID: \"237e41c6-ec2d-4a8d-bb7d-ca837318e8f7\") " pod="calico-apiserver/calico-apiserver-5dd9d484d4-bprcl"
Jan 24 00:00:02.149026 kubelet[3217]: I0124 00:00:02.148653 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/92b832a2-d5a5-4983-bbcd-e800e9df0595-whisker-backend-key-pair\") pod \"whisker-77487d64ff-bmshx\" (UID: \"92b832a2-d5a5-4983-bbcd-e800e9df0595\") " pod="calico-system/whisker-77487d64ff-bmshx"
Jan 24 00:00:02.149026 kubelet[3217]: I0124 00:00:02.148672 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cabee2a-2179-450d-babf-843d70721def-config-volume\") pod \"coredns-674b8bbfcf-gjxrx\" (UID: \"8cabee2a-2179-450d-babf-843d70721def\") " pod="kube-system/coredns-674b8bbfcf-gjxrx"
Jan 24 00:00:02.149026 kubelet[3217]: I0124 00:00:02.148692 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg8ck\" (UniqueName: \"kubernetes.io/projected/2024f333-ad36-464d-817d-816658048dd9-kube-api-access-hg8ck\") pod \"coredns-674b8bbfcf-bjd2b\" (UID: \"2024f333-ad36-464d-817d-816658048dd9\") " pod="kube-system/coredns-674b8bbfcf-bjd2b"
Jan 24 00:00:02.149026 kubelet[3217]: I0124 00:00:02.148720 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8289b\" (UniqueName: \"kubernetes.io/projected/8cabee2a-2179-450d-babf-843d70721def-kube-api-access-8289b\") pod \"coredns-674b8bbfcf-gjxrx\" (UID: \"8cabee2a-2179-450d-babf-843d70721def\") " pod="kube-system/coredns-674b8bbfcf-gjxrx"
Jan 24 00:00:02.149026 kubelet[3217]: I0124 00:00:02.148736 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwcqv\" (UniqueName: \"kubernetes.io/projected/37f635e6-9d73-41e3-ac25-e030d9b2101d-kube-api-access-lwcqv\") pod \"calico-apiserver-5dd9d484d4-qgr74\" (UID: \"37f635e6-9d73-41e3-ac25-e030d9b2101d\") " pod="calico-apiserver/calico-apiserver-5dd9d484d4-qgr74"
Jan 24 00:00:02.149149 kubelet[3217]: I0124 00:00:02.148766 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92b832a2-d5a5-4983-bbcd-e800e9df0595-whisker-ca-bundle\") pod \"whisker-77487d64ff-bmshx\" (UID: \"92b832a2-d5a5-4983-bbcd-e800e9df0595\") " pod="calico-system/whisker-77487d64ff-bmshx"
Jan 24 00:00:02.149149 kubelet[3217]: I0124 00:00:02.148782 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd7v9\" (UniqueName: \"kubernetes.io/projected/92b832a2-d5a5-4983-bbcd-e800e9df0595-kube-api-access-bd7v9\") pod \"whisker-77487d64ff-bmshx\" (UID: \"92b832a2-d5a5-4983-bbcd-e800e9df0595\") " pod="calico-system/whisker-77487d64ff-bmshx"
Jan 24 00:00:02.149149 kubelet[3217]: I0124 00:00:02.148803 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz4fp\" (UniqueName: \"kubernetes.io/projected/237e41c6-ec2d-4a8d-bb7d-ca837318e8f7-kube-api-access-sz4fp\") pod \"calico-apiserver-5dd9d484d4-bprcl\" (UID: \"237e41c6-ec2d-4a8d-bb7d-ca837318e8f7\") " pod="calico-apiserver/calico-apiserver-5dd9d484d4-bprcl"
Jan 24 00:00:02.149149 kubelet[3217]: I0124 00:00:02.148820 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/37f635e6-9d73-41e3-ac25-e030d9b2101d-calico-apiserver-certs\") pod \"calico-apiserver-5dd9d484d4-qgr74\" (UID: \"37f635e6-9d73-41e3-ac25-e030d9b2101d\") " pod="calico-apiserver/calico-apiserver-5dd9d484d4-qgr74"
Jan 24 00:00:02.149149 kubelet[3217]: I0124 00:00:02.148837 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2024f333-ad36-464d-817d-816658048dd9-config-volume\") pod \"coredns-674b8bbfcf-bjd2b\" (UID: \"2024f333-ad36-464d-817d-816658048dd9\") " pod="kube-system/coredns-674b8bbfcf-bjd2b"
Jan 24 00:00:02.149256 kubelet[3217]: I0124 00:00:02.148853 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9390c20d-0be8-4dfe-954e-634e25852cb2-goldmane-ca-bundle\") pod \"goldmane-666569f655-65kmp\" (UID: \"9390c20d-0be8-4dfe-954e-634e25852cb2\") " pod="calico-system/goldmane-666569f655-65kmp"
Jan 24 00:00:02.149256 kubelet[3217]: I0124 00:00:02.148873 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a37d52d3-c228-4df6-b0fc-c5d23ff527d2-tigera-ca-bundle\") pod \"calico-kube-controllers-754bb44d48-hhlr2\" (UID: \"a37d52d3-c228-4df6-b0fc-c5d23ff527d2\") " pod="calico-system/calico-kube-controllers-754bb44d48-hhlr2"
Jan 24 00:00:02.149256 kubelet[3217]: I0124 00:00:02.148887 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjlsc\" (UniqueName: \"kubernetes.io/projected/a37d52d3-c228-4df6-b0fc-c5d23ff527d2-kube-api-access-hjlsc\") pod \"calico-kube-controllers-754bb44d48-hhlr2\" (UID: \"a37d52d3-c228-4df6-b0fc-c5d23ff527d2\") " pod="calico-system/calico-kube-controllers-754bb44d48-hhlr2"
Jan 24 00:00:02.149256 kubelet[3217]: I0124 00:00:02.148924 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9390c20d-0be8-4dfe-954e-634e25852cb2-config\") pod \"goldmane-666569f655-65kmp\" (UID: \"9390c20d-0be8-4dfe-954e-634e25852cb2\") " pod="calico-system/goldmane-666569f655-65kmp"
Jan 24 00:00:02.149256 kubelet[3217]: I0124 00:00:02.148943 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9390c20d-0be8-4dfe-954e-634e25852cb2-goldmane-key-pair\") pod \"goldmane-666569f655-65kmp\" (UID: \"9390c20d-0be8-4dfe-954e-634e25852cb2\") " pod="calico-system/goldmane-666569f655-65kmp"
Jan 24 00:00:02.149364 kubelet[3217]: I0124 00:00:02.148958 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xpv7\" (UniqueName: \"kubernetes.io/projected/9390c20d-0be8-4dfe-954e-634e25852cb2-kube-api-access-7xpv7\") pod \"goldmane-666569f655-65kmp\" (UID: \"9390c20d-0be8-4dfe-954e-634e25852cb2\") " pod="calico-system/goldmane-666569f655-65kmp"
Jan 24 00:00:02.353745 containerd[1736]: time="2026-01-24T00:00:02.353635728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77487d64ff-bmshx,Uid:92b832a2-d5a5-4983-bbcd-e800e9df0595,Namespace:calico-system,Attempt:0,}"
Jan 24 00:00:02.387812 containerd[1736]: time="2026-01-24T00:00:02.387458466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gjxrx,Uid:8cabee2a-2179-450d-babf-843d70721def,Namespace:kube-system,Attempt:0,}"
Jan 24 00:00:02.391556 containerd[1736]: time="2026-01-24T00:00:02.391524828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bjd2b,Uid:2024f333-ad36-464d-817d-816658048dd9,Namespace:kube-system,Attempt:0,}"
Jan 24 00:00:02.395762 containerd[1736]: time="2026-01-24T00:00:02.395721590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-65kmp,Uid:9390c20d-0be8-4dfe-954e-634e25852cb2,Namespace:calico-system,Attempt:0,}"
Jan 24 00:00:02.408193 containerd[1736]: time="2026-01-24T00:00:02.407967077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd9d484d4-bprcl,Uid:237e41c6-ec2d-4a8d-bb7d-ca837318e8f7,Namespace:calico-apiserver,Attempt:0,}"
Jan 24 00:00:02.417510 containerd[1736]: time="2026-01-24T00:00:02.417318162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd9d484d4-qgr74,Uid:37f635e6-9d73-41e3-ac25-e030d9b2101d,Namespace:calico-apiserver,Attempt:0,}"
Jan 24 00:00:02.422358 containerd[1736]: time="2026-01-24T00:00:02.422318604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-754bb44d48-hhlr2,Uid:a37d52d3-c228-4df6-b0fc-c5d23ff527d2,Namespace:calico-system,Attempt:0,}"
Jan 24 00:00:02.469107 containerd[1736]: time="2026-01-24T00:00:02.469064429Z" level=error msg="Failed to destroy network for sandbox \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.469686 containerd[1736]: time="2026-01-24T00:00:02.469535429Z" level=error msg="encountered an error cleaning up failed sandbox \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.469686 containerd[1736]: time="2026-01-24T00:00:02.469588349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77487d64ff-bmshx,Uid:92b832a2-d5a5-4983-bbcd-e800e9df0595,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.470503 kubelet[3217]: E0124 00:00:02.469822 3217 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.470503 kubelet[3217]: E0124 00:00:02.469895 3217 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77487d64ff-bmshx"
Jan 24 00:00:02.470503 kubelet[3217]: E0124 00:00:02.469915 3217 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77487d64ff-bmshx"
Jan 24 00:00:02.470602 kubelet[3217]: E0124 00:00:02.469977 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-77487d64ff-bmshx_calico-system(92b832a2-d5a5-4983-bbcd-e800e9df0595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-77487d64ff-bmshx_calico-system(92b832a2-d5a5-4983-bbcd-e800e9df0595)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77487d64ff-bmshx" podUID="92b832a2-d5a5-4983-bbcd-e800e9df0595"
Jan 24 00:00:02.629093 containerd[1736]: time="2026-01-24T00:00:02.628815354Z" level=error msg="Failed to destroy network for sandbox \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.629540 containerd[1736]: time="2026-01-24T00:00:02.629489234Z" level=error msg="encountered an error cleaning up failed sandbox \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.630735 containerd[1736]: time="2026-01-24T00:00:02.630484595Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gjxrx,Uid:8cabee2a-2179-450d-babf-843d70721def,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.631609 kubelet[3217]: E0124 00:00:02.631185 3217 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.631609 kubelet[3217]: E0124 00:00:02.631249 3217 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gjxrx"
Jan 24 00:00:02.631609 kubelet[3217]: E0124 00:00:02.631267 3217 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gjxrx"
Jan 24 00:00:02.631779 kubelet[3217]: E0124 00:00:02.631316 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gjxrx_kube-system(8cabee2a-2179-450d-babf-843d70721def)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gjxrx_kube-system(8cabee2a-2179-450d-babf-843d70721def)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gjxrx" podUID="8cabee2a-2179-450d-babf-843d70721def"
Jan 24 00:00:02.716426 containerd[1736]: time="2026-01-24T00:00:02.716358080Z" level=error msg="Failed to destroy network for sandbox \"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.716783 containerd[1736]: time="2026-01-24T00:00:02.716751801Z" level=error msg="encountered an error cleaning up failed sandbox \"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.716831 containerd[1736]: time="2026-01-24T00:00:02.716816241Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bjd2b,Uid:2024f333-ad36-464d-817d-816658048dd9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.717079 kubelet[3217]: E0124 00:00:02.717035 3217 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.717133 kubelet[3217]: E0124 00:00:02.717098 3217 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bjd2b"
Jan 24 00:00:02.717133 kubelet[3217]: E0124 00:00:02.717120 3217 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bjd2b"
Jan 24 00:00:02.717195 kubelet[3217]: E0124 00:00:02.717168 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bjd2b_kube-system(2024f333-ad36-464d-817d-816658048dd9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bjd2b_kube-system(2024f333-ad36-464d-817d-816658048dd9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bjd2b" podUID="2024f333-ad36-464d-817d-816658048dd9"
Jan 24 00:00:02.734522 containerd[1736]: time="2026-01-24T00:00:02.734472730Z" level=error msg="Failed to destroy network for sandbox \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.734989 containerd[1736]: time="2026-01-24T00:00:02.734963570Z" level=error msg="encountered an error cleaning up failed sandbox \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.735108 containerd[1736]: time="2026-01-24T00:00:02.735086210Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-65kmp,Uid:9390c20d-0be8-4dfe-954e-634e25852cb2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.735446 kubelet[3217]: E0124 00:00:02.735387 3217 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.735519 kubelet[3217]: E0124 00:00:02.735465 3217 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-65kmp"
Jan 24 00:00:02.735519 kubelet[3217]: E0124 00:00:02.735496 3217 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-65kmp"
Jan 24 00:00:02.737827 kubelet[3217]: E0124 00:00:02.737242 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-65kmp_calico-system(9390c20d-0be8-4dfe-954e-634e25852cb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-65kmp_calico-system(9390c20d-0be8-4dfe-954e-634e25852cb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-65kmp" podUID="9390c20d-0be8-4dfe-954e-634e25852cb2"
Jan 24 00:00:02.743609 containerd[1736]: time="2026-01-24T00:00:02.743561175Z" level=error msg="Failed to destroy network for sandbox \"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.745212 containerd[1736]: time="2026-01-24T00:00:02.745169936Z" level=error msg="encountered an error cleaning up failed sandbox \"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.746856 containerd[1736]: time="2026-01-24T00:00:02.745275216Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd9d484d4-bprcl,Uid:237e41c6-ec2d-4a8d-bb7d-ca837318e8f7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.746964 kubelet[3217]: E0124 00:00:02.745705 3217 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.746964 kubelet[3217]: E0124 00:00:02.745757 3217 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dd9d484d4-bprcl"
Jan 24 00:00:02.746964 kubelet[3217]: E0124 00:00:02.745777 3217 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dd9d484d4-bprcl"
Jan 24 00:00:02.747049 kubelet[3217]: E0124 00:00:02.745829 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dd9d484d4-bprcl_calico-apiserver(237e41c6-ec2d-4a8d-bb7d-ca837318e8f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dd9d484d4-bprcl_calico-apiserver(237e41c6-ec2d-4a8d-bb7d-ca837318e8f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dd9d484d4-bprcl" podUID="237e41c6-ec2d-4a8d-bb7d-ca837318e8f7"
Jan 24 00:00:02.749087 containerd[1736]: time="2026-01-24T00:00:02.748942218Z" level=error msg="Failed to destroy network for sandbox \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.749367 containerd[1736]: time="2026-01-24T00:00:02.749339298Z" level=error msg="encountered an error cleaning up failed sandbox \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.749559 containerd[1736]: time="2026-01-24T00:00:02.749466538Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-754bb44d48-hhlr2,Uid:a37d52d3-c228-4df6-b0fc-c5d23ff527d2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.749683 kubelet[3217]: E0124 00:00:02.749652 3217 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.749737 kubelet[3217]: E0124 00:00:02.749699 3217 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-754bb44d48-hhlr2"
Jan 24 00:00:02.749737 kubelet[3217]: E0124 00:00:02.749720 3217 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-754bb44d48-hhlr2"
Jan 24 00:00:02.749793 kubelet[3217]: E0124 00:00:02.749774 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-754bb44d48-hhlr2_calico-system(a37d52d3-c228-4df6-b0fc-c5d23ff527d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-754bb44d48-hhlr2_calico-system(a37d52d3-c228-4df6-b0fc-c5d23ff527d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-754bb44d48-hhlr2" podUID="a37d52d3-c228-4df6-b0fc-c5d23ff527d2"
Jan 24 00:00:02.750261 containerd[1736]: time="2026-01-24T00:00:02.750228578Z" level=error msg="Failed to destroy network for sandbox \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.750517 containerd[1736]: time="2026-01-24T00:00:02.750489418Z" level=error msg="encountered an error cleaning up failed sandbox \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.750566 containerd[1736]: time="2026-01-24T00:00:02.750538018Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd9d484d4-qgr74,Uid:37f635e6-9d73-41e3-ac25-e030d9b2101d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.750713 kubelet[3217]: E0124 00:00:02.750688 3217 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.750882 kubelet[3217]: E0124 00:00:02.750784 3217 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dd9d484d4-qgr74"
Jan 24 00:00:02.750882 kubelet[3217]: E0124 00:00:02.750808 3217 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dd9d484d4-qgr74"
Jan 24 00:00:02.750882 kubelet[3217]: E0124 00:00:02.750847 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dd9d484d4-qgr74_calico-apiserver(37f635e6-9d73-41e3-ac25-e030d9b2101d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dd9d484d4-qgr74_calico-apiserver(37f635e6-9d73-41e3-ac25-e030d9b2101d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dd9d484d4-qgr74" podUID="37f635e6-9d73-41e3-ac25-e030d9b2101d"
Jan 24 00:00:02.926125 systemd[1]: Created slice kubepods-besteffort-pod1900a277_348f_4eb2_aa7c_7d2406a64ec8.slice - libcontainer container kubepods-besteffort-pod1900a277_348f_4eb2_aa7c_7d2406a64ec8.slice.
Jan 24 00:00:02.929329 containerd[1736]: time="2026-01-24T00:00:02.929293833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmrrm,Uid:1900a277-348f-4eb2-aa7c-7d2406a64ec8,Namespace:calico-system,Attempt:0,}"
Jan 24 00:00:02.989060 containerd[1736]: time="2026-01-24T00:00:02.989012665Z" level=error msg="Failed to destroy network for sandbox \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.989376 containerd[1736]: time="2026-01-24T00:00:02.989340385Z" level=error msg="encountered an error cleaning up failed sandbox \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.989449 containerd[1736]: time="2026-01-24T00:00:02.989426985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmrrm,Uid:1900a277-348f-4eb2-aa7c-7d2406a64ec8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.990039 kubelet[3217]: E0124 00:00:02.989606 3217 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:02.990039 kubelet[3217]: E0124 00:00:02.989658 3217 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmrrm"
Jan 24 00:00:02.990039 kubelet[3217]: E0124 00:00:02.989682 3217 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmrrm"
Jan 24 00:00:02.990160 kubelet[3217]: E0124 00:00:02.989723 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mmrrm_calico-system(1900a277-348f-4eb2-aa7c-7d2406a64ec8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mmrrm_calico-system(1900a277-348f-4eb2-aa7c-7d2406a64ec8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mmrrm" podUID="1900a277-348f-4eb2-aa7c-7d2406a64ec8"
Jan 24 00:00:03.059021 containerd[1736]: time="2026-01-24T00:00:03.058985222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Jan 24 00:00:03.060419 kubelet[3217]: I0124 00:00:03.059654 3217 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e"
Jan 24 00:00:03.061610 containerd[1736]: time="2026-01-24T00:00:03.061577024Z" level=info msg="StopPodSandbox for \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\""
Jan 24 00:00:03.061772 containerd[1736]: time="2026-01-24T00:00:03.061737744Z" level=info msg="Ensure that sandbox 4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e in task-service has been cleanup successfully"
Jan 24 00:00:03.064019 kubelet[3217]: I0124 00:00:03.063897 3217 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d"
Jan 24 00:00:03.065031 containerd[1736]: time="2026-01-24T00:00:03.064846625Z" level=info msg="StopPodSandbox for \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\""
Jan 24 00:00:03.066308 containerd[1736]: time="2026-01-24T00:00:03.065886786Z" level=info msg="Ensure that sandbox ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d in task-service has been cleanup successfully"
Jan 24 00:00:03.070228 kubelet[3217]: I0124 00:00:03.070009 3217 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985"
Jan 24 00:00:03.077359 containerd[1736]: time="2026-01-24T00:00:03.075926471Z" level=info msg="StopPodSandbox for \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\""
Jan 24 00:00:03.077359 containerd[1736]: time="2026-01-24T00:00:03.076083111Z" level=info msg="Ensure that sandbox 8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985 in task-service has been cleanup successfully"
Jan 24 00:00:03.077359 containerd[1736]: time="2026-01-24T00:00:03.076470792Z" level=info msg="StopPodSandbox for \"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\""
Jan 24 00:00:03.077359 containerd[1736]: time="2026-01-24T00:00:03.076598232Z" level=info msg="Ensure that sandbox 7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf in task-service has been cleanup successfully"
Jan 24 00:00:03.077515 kubelet[3217]: I0124 00:00:03.076034 3217 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf"
Jan 24 00:00:03.083922 containerd[1736]: time="2026-01-24T00:00:03.083869595Z" level=info msg="StopPodSandbox for \"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\""
Jan 24 00:00:03.084029 containerd[1736]: time="2026-01-24T00:00:03.084008916Z" level=info msg="Ensure that sandbox ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5 in task-service has been cleanup successfully"
Jan 24 00:00:03.084200 kubelet[3217]: I0124 00:00:03.083457 3217 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5"
Jan 24 00:00:03.090202 kubelet[3217]: I0124 00:00:03.089238 3217 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3"
Jan 24 00:00:03.091282 containerd[1736]: time="2026-01-24T00:00:03.091081439Z" level=info msg="StopPodSandbox for \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\""
Jan 24 00:00:03.092047 containerd[1736]: time="2026-01-24T00:00:03.091864640Z" level=info msg="Ensure that sandbox 3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3 in task-service has been cleanup successfully"
Jan 24 00:00:03.098257 kubelet[3217]: I0124 00:00:03.098209 3217 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869"
Jan 24 00:00:03.098767 containerd[1736]: time="2026-01-24T00:00:03.098669003Z" level=info msg="StopPodSandbox for \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\""
Jan 24 00:00:03.099558 containerd[1736]: time="2026-01-24T00:00:03.099488124Z" level=info msg="Ensure that sandbox 408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869 in task-service has been cleanup successfully"
Jan 24 00:00:03.112798 kubelet[3217]: I0124 00:00:03.112762 3217 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121"
Jan 24 00:00:03.113829 containerd[1736]: time="2026-01-24T00:00:03.113791571Z" level=info msg="StopPodSandbox for \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\""
Jan 24 00:00:03.113980 containerd[1736]: time="2026-01-24T00:00:03.113943251Z" level=info msg="Ensure that sandbox 88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121 in task-service has been cleanup successfully"
Jan 24 00:00:03.165292 containerd[1736]: time="2026-01-24T00:00:03.165119759Z" level=error msg="StopPodSandbox for \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\" failed" error="failed to destroy network for sandbox \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:03.165292 containerd[1736]: time="2026-01-24T00:00:03.165259079Z" level=error msg="StopPodSandbox for \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\" failed" error="failed to destroy network for sandbox \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:00:03.165596 kubelet[3217]: E0124 00:00:03.165556 3217 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e"
Jan 24 00:00:03.165860 kubelet[3217]: E0124 00:00:03.165613 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e"}
Jan 24 00:00:03.165860 kubelet[3217]: E0124 00:00:03.165678 3217 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a37d52d3-c228-4df6-b0fc-c5d23ff527d2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 24 00:00:03.165860 kubelet[3217]: E0124 00:00:03.165696 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a37d52d3-c228-4df6-b0fc-c5d23ff527d2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-754bb44d48-hhlr2" podUID="a37d52d3-c228-4df6-b0fc-c5d23ff527d2"
Jan 24 00:00:03.165860 kubelet[3217]: E0124 00:00:03.165733 3217 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d"
Jan 24 00:00:03.165860 kubelet[3217]: E0124 00:00:03.165749 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d"}
Jan 24 00:00:03.166018 kubelet[3217]: E0124 00:00:03.165766 3217 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92b832a2-d5a5-4983-bbcd-e800e9df0595\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 24 00:00:03.166018 kubelet[3217]: E0124 00:00:03.165784 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92b832a2-d5a5-4983-bbcd-e800e9df0595\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77487d64ff-bmshx" podUID="92b832a2-d5a5-4983-bbcd-e800e9df0595"
Jan 24 00:00:03.183765 containerd[1736]: time="2026-01-24T00:00:03.182505088Z" level=error msg="StopPodSandbox for \"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\" failed" error="failed to destroy network for sandbox
\"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:00:03.183845 kubelet[3217]: E0124 00:00:03.182696 3217 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" Jan 24 00:00:03.183845 kubelet[3217]: E0124 00:00:03.182743 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5"} Jan 24 00:00:03.183845 kubelet[3217]: E0124 00:00:03.182772 3217 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2024f333-ad36-464d-817d-816658048dd9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:00:03.183845 kubelet[3217]: E0124 00:00:03.182793 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2024f333-ad36-464d-817d-816658048dd9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bjd2b" podUID="2024f333-ad36-464d-817d-816658048dd9" Jan 24 00:00:03.184514 containerd[1736]: time="2026-01-24T00:00:03.184083449Z" level=error msg="StopPodSandbox for \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\" failed" error="failed to destroy network for sandbox \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:00:03.184601 kubelet[3217]: E0124 00:00:03.184236 3217 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" Jan 24 00:00:03.184601 kubelet[3217]: E0124 00:00:03.184265 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3"} Jan 24 00:00:03.184601 kubelet[3217]: E0124 00:00:03.184285 3217 kuberuntime_manager.go:1161] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"37f635e6-9d73-41e3-ac25-e030d9b2101d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:00:03.184601 kubelet[3217]: E0124 00:00:03.184316 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"37f635e6-9d73-41e3-ac25-e030d9b2101d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dd9d484d4-qgr74" podUID="37f635e6-9d73-41e3-ac25-e030d9b2101d" Jan 24 00:00:03.193450 containerd[1736]: time="2026-01-24T00:00:03.193410054Z" level=error msg="StopPodSandbox for \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\" failed" error="failed to destroy network for sandbox \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:00:03.193769 kubelet[3217]: E0124 00:00:03.193737 3217 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" Jan 24 00:00:03.193808 kubelet[3217]: E0124 00:00:03.193787 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985"} Jan 24 00:00:03.193838 kubelet[3217]: E0124 00:00:03.193814 3217 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1900a277-348f-4eb2-aa7c-7d2406a64ec8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:00:03.193879 kubelet[3217]: E0124 00:00:03.193834 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1900a277-348f-4eb2-aa7c-7d2406a64ec8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mmrrm" 
podUID="1900a277-348f-4eb2-aa7c-7d2406a64ec8" Jan 24 00:00:03.194594 containerd[1736]: time="2026-01-24T00:00:03.194561054Z" level=error msg="StopPodSandbox for \"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\" failed" error="failed to destroy network for sandbox \"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:00:03.194832 kubelet[3217]: E0124 00:00:03.194698 3217 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" Jan 24 00:00:03.194832 kubelet[3217]: E0124 00:00:03.194728 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf"} Jan 24 00:00:03.194832 kubelet[3217]: E0124 00:00:03.194751 3217 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"237e41c6-ec2d-4a8d-bb7d-ca837318e8f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:00:03.194832 kubelet[3217]: E0124 00:00:03.194767 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"237e41c6-ec2d-4a8d-bb7d-ca837318e8f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dd9d484d4-bprcl" podUID="237e41c6-ec2d-4a8d-bb7d-ca837318e8f7" Jan 24 00:00:03.195845 containerd[1736]: time="2026-01-24T00:00:03.195805535Z" level=error msg="StopPodSandbox for \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\" failed" error="failed to destroy network for sandbox \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:00:03.195983 kubelet[3217]: E0124 00:00:03.195956 3217 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" Jan 24 00:00:03.196023 
kubelet[3217]: E0124 00:00:03.195989 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869"} Jan 24 00:00:03.196023 kubelet[3217]: E0124 00:00:03.196011 3217 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9390c20d-0be8-4dfe-954e-634e25852cb2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:00:03.196085 kubelet[3217]: E0124 00:00:03.196027 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9390c20d-0be8-4dfe-954e-634e25852cb2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-65kmp" podUID="9390c20d-0be8-4dfe-954e-634e25852cb2" Jan 24 00:00:03.199051 containerd[1736]: time="2026-01-24T00:00:03.199020177Z" level=error msg="StopPodSandbox for \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\" failed" error="failed to destroy network for sandbox \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:00:03.199228 kubelet[3217]: E0124 00:00:03.199200 3217 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" Jan 24 00:00:03.199269 kubelet[3217]: E0124 00:00:03.199245 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121"} Jan 24 00:00:03.199293 kubelet[3217]: E0124 00:00:03.199266 3217 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8cabee2a-2179-450d-babf-843d70721def\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:00:03.199293 kubelet[3217]: E0124 00:00:03.199284 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8cabee2a-2179-450d-babf-843d70721def\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gjxrx" podUID="8cabee2a-2179-450d-babf-843d70721def" Jan 24 00:00:13.922361 containerd[1736]: time="2026-01-24T00:00:13.922058940Z" level=info msg="StopPodSandbox for \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\"" Jan 24 00:00:13.943402 containerd[1736]: time="2026-01-24T00:00:13.943331511Z" level=error msg="StopPodSandbox for \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\" failed" error="failed to destroy network for sandbox \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:00:13.943599 kubelet[3217]: E0124 00:00:13.943556 3217 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" Jan 24 00:00:13.943865 kubelet[3217]: E0124 00:00:13.943610 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d"} Jan 24 00:00:13.943865 kubelet[3217]: E0124 00:00:13.943646 3217 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92b832a2-d5a5-4983-bbcd-e800e9df0595\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:00:13.943865 kubelet[3217]: E0124 00:00:13.943667 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92b832a2-d5a5-4983-bbcd-e800e9df0595\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77487d64ff-bmshx" podUID="92b832a2-d5a5-4983-bbcd-e800e9df0595" Jan 24 00:00:14.922062 containerd[1736]: time="2026-01-24T00:00:14.921943172Z" level=info msg="StopPodSandbox for \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\"" Jan 24 00:00:14.943621 containerd[1736]: time="2026-01-24T00:00:14.943513822Z" level=error msg="StopPodSandbox for \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\" failed" error="failed to destroy network for sandbox \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:00:14.944159 kubelet[3217]: E0124 00:00:14.943717 3217 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" Jan 24 00:00:14.944159 kubelet[3217]: E0124 00:00:14.943766 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985"} Jan 24 00:00:14.944159 kubelet[3217]: E0124 00:00:14.943796 3217 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1900a277-348f-4eb2-aa7c-7d2406a64ec8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:00:14.944159 kubelet[3217]: E0124 00:00:14.943817 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1900a277-348f-4eb2-aa7c-7d2406a64ec8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mmrrm" podUID="1900a277-348f-4eb2-aa7c-7d2406a64ec8" Jan 24 00:00:15.924982 containerd[1736]: time="2026-01-24T00:00:15.924679145Z" level=info msg="StopPodSandbox for \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\"" Jan 24 00:00:15.925207 containerd[1736]: time="2026-01-24T00:00:15.925173025Z" level=info msg="StopPodSandbox for \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\"" Jan 24 00:00:15.927846 containerd[1736]: time="2026-01-24T00:00:15.926903506Z" level=info msg="StopPodSandbox for \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\"" Jan 24 00:00:15.967566 containerd[1736]: time="2026-01-24T00:00:15.967518884Z" level=error msg="StopPodSandbox for \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\" failed" error="failed to destroy network for sandbox \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:00:15.969585 kubelet[3217]: E0124 00:00:15.968029 3217 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" Jan 24 00:00:15.969585 kubelet[3217]: E0124 00:00:15.968080 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869"} Jan 24 00:00:15.969585 kubelet[3217]: E0124 00:00:15.968371 3217 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9390c20d-0be8-4dfe-954e-634e25852cb2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:00:15.969585 kubelet[3217]: E0124 00:00:15.968411 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9390c20d-0be8-4dfe-954e-634e25852cb2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-65kmp" podUID="9390c20d-0be8-4dfe-954e-634e25852cb2" Jan 24 00:00:15.975417 containerd[1736]: time="2026-01-24T00:00:15.975366048Z" level=error msg="StopPodSandbox for \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\" failed" error="failed to destroy network for sandbox \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:00:15.975728 kubelet[3217]: E0124 00:00:15.975553 3217 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" Jan 24 00:00:15.975728 kubelet[3217]: E0124 00:00:15.975591 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3"} Jan 24 00:00:15.975728 kubelet[3217]: E0124 00:00:15.975630 3217 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"37f635e6-9d73-41e3-ac25-e030d9b2101d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:00:15.975728 kubelet[3217]: E0124 00:00:15.975650 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"37f635e6-9d73-41e3-ac25-e030d9b2101d\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dd9d484d4-qgr74" podUID="37f635e6-9d73-41e3-ac25-e030d9b2101d" Jan 24 00:00:15.980690 containerd[1736]: time="2026-01-24T00:00:15.980024370Z" level=error msg="StopPodSandbox for \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\" failed" error="failed to destroy network for sandbox \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:00:15.980784 kubelet[3217]: E0124 00:00:15.980194 3217 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" Jan 24 00:00:15.980784 kubelet[3217]: E0124 00:00:15.980231 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e"} Jan 24 00:00:15.980784 kubelet[3217]: E0124 00:00:15.980270 3217 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a37d52d3-c228-4df6-b0fc-c5d23ff527d2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:00:15.980784 kubelet[3217]: E0124 00:00:15.980297 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a37d52d3-c228-4df6-b0fc-c5d23ff527d2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-754bb44d48-hhlr2" podUID="a37d52d3-c228-4df6-b0fc-c5d23ff527d2" Jan 24 00:00:16.924055 containerd[1736]: time="2026-01-24T00:00:16.923503556Z" level=info msg="StopPodSandbox for \"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\"" Jan 24 00:00:16.924055 containerd[1736]: time="2026-01-24T00:00:16.923759077Z" level=info msg="StopPodSandbox for \"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\"" Jan 24 00:00:16.953714 containerd[1736]: time="2026-01-24T00:00:16.953573330Z" level=error msg="StopPodSandbox for \"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\" failed" error="failed to destroy network for sandbox 
\"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:00:16.954119 kubelet[3217]: E0124 00:00:16.953954 3217 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" Jan 24 00:00:16.954186 kubelet[3217]: E0124 00:00:16.954134 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf"} Jan 24 00:00:16.954186 kubelet[3217]: E0124 00:00:16.954166 3217 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"237e41c6-ec2d-4a8d-bb7d-ca837318e8f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:00:16.954291 kubelet[3217]: E0124 00:00:16.954187 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"237e41c6-ec2d-4a8d-bb7d-ca837318e8f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dd9d484d4-bprcl" podUID="237e41c6-ec2d-4a8d-bb7d-ca837318e8f7" Jan 24 00:00:16.965074 containerd[1736]: time="2026-01-24T00:00:16.964981135Z" level=error msg="StopPodSandbox for \"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\" failed" error="failed to destroy network for sandbox \"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:00:16.965208 kubelet[3217]: E0124 00:00:16.965161 3217 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" Jan 24 00:00:16.965250 kubelet[3217]: E0124 00:00:16.965206 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5"} Jan 24 00:00:16.965250 kubelet[3217]: E0124 00:00:16.965235 3217 
kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2024f333-ad36-464d-817d-816658048dd9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:00:16.965602 kubelet[3217]: E0124 00:00:16.965255 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2024f333-ad36-464d-817d-816658048dd9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bjd2b" podUID="2024f333-ad36-464d-817d-816658048dd9" Jan 24 00:00:17.925253 containerd[1736]: time="2026-01-24T00:00:17.925208769Z" level=info msg="StopPodSandbox for \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\"" Jan 24 00:00:17.958129 containerd[1736]: time="2026-01-24T00:00:17.958078584Z" level=error msg="StopPodSandbox for \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\" failed" error="failed to destroy network for sandbox \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:00:17.958341 kubelet[3217]: E0124 00:00:17.958286 3217 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" Jan 24 00:00:17.959164 kubelet[3217]: E0124 00:00:17.958358 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121"} Jan 24 00:00:17.959164 kubelet[3217]: E0124 00:00:17.958398 3217 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8cabee2a-2179-450d-babf-843d70721def\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:00:17.959164 kubelet[3217]: E0124 00:00:17.958421 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8cabee2a-2179-450d-babf-843d70721def\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gjxrx" podUID="8cabee2a-2179-450d-babf-843d70721def" Jan 24 00:00:21.402431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2020615482.mount: Deactivated successfully. Jan 24 00:00:21.579064 containerd[1736]: time="2026-01-24T00:00:21.579010980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:00:21.581433 containerd[1736]: time="2026-01-24T00:00:21.581386901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 24 00:00:21.592328 containerd[1736]: time="2026-01-24T00:00:21.592261066Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:00:21.595959 containerd[1736]: time="2026-01-24T00:00:21.595909028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:00:21.597416 containerd[1736]: time="2026-01-24T00:00:21.596443068Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 18.536944846s" Jan 24 00:00:21.597416 containerd[1736]: time="2026-01-24T00:00:21.596474868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 24 00:00:21.614231 containerd[1736]: time="2026-01-24T00:00:21.614191836Z" level=info msg="CreateContainer within sandbox \"f160b6bdda78b7d7f4ea1c6703c998016ef159f6601b55540cfeaded77e19185\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:00:21.671761 containerd[1736]: time="2026-01-24T00:00:21.671631582Z" level=info msg="CreateContainer within sandbox \"f160b6bdda78b7d7f4ea1c6703c998016ef159f6601b55540cfeaded77e19185\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d772b3ff179502a4136fe4e1c264da5b04007165ad74fd4e1d41670bd15970c3\"" Jan 24 00:00:21.673862 containerd[1736]: time="2026-01-24T00:00:21.673428903Z" level=info msg="StartContainer for \"d772b3ff179502a4136fe4e1c264da5b04007165ad74fd4e1d41670bd15970c3\"" Jan 24 00:00:21.706534 systemd[1]: Started cri-containerd-d772b3ff179502a4136fe4e1c264da5b04007165ad74fd4e1d41670bd15970c3.scope - libcontainer container d772b3ff179502a4136fe4e1c264da5b04007165ad74fd4e1d41670bd15970c3. Jan 24 00:00:21.737291 containerd[1736]: time="2026-01-24T00:00:21.737182212Z" level=info msg="StartContainer for \"d772b3ff179502a4136fe4e1c264da5b04007165ad74fd4e1d41670bd15970c3\" returns successfully" Jan 24 00:00:22.067072 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:00:22.067254 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
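
Note: the kubelet entry that follows reports podStartSLOduration=1.467724311 and podStartE2EDuration="30.205363103s" for calico-node-l82sn. Those figures are consistent with the SLO duration being the end-to-end duration minus the image-pull window; a minimal arithmetic sketch in plain Python, using only the m=+ monotonic offsets copied from that entry (purely illustrative, not kubelet code):

# Offsets (seconds since kubelet start) copied from the log entry below.
first_started_pulling = 27.020294483   # firstStartedPulling m=+ offset
last_finished_pulling = 55.757933275   # lastFinishedPulling m=+ offset
pod_start_e2e = 30.205363103           # podStartE2EDuration from the entry

# Image pull window, then the SLO duration with pull time excluded.
image_pull_time = last_finished_pulling - first_started_pulling
pod_start_slo = pod_start_e2e - image_pull_time

print(f"image pull: {image_pull_time:.9f}s")  # 28.737638792s
print(f"SLO:        {pod_start_slo:.9f}s")    # 1.467724311s, as logged

The 28.7 s pull window also matches the preceding "Pulled image ... in 18.536944846s" entry plus the time the pull spent queued behind other image pulls since 23:59:52.
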
Jan 24 00:00:22.206412 kubelet[3217]: I0124 00:00:22.205382 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l82sn" podStartSLOduration=1.467724311 podStartE2EDuration="30.205363103s" podCreationTimestamp="2026-01-23 23:59:52 +0000 UTC" firstStartedPulling="2026-01-23 23:59:52.860876357 +0000 UTC m=+27.020294483" lastFinishedPulling="2026-01-24 00:00:21.598515189 +0000 UTC m=+55.757933275" observedRunningTime="2026-01-24 00:00:22.192588898 +0000 UTC m=+56.352007064" watchObservedRunningTime="2026-01-24 00:00:22.205363103 +0000 UTC m=+56.364781189" Jan 24 00:00:22.213011 containerd[1736]: time="2026-01-24T00:00:22.212973147Z" level=info msg="StopPodSandbox for \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\"" Jan 24 00:00:22.382197 containerd[1736]: 2026-01-24 00:00:22.327 [INFO][4545] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" Jan 24 00:00:22.382197 containerd[1736]: 2026-01-24 00:00:22.328 [INFO][4545] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" iface="eth0" netns="/var/run/netns/cni-babf75c8-c590-30b0-cb0c-e71668accec7" Jan 24 00:00:22.382197 containerd[1736]: 2026-01-24 00:00:22.329 [INFO][4545] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" iface="eth0" netns="/var/run/netns/cni-babf75c8-c590-30b0-cb0c-e71668accec7" Jan 24 00:00:22.382197 containerd[1736]: 2026-01-24 00:00:22.332 [INFO][4545] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" iface="eth0" netns="/var/run/netns/cni-babf75c8-c590-30b0-cb0c-e71668accec7" Jan 24 00:00:22.382197 containerd[1736]: 2026-01-24 00:00:22.333 [INFO][4545] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" Jan 24 00:00:22.382197 containerd[1736]: 2026-01-24 00:00:22.333 [INFO][4545] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" Jan 24 00:00:22.382197 containerd[1736]: 2026-01-24 00:00:22.367 [INFO][4558] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" HandleID="k8s-pod-network.ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" Workload="ci--4081.3.6--n--2a642b76b3-k8s-whisker--77487d64ff--bmshx-eth0" Jan 24 00:00:22.382197 containerd[1736]: 2026-01-24 00:00:22.367 [INFO][4558] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:00:22.382197 containerd[1736]: 2026-01-24 00:00:22.367 [INFO][4558] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:00:22.382197 containerd[1736]: 2026-01-24 00:00:22.376 [WARNING][4558] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" HandleID="k8s-pod-network.ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" Workload="ci--4081.3.6--n--2a642b76b3-k8s-whisker--77487d64ff--bmshx-eth0" Jan 24 00:00:22.382197 containerd[1736]: 2026-01-24 00:00:22.376 [INFO][4558] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" HandleID="k8s-pod-network.ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" Workload="ci--4081.3.6--n--2a642b76b3-k8s-whisker--77487d64ff--bmshx-eth0" Jan 24 00:00:22.382197 containerd[1736]: 2026-01-24 00:00:22.377 [INFO][4558] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:00:22.382197 containerd[1736]: 2026-01-24 00:00:22.380 [INFO][4545] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" Jan 24 00:00:22.383625 containerd[1736]: time="2026-01-24T00:00:22.382576587Z" level=info msg="TearDown network for sandbox \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\" successfully" Jan 24 00:00:22.383625 containerd[1736]: time="2026-01-24T00:00:22.382618147Z" level=info msg="StopPodSandbox for \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\" returns successfully" Jan 24 00:00:22.401175 systemd[1]: run-netns-cni\x2dbabf75c8\x2dc590\x2d30b0\x2dcb0c\x2de71668accec7.mount: Deactivated successfully. Jan 24 00:00:22.471516 kubelet[3217]: I0124 00:00:22.471271 3217 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92b832a2-d5a5-4983-bbcd-e800e9df0595-whisker-ca-bundle\") pod \"92b832a2-d5a5-4983-bbcd-e800e9df0595\" (UID: \"92b832a2-d5a5-4983-bbcd-e800e9df0595\") " Jan 24 00:00:22.471516 kubelet[3217]: I0124 00:00:22.471332 3217 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/92b832a2-d5a5-4983-bbcd-e800e9df0595-whisker-backend-key-pair\") pod \"92b832a2-d5a5-4983-bbcd-e800e9df0595\" (UID: \"92b832a2-d5a5-4983-bbcd-e800e9df0595\") " Jan 24 00:00:22.471516 kubelet[3217]: I0124 00:00:22.471355 3217 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd7v9\" (UniqueName: \"kubernetes.io/projected/92b832a2-d5a5-4983-bbcd-e800e9df0595-kube-api-access-bd7v9\") pod \"92b832a2-d5a5-4983-bbcd-e800e9df0595\" (UID: \"92b832a2-d5a5-4983-bbcd-e800e9df0595\") " Jan 24 00:00:22.472545 kubelet[3217]: I0124 00:00:22.472256 3217 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92b832a2-d5a5-4983-bbcd-e800e9df0595-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "92b832a2-d5a5-4983-bbcd-e800e9df0595" (UID: "92b832a2-d5a5-4983-bbcd-e800e9df0595"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:00:22.477450 kubelet[3217]: I0124 00:00:22.476692 3217 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92b832a2-d5a5-4983-bbcd-e800e9df0595-kube-api-access-bd7v9" (OuterVolumeSpecName: "kube-api-access-bd7v9") pod "92b832a2-d5a5-4983-bbcd-e800e9df0595" (UID: "92b832a2-d5a5-4983-bbcd-e800e9df0595"). InnerVolumeSpecName "kube-api-access-bd7v9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:00:22.477633 kubelet[3217]: I0124 00:00:22.477606 3217 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92b832a2-d5a5-4983-bbcd-e800e9df0595-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "92b832a2-d5a5-4983-bbcd-e800e9df0595" (UID: "92b832a2-d5a5-4983-bbcd-e800e9df0595"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:00:22.477892 systemd[1]: var-lib-kubelet-pods-92b832a2\x2dd5a5\x2d4983\x2dbbcd\x2de800e9df0595-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbd7v9.mount: Deactivated successfully. Jan 24 00:00:22.482940 systemd[1]: var-lib-kubelet-pods-92b832a2\x2dd5a5\x2d4983\x2dbbcd\x2de800e9df0595-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 24 00:00:22.572334 kubelet[3217]: I0124 00:00:22.572289 3217 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/92b832a2-d5a5-4983-bbcd-e800e9df0595-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-2a642b76b3\" DevicePath \"\"" Jan 24 00:00:22.572334 kubelet[3217]: I0124 00:00:22.572325 3217 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bd7v9\" (UniqueName: \"kubernetes.io/projected/92b832a2-d5a5-4983-bbcd-e800e9df0595-kube-api-access-bd7v9\") on node \"ci-4081.3.6-n-2a642b76b3\" DevicePath \"\"" Jan 24 00:00:22.572334 kubelet[3217]: I0124 00:00:22.572336 3217 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92b832a2-d5a5-4983-bbcd-e800e9df0595-whisker-ca-bundle\") on node \"ci-4081.3.6-n-2a642b76b3\" DevicePath \"\"" Jan 24 00:00:23.166369 systemd[1]: Removed slice kubepods-besteffort-pod92b832a2_d5a5_4983_bbcd_e800e9df0595.slice - libcontainer container kubepods-besteffort-pod92b832a2_d5a5_4983_bbcd_e800e9df0595.slice. Jan 24 00:00:23.289025 systemd[1]: Created slice kubepods-besteffort-podf462240a_0a7b_4fa9_a623_1df80e2e9a5c.slice - libcontainer container kubepods-besteffort-podf462240a_0a7b_4fa9_a623_1df80e2e9a5c.slice. 
Jan 24 00:00:23.377810 kubelet[3217]: I0124 00:00:23.377682 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f462240a-0a7b-4fa9-a623-1df80e2e9a5c-whisker-ca-bundle\") pod \"whisker-7dfdc764f5-mkdn7\" (UID: \"f462240a-0a7b-4fa9-a623-1df80e2e9a5c\") " pod="calico-system/whisker-7dfdc764f5-mkdn7" Jan 24 00:00:23.377810 kubelet[3217]: I0124 00:00:23.377731 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hj8n\" (UniqueName: \"kubernetes.io/projected/f462240a-0a7b-4fa9-a623-1df80e2e9a5c-kube-api-access-2hj8n\") pod \"whisker-7dfdc764f5-mkdn7\" (UID: \"f462240a-0a7b-4fa9-a623-1df80e2e9a5c\") " pod="calico-system/whisker-7dfdc764f5-mkdn7" Jan 24 00:00:23.377810 kubelet[3217]: I0124 00:00:23.377756 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f462240a-0a7b-4fa9-a623-1df80e2e9a5c-whisker-backend-key-pair\") pod \"whisker-7dfdc764f5-mkdn7\" (UID: \"f462240a-0a7b-4fa9-a623-1df80e2e9a5c\") " pod="calico-system/whisker-7dfdc764f5-mkdn7" Jan 24 00:00:23.596425 containerd[1736]: time="2026-01-24T00:00:23.594725342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dfdc764f5-mkdn7,Uid:f462240a-0a7b-4fa9-a623-1df80e2e9a5c,Namespace:calico-system,Attempt:0,}" Jan 24 00:00:23.924223 kubelet[3217]: I0124 00:00:23.924096 3217 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92b832a2-d5a5-4983-bbcd-e800e9df0595" path="/var/lib/kubelet/pods/92b832a2-d5a5-4983-bbcd-e800e9df0595/volumes" Jan 24 00:00:23.950460 systemd-networkd[1363]: cali215e343d5a5: Link UP Jan 24 00:00:23.950999 systemd-networkd[1363]: cali215e343d5a5: Gained carrier Jan 24 00:00:23.952490 kernel: bpftool[4742]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 24 00:00:23.969484 containerd[1736]: 2026-01-24 00:00:23.665 [INFO][4691] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:00:23.969484 containerd[1736]: 2026-01-24 00:00:23.720 [INFO][4691] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2a642b76b3-k8s-whisker--7dfdc764f5--mkdn7-eth0 whisker-7dfdc764f5- calico-system f462240a-0a7b-4fa9-a623-1df80e2e9a5c 978 0 2026-01-24 00:00:23 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7dfdc764f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-2a642b76b3 whisker-7dfdc764f5-mkdn7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali215e343d5a5 [] [] }} ContainerID="c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a" Namespace="calico-system" Pod="whisker-7dfdc764f5-mkdn7" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-whisker--7dfdc764f5--mkdn7-" Jan 24 00:00:23.969484 containerd[1736]: 2026-01-24 00:00:23.720 [INFO][4691] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a" Namespace="calico-system" Pod="whisker-7dfdc764f5-mkdn7" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-whisker--7dfdc764f5--mkdn7-eth0" Jan 24 00:00:23.969484 containerd[1736]: 2026-01-24 00:00:23.762 [INFO][4705] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a" HandleID="k8s-pod-network.c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a" Workload="ci--4081.3.6--n--2a642b76b3-k8s-whisker--7dfdc764f5--mkdn7-eth0" Jan 24 00:00:23.969484 containerd[1736]: 2026-01-24 00:00:23.764 [INFO][4705] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a" HandleID="k8s-pod-network.c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a" Workload="ci--4081.3.6--n--2a642b76b3-k8s-whisker--7dfdc764f5--mkdn7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab8c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-2a642b76b3", "pod":"whisker-7dfdc764f5-mkdn7", "timestamp":"2026-01-24 00:00:23.762715064 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2a642b76b3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:00:23.969484 containerd[1736]: 2026-01-24 00:00:23.764 [INFO][4705] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:00:23.969484 containerd[1736]: 2026-01-24 00:00:23.764 [INFO][4705] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:00:23.969484 containerd[1736]: 2026-01-24 00:00:23.764 [INFO][4705] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2a642b76b3' Jan 24 00:00:23.969484 containerd[1736]: 2026-01-24 00:00:23.780 [INFO][4705] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:23.969484 containerd[1736]: 2026-01-24 00:00:23.788 [INFO][4705] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:23.969484 containerd[1736]: 2026-01-24 00:00:23.794 [INFO][4705] ipam/ipam.go 511: Trying affinity for 192.168.5.128/26 host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:23.969484 containerd[1736]: 2026-01-24 00:00:23.796 [INFO][4705] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.128/26 host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:23.969484 containerd[1736]: 2026-01-24 00:00:23.798 [INFO][4705] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.128/26 host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:23.969484 containerd[1736]: 2026-01-24 00:00:23.798 [INFO][4705] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.128/26 handle="k8s-pod-network.c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:23.969484 containerd[1736]: 2026-01-24 00:00:23.799 [INFO][4705] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a Jan 24 00:00:23.969484 containerd[1736]: 2026-01-24 00:00:23.805 [INFO][4705] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.128/26 handle="k8s-pod-network.c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:23.969484 containerd[1736]: 2026-01-24 00:00:23.819 [INFO][4705] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.129/26] block=192.168.5.128/26 handle="k8s-pod-network.c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a" 
host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:23.969484 containerd[1736]: 2026-01-24 00:00:23.819 [INFO][4705] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.129/26] handle="k8s-pod-network.c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:23.969484 containerd[1736]: 2026-01-24 00:00:23.819 [INFO][4705] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:00:23.969484 containerd[1736]: 2026-01-24 00:00:23.819 [INFO][4705] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.129/26] IPv6=[] ContainerID="c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a" HandleID="k8s-pod-network.c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a" Workload="ci--4081.3.6--n--2a642b76b3-k8s-whisker--7dfdc764f5--mkdn7-eth0" Jan 24 00:00:23.970054 containerd[1736]: 2026-01-24 00:00:23.822 [INFO][4691] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a" Namespace="calico-system" Pod="whisker-7dfdc764f5-mkdn7" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-whisker--7dfdc764f5--mkdn7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-whisker--7dfdc764f5--mkdn7-eth0", GenerateName:"whisker-7dfdc764f5-", Namespace:"calico-system", SelfLink:"", UID:"f462240a-0a7b-4fa9-a623-1df80e2e9a5c", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 0, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7dfdc764f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"", Pod:"whisker-7dfdc764f5-mkdn7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.5.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali215e343d5a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:00:23.970054 containerd[1736]: 2026-01-24 00:00:23.822 [INFO][4691] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.129/32] ContainerID="c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a" Namespace="calico-system" Pod="whisker-7dfdc764f5-mkdn7" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-whisker--7dfdc764f5--mkdn7-eth0" Jan 24 00:00:23.970054 containerd[1736]: 2026-01-24 00:00:23.822 [INFO][4691] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali215e343d5a5 ContainerID="c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a" Namespace="calico-system" Pod="whisker-7dfdc764f5-mkdn7" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-whisker--7dfdc764f5--mkdn7-eth0" Jan 24 00:00:23.970054 containerd[1736]: 2026-01-24 00:00:23.950 [INFO][4691] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a" Namespace="calico-system" Pod="whisker-7dfdc764f5-mkdn7" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-whisker--7dfdc764f5--mkdn7-eth0" Jan 24 00:00:23.970054 containerd[1736]: 2026-01-24 00:00:23.950 [INFO][4691] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a" Namespace="calico-system" Pod="whisker-7dfdc764f5-mkdn7" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-whisker--7dfdc764f5--mkdn7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-whisker--7dfdc764f5--mkdn7-eth0", GenerateName:"whisker-7dfdc764f5-", Namespace:"calico-system", SelfLink:"", UID:"f462240a-0a7b-4fa9-a623-1df80e2e9a5c", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 0, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7dfdc764f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a", Pod:"whisker-7dfdc764f5-mkdn7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.5.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali215e343d5a5", MAC:"2e:89:8e:ba:9e:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:00:23.970054 containerd[1736]: 2026-01-24 00:00:23.965 [INFO][4691] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a" Namespace="calico-system" Pod="whisker-7dfdc764f5-mkdn7" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-whisker--7dfdc764f5--mkdn7-eth0" Jan 24 00:00:24.239644 containerd[1736]: time="2026-01-24T00:00:24.238880698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:00:24.239644 containerd[1736]: time="2026-01-24T00:00:24.238934618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:00:24.239644 containerd[1736]: time="2026-01-24T00:00:24.238963778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:00:24.239644 containerd[1736]: time="2026-01-24T00:00:24.239040498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:00:24.263539 systemd[1]: Started cri-containerd-c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a.scope - libcontainer container c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a. 
Jan 24 00:00:24.295022 containerd[1736]: time="2026-01-24T00:00:24.294982085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dfdc764f5-mkdn7,Uid:f462240a-0a7b-4fa9-a623-1df80e2e9a5c,Namespace:calico-system,Attempt:0,} returns sandbox id \"c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a\"" Jan 24 00:00:24.297551 containerd[1736]: time="2026-01-24T00:00:24.297505487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:00:24.397505 systemd-networkd[1363]: vxlan.calico: Link UP Jan 24 00:00:24.397894 systemd-networkd[1363]: vxlan.calico: Gained carrier Jan 24 00:00:25.545589 systemd-networkd[1363]: cali215e343d5a5: Gained IPv6LL Jan 24 00:00:25.924339 containerd[1736]: time="2026-01-24T00:00:25.924052844Z" level=info msg="StopPodSandbox for \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\"" Jan 24 00:00:25.986812 containerd[1736]: 2026-01-24 00:00:25.955 [WARNING][4876] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-whisker--77487d64ff--bmshx-eth0" Jan 24 00:00:25.986812 containerd[1736]: 2026-01-24 00:00:25.955 [INFO][4876] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" Jan 24 00:00:25.986812 containerd[1736]: 2026-01-24 00:00:25.955 [INFO][4876] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" iface="eth0" netns="" Jan 24 00:00:25.986812 containerd[1736]: 2026-01-24 00:00:25.955 [INFO][4876] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" Jan 24 00:00:25.986812 containerd[1736]: 2026-01-24 00:00:25.955 [INFO][4876] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" Jan 24 00:00:25.986812 containerd[1736]: 2026-01-24 00:00:25.973 [INFO][4883] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" HandleID="k8s-pod-network.ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" Workload="ci--4081.3.6--n--2a642b76b3-k8s-whisker--77487d64ff--bmshx-eth0" Jan 24 00:00:25.986812 containerd[1736]: 2026-01-24 00:00:25.973 [INFO][4883] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:00:25.986812 containerd[1736]: 2026-01-24 00:00:25.973 [INFO][4883] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:00:25.986812 containerd[1736]: 2026-01-24 00:00:25.981 [WARNING][4883] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" HandleID="k8s-pod-network.ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" Workload="ci--4081.3.6--n--2a642b76b3-k8s-whisker--77487d64ff--bmshx-eth0" Jan 24 00:00:25.986812 containerd[1736]: 2026-01-24 00:00:25.982 [INFO][4883] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" HandleID="k8s-pod-network.ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" Workload="ci--4081.3.6--n--2a642b76b3-k8s-whisker--77487d64ff--bmshx-eth0" Jan 24 00:00:25.986812 containerd[1736]: 2026-01-24 00:00:25.983 [INFO][4883] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:00:25.986812 containerd[1736]: 2026-01-24 00:00:25.985 [INFO][4876] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" Jan 24 00:00:25.986812 containerd[1736]: time="2026-01-24T00:00:25.986599595Z" level=info msg="TearDown network for sandbox \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\" successfully" Jan 24 00:00:25.986812 containerd[1736]: time="2026-01-24T00:00:25.986622995Z" level=info msg="StopPodSandbox for \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\" returns successfully" Jan 24 00:00:25.987789 containerd[1736]: time="2026-01-24T00:00:25.987475276Z" level=info msg="RemovePodSandbox for \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\"" Jan 24 00:00:25.987789 containerd[1736]: time="2026-01-24T00:00:25.987508516Z" level=info msg="Forcibly stopping sandbox \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\"" Jan 24 00:00:26.063177 containerd[1736]: 2026-01-24 00:00:26.032 [WARNING][4897] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-whisker--77487d64ff--bmshx-eth0" Jan 24 00:00:26.063177 containerd[1736]: 2026-01-24 00:00:26.032 [INFO][4897] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" Jan 24 00:00:26.063177 containerd[1736]: 2026-01-24 00:00:26.033 [INFO][4897] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" iface="eth0" netns="" Jan 24 00:00:26.063177 containerd[1736]: 2026-01-24 00:00:26.033 [INFO][4897] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" Jan 24 00:00:26.063177 containerd[1736]: 2026-01-24 00:00:26.033 [INFO][4897] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" Jan 24 00:00:26.063177 containerd[1736]: 2026-01-24 00:00:26.049 [INFO][4904] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" HandleID="k8s-pod-network.ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" Workload="ci--4081.3.6--n--2a642b76b3-k8s-whisker--77487d64ff--bmshx-eth0" Jan 24 00:00:26.063177 containerd[1736]: 2026-01-24 00:00:26.049 [INFO][4904] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 24 00:00:26.063177 containerd[1736]: 2026-01-24 00:00:26.049 [INFO][4904] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:00:26.063177 containerd[1736]: 2026-01-24 00:00:26.058 [WARNING][4904] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" HandleID="k8s-pod-network.ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" Workload="ci--4081.3.6--n--2a642b76b3-k8s-whisker--77487d64ff--bmshx-eth0" Jan 24 00:00:26.063177 containerd[1736]: 2026-01-24 00:00:26.058 [INFO][4904] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" HandleID="k8s-pod-network.ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" Workload="ci--4081.3.6--n--2a642b76b3-k8s-whisker--77487d64ff--bmshx-eth0" Jan 24 00:00:26.063177 containerd[1736]: 2026-01-24 00:00:26.059 [INFO][4904] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:00:26.063177 containerd[1736]: 2026-01-24 00:00:26.061 [INFO][4897] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d" Jan 24 00:00:26.064057 containerd[1736]: time="2026-01-24T00:00:26.063650193Z" level=info msg="TearDown network for sandbox \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\" successfully" Jan 24 00:00:26.075159 containerd[1736]: time="2026-01-24T00:00:26.074991878Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:00:26.075159 containerd[1736]: time="2026-01-24T00:00:26.075054399Z" level=info msg="RemovePodSandbox \"ae812120368483ed60273e9d74c57192acfd3795b6693c39bb6f5bc85083f38d\" returns successfully" Jan 24 00:00:26.441612 systemd-networkd[1363]: vxlan.calico: Gained IPv6LL Jan 24 00:00:26.922082 containerd[1736]: time="2026-01-24T00:00:26.921806854Z" level=info msg="StopPodSandbox for \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\"" Jan 24 00:00:27.004914 containerd[1736]: 2026-01-24 00:00:26.968 [INFO][4919] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" Jan 24 00:00:27.004914 containerd[1736]: 2026-01-24 00:00:26.968 [INFO][4919] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" iface="eth0" netns="/var/run/netns/cni-928b4d74-fd5d-0438-75d5-f8bdddf8d2b7" Jan 24 00:00:27.004914 containerd[1736]: 2026-01-24 00:00:26.969 [INFO][4919] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" iface="eth0" netns="/var/run/netns/cni-928b4d74-fd5d-0438-75d5-f8bdddf8d2b7" Jan 24 00:00:27.004914 containerd[1736]: 2026-01-24 00:00:26.969 [INFO][4919] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" iface="eth0" netns="/var/run/netns/cni-928b4d74-fd5d-0438-75d5-f8bdddf8d2b7" Jan 24 00:00:27.004914 containerd[1736]: 2026-01-24 00:00:26.969 [INFO][4919] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" Jan 24 00:00:27.004914 containerd[1736]: 2026-01-24 00:00:26.969 [INFO][4919] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" Jan 24 00:00:27.004914 containerd[1736]: 2026-01-24 00:00:26.988 [INFO][4926] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" HandleID="k8s-pod-network.3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-eth0" Jan 24 00:00:27.004914 containerd[1736]: 2026-01-24 00:00:26.989 [INFO][4926] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:00:27.004914 containerd[1736]: 2026-01-24 00:00:26.989 [INFO][4926] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:00:27.004914 containerd[1736]: 2026-01-24 00:00:26.997 [WARNING][4926] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" HandleID="k8s-pod-network.3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-eth0" Jan 24 00:00:27.004914 containerd[1736]: 2026-01-24 00:00:26.997 [INFO][4926] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" HandleID="k8s-pod-network.3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-eth0" Jan 24 00:00:27.004914 containerd[1736]: 2026-01-24 00:00:26.999 [INFO][4926] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:00:27.004914 containerd[1736]: 2026-01-24 00:00:27.002 [INFO][4919] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" Jan 24 00:00:27.008787 containerd[1736]: time="2026-01-24T00:00:27.006512615Z" level=info msg="TearDown network for sandbox \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\" successfully" Jan 24 00:00:27.008787 containerd[1736]: time="2026-01-24T00:00:27.006549735Z" level=info msg="StopPodSandbox for \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\" returns successfully" Jan 24 00:00:27.008341 systemd[1]: run-netns-cni\x2d928b4d74\x2dfd5d\x2d0438\x2d75d5\x2df8bdddf8d2b7.mount: Deactivated successfully. 
Jan 24 00:00:27.009118 containerd[1736]: time="2026-01-24T00:00:27.009049137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd9d484d4-qgr74,Uid:37f635e6-9d73-41e3-ac25-e030d9b2101d,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:00:27.159234 systemd-networkd[1363]: cali0cc3b1e5ef9: Link UP Jan 24 00:00:27.159666 systemd-networkd[1363]: cali0cc3b1e5ef9: Gained carrier Jan 24 00:00:27.188502 containerd[1736]: 2026-01-24 00:00:27.086 [INFO][4933] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-eth0 calico-apiserver-5dd9d484d4- calico-apiserver 37f635e6-9d73-41e3-ac25-e030d9b2101d 993 0 2026-01-23 23:59:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dd9d484d4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-2a642b76b3 calico-apiserver-5dd9d484d4-qgr74 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0cc3b1e5ef9 [] [] }} ContainerID="00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88" Namespace="calico-apiserver" Pod="calico-apiserver-5dd9d484d4-qgr74" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-" Jan 24 00:00:27.188502 containerd[1736]: 2026-01-24 00:00:27.086 [INFO][4933] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88" Namespace="calico-apiserver" Pod="calico-apiserver-5dd9d484d4-qgr74" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-eth0" Jan 24 00:00:27.188502 containerd[1736]: 2026-01-24 00:00:27.112 [INFO][4944] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88" HandleID="k8s-pod-network.00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-eth0" Jan 24 00:00:27.188502 containerd[1736]: 2026-01-24 00:00:27.112 [INFO][4944] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88" HandleID="k8s-pod-network.00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b5d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-2a642b76b3", "pod":"calico-apiserver-5dd9d484d4-qgr74", "timestamp":"2026-01-24 00:00:27.112711267 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2a642b76b3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:00:27.188502 containerd[1736]: 2026-01-24 00:00:27.112 [INFO][4944] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:00:27.188502 containerd[1736]: 2026-01-24 00:00:27.113 [INFO][4944] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:00:27.188502 containerd[1736]: 2026-01-24 00:00:27.113 [INFO][4944] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2a642b76b3' Jan 24 00:00:27.188502 containerd[1736]: 2026-01-24 00:00:27.122 [INFO][4944] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:27.188502 containerd[1736]: 2026-01-24 00:00:27.127 [INFO][4944] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:27.188502 containerd[1736]: 2026-01-24 00:00:27.132 [INFO][4944] ipam/ipam.go 511: Trying affinity for 192.168.5.128/26 host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:27.188502 containerd[1736]: 2026-01-24 00:00:27.134 [INFO][4944] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.128/26 host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:27.188502 containerd[1736]: 2026-01-24 00:00:27.136 [INFO][4944] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.128/26 host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:27.188502 containerd[1736]: 2026-01-24 00:00:27.136 [INFO][4944] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.128/26 handle="k8s-pod-network.00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:27.188502 containerd[1736]: 2026-01-24 00:00:27.137 [INFO][4944] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88 Jan 24 00:00:27.188502 containerd[1736]: 2026-01-24 00:00:27.146 [INFO][4944] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.128/26 handle="k8s-pod-network.00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:27.188502 containerd[1736]: 2026-01-24 00:00:27.153 [INFO][4944] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.130/26] block=192.168.5.128/26 handle="k8s-pod-network.00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:27.188502 containerd[1736]: 2026-01-24 00:00:27.153 [INFO][4944] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.130/26] handle="k8s-pod-network.00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:27.188502 containerd[1736]: 2026-01-24 00:00:27.153 [INFO][4944] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:00:27.188502 containerd[1736]: 2026-01-24 00:00:27.153 [INFO][4944] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.130/26] IPv6=[] ContainerID="00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88" HandleID="k8s-pod-network.00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-eth0" Jan 24 00:00:27.189007 containerd[1736]: 2026-01-24 00:00:27.156 [INFO][4933] cni-plugin/k8s.go 418: Populated endpoint ContainerID="00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88" Namespace="calico-apiserver" Pod="calico-apiserver-5dd9d484d4-qgr74" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-eth0", GenerateName:"calico-apiserver-5dd9d484d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"37f635e6-9d73-41e3-ac25-e030d9b2101d", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd9d484d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"", Pod:"calico-apiserver-5dd9d484d4-qgr74", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0cc3b1e5ef9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:00:27.189007 containerd[1736]: 2026-01-24 00:00:27.156 [INFO][4933] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.130/32] ContainerID="00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88" Namespace="calico-apiserver" Pod="calico-apiserver-5dd9d484d4-qgr74" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-eth0" Jan 24 00:00:27.189007 containerd[1736]: 2026-01-24 00:00:27.156 [INFO][4933] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0cc3b1e5ef9 ContainerID="00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88" Namespace="calico-apiserver" Pod="calico-apiserver-5dd9d484d4-qgr74" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-eth0" Jan 24 00:00:27.189007 containerd[1736]: 2026-01-24 00:00:27.160 [INFO][4933] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88" Namespace="calico-apiserver" Pod="calico-apiserver-5dd9d484d4-qgr74" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-eth0" Jan 24 00:00:27.189007 containerd[1736]: 2026-01-24 00:00:27.162 
[INFO][4933] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88" Namespace="calico-apiserver" Pod="calico-apiserver-5dd9d484d4-qgr74" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-eth0", GenerateName:"calico-apiserver-5dd9d484d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"37f635e6-9d73-41e3-ac25-e030d9b2101d", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd9d484d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88", Pod:"calico-apiserver-5dd9d484d4-qgr74", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0cc3b1e5ef9", MAC:"7a:77:5d:76:c1:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:00:27.189007 containerd[1736]: 2026-01-24 00:00:27.185 [INFO][4933] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88" Namespace="calico-apiserver" Pod="calico-apiserver-5dd9d484d4-qgr74" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-eth0" Jan 24 00:00:27.210599 containerd[1736]: time="2026-01-24T00:00:27.210484115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:00:27.210599 containerd[1736]: time="2026-01-24T00:00:27.210541315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:00:27.210599 containerd[1736]: time="2026-01-24T00:00:27.210566315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:00:27.210858 containerd[1736]: time="2026-01-24T00:00:27.210716836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:00:27.230552 systemd[1]: Started cri-containerd-00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88.scope - libcontainer container 00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88. 
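At this point the journal has recorded two IPAM grants on this node: 192.168.5.129 for the whisker pod and 192.168.5.130 for the apiserver pod. When correlating sandboxes with addresses in a journal like this, the `assigned addresses` entries carry both; a small Go sketch that extracts them, with the regexp written against the exact fields visible above and nothing more:

```go
package main

import (
	"fmt"
	"regexp"
)

// Matches "Calico CNI IPAM assigned addresses IPv4=[...] ... ContainerID=..."
// as printed by the ipam plugin entries in this journal.
var assigned = regexp.MustCompile(
	`IPAM assigned addresses IPv4=\[([0-9./]+)\].*?ContainerID="([0-9a-f]+)"`)

func main() {
	lines := []string{
		`ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.129/26] IPv6=[] ContainerID="c17806337f4067ca40c0555b35714814596ad37a7ae234feb885a6a54028967a"`,
		`ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.130/26] IPv6=[] ContainerID="00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88"`,
	}
	for _, l := range lines {
		if m := assigned.FindStringSubmatch(l); m != nil {
			fmt.Printf("%s... -> %s\n", m[2][:12], m[1])
		}
	}
}
```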
Jan 24 00:00:27.261307 containerd[1736]: time="2026-01-24T00:00:27.260592060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd9d484d4-qgr74,Uid:37f635e6-9d73-41e3-ac25-e030d9b2101d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88\"" Jan 24 00:00:27.484407 containerd[1736]: time="2026-01-24T00:00:27.483943050Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:00:27.487018 containerd[1736]: time="2026-01-24T00:00:27.486885051Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:00:27.487018 containerd[1736]: time="2026-01-24T00:00:27.486989531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:00:27.488995 kubelet[3217]: E0124 00:00:27.488731 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:00:27.488995 kubelet[3217]: E0124 00:00:27.488781 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:00:27.490372 containerd[1736]: time="2026-01-24T00:00:27.489527532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:00:27.497785 kubelet[3217]: E0124 00:00:27.497591 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:838d2d7f116c4e34b727b27e353cd551,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2hj8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7dfdc764f5-mkdn7_calico-system(f462240a-0a7b-4fa9-a623-1df80e2e9a5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:00:27.922939 containerd[1736]: time="2026-01-24T00:00:27.922630145Z" level=info msg="StopPodSandbox for \"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\"" Jan 24 00:00:28.005962 containerd[1736]: 2026-01-24 00:00:27.970 [INFO][5009] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" Jan 24 00:00:28.005962 containerd[1736]: 2026-01-24 00:00:27.970 [INFO][5009] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" iface="eth0" netns="/var/run/netns/cni-90e8e6b5-b32c-a7b2-8089-13f61e556d7e" Jan 24 00:00:28.005962 containerd[1736]: 2026-01-24 00:00:27.970 [INFO][5009] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" iface="eth0" netns="/var/run/netns/cni-90e8e6b5-b32c-a7b2-8089-13f61e556d7e" Jan 24 00:00:28.005962 containerd[1736]: 2026-01-24 00:00:27.971 [INFO][5009] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" iface="eth0" netns="/var/run/netns/cni-90e8e6b5-b32c-a7b2-8089-13f61e556d7e" Jan 24 00:00:28.005962 containerd[1736]: 2026-01-24 00:00:27.971 [INFO][5009] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" Jan 24 00:00:28.005962 containerd[1736]: 2026-01-24 00:00:27.971 [INFO][5009] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" Jan 24 00:00:28.005962 containerd[1736]: 2026-01-24 00:00:27.987 [INFO][5016] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" HandleID="k8s-pod-network.ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" Workload="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-eth0" Jan 24 00:00:28.005962 containerd[1736]: 2026-01-24 00:00:27.987 [INFO][5016] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:00:28.005962 containerd[1736]: 2026-01-24 00:00:27.987 [INFO][5016] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:00:28.005962 containerd[1736]: 2026-01-24 00:00:27.998 [WARNING][5016] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" HandleID="k8s-pod-network.ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" Workload="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-eth0" Jan 24 00:00:28.005962 containerd[1736]: 2026-01-24 00:00:27.998 [INFO][5016] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" HandleID="k8s-pod-network.ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" Workload="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-eth0" Jan 24 00:00:28.005962 containerd[1736]: 2026-01-24 00:00:27.999 [INFO][5016] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:00:28.005962 containerd[1736]: 2026-01-24 00:00:28.004 [INFO][5009] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" Jan 24 00:00:28.009524 containerd[1736]: time="2026-01-24T00:00:28.006084466Z" level=info msg="TearDown network for sandbox \"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\" successfully" Jan 24 00:00:28.009524 containerd[1736]: time="2026-01-24T00:00:28.006110826Z" level=info msg="StopPodSandbox for \"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\" returns successfully" Jan 24 00:00:28.009164 systemd[1]: run-netns-cni\x2d90e8e6b5\x2db32c\x2da7b2\x2d8089\x2d13f61e556d7e.mount: Deactivated successfully. 
Jan 24 00:00:28.011283 containerd[1736]: time="2026-01-24T00:00:28.010285468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bjd2b,Uid:2024f333-ad36-464d-817d-816658048dd9,Namespace:kube-system,Attempt:1,}" Jan 24 00:00:28.170637 systemd-networkd[1363]: cali05bbb31b6ab: Link UP Jan 24 00:00:28.172899 systemd-networkd[1363]: cali05bbb31b6ab: Gained carrier Jan 24 00:00:28.194772 containerd[1736]: 2026-01-24 00:00:28.095 [INFO][5024] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-eth0 coredns-674b8bbfcf- kube-system 2024f333-ad36-464d-817d-816658048dd9 1006 0 2026-01-23 23:59:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-2a642b76b3 coredns-674b8bbfcf-bjd2b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali05bbb31b6ab [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c" Namespace="kube-system" Pod="coredns-674b8bbfcf-bjd2b" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-" Jan 24 00:00:28.194772 containerd[1736]: 2026-01-24 00:00:28.096 [INFO][5024] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c" Namespace="kube-system" Pod="coredns-674b8bbfcf-bjd2b" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-eth0" Jan 24 00:00:28.194772 containerd[1736]: 2026-01-24 00:00:28.121 [INFO][5036] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c" HandleID="k8s-pod-network.d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c" Workload="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-eth0" Jan 24 00:00:28.194772 containerd[1736]: 2026-01-24 00:00:28.122 [INFO][5036] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c" HandleID="k8s-pod-network.d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c" Workload="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b200), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-2a642b76b3", "pod":"coredns-674b8bbfcf-bjd2b", "timestamp":"2026-01-24 00:00:28.121879282 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2a642b76b3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:00:28.194772 containerd[1736]: 2026-01-24 00:00:28.122 [INFO][5036] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:00:28.194772 containerd[1736]: 2026-01-24 00:00:28.122 [INFO][5036] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:00:28.194772 containerd[1736]: 2026-01-24 00:00:28.122 [INFO][5036] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2a642b76b3' Jan 24 00:00:28.194772 containerd[1736]: 2026-01-24 00:00:28.131 [INFO][5036] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:28.194772 containerd[1736]: 2026-01-24 00:00:28.137 [INFO][5036] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:28.194772 containerd[1736]: 2026-01-24 00:00:28.141 [INFO][5036] ipam/ipam.go 511: Trying affinity for 192.168.5.128/26 host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:28.194772 containerd[1736]: 2026-01-24 00:00:28.143 [INFO][5036] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.128/26 host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:28.194772 containerd[1736]: 2026-01-24 00:00:28.145 [INFO][5036] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.128/26 host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:28.194772 containerd[1736]: 2026-01-24 00:00:28.145 [INFO][5036] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.128/26 handle="k8s-pod-network.d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:28.194772 containerd[1736]: 2026-01-24 00:00:28.147 [INFO][5036] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c Jan 24 00:00:28.194772 containerd[1736]: 2026-01-24 00:00:28.155 [INFO][5036] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.128/26 handle="k8s-pod-network.d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:28.194772 containerd[1736]: 2026-01-24 00:00:28.162 [INFO][5036] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.131/26] block=192.168.5.128/26 handle="k8s-pod-network.d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:28.194772 containerd[1736]: 2026-01-24 00:00:28.162 [INFO][5036] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.131/26] handle="k8s-pod-network.d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:28.194772 containerd[1736]: 2026-01-24 00:00:28.162 [INFO][5036] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:00:28.194772 containerd[1736]: 2026-01-24 00:00:28.163 [INFO][5036] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.131/26] IPv6=[] ContainerID="d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c" HandleID="k8s-pod-network.d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c" Workload="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-eth0" Jan 24 00:00:28.195740 containerd[1736]: 2026-01-24 00:00:28.167 [INFO][5024] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c" Namespace="kube-system" Pod="coredns-674b8bbfcf-bjd2b" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2024f333-ad36-464d-817d-816658048dd9", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"", Pod:"coredns-674b8bbfcf-bjd2b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali05bbb31b6ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:00:28.195740 containerd[1736]: 2026-01-24 00:00:28.167 [INFO][5024] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.131/32] ContainerID="d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c" Namespace="kube-system" Pod="coredns-674b8bbfcf-bjd2b" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-eth0" Jan 24 00:00:28.195740 containerd[1736]: 2026-01-24 00:00:28.167 [INFO][5024] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05bbb31b6ab ContainerID="d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c" Namespace="kube-system" Pod="coredns-674b8bbfcf-bjd2b" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-eth0" Jan 24 00:00:28.195740 containerd[1736]: 2026-01-24 00:00:28.172 [INFO][5024] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-bjd2b" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-eth0" Jan 24 00:00:28.195740 containerd[1736]: 2026-01-24 00:00:28.172 [INFO][5024] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c" Namespace="kube-system" Pod="coredns-674b8bbfcf-bjd2b" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2024f333-ad36-464d-817d-816658048dd9", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c", Pod:"coredns-674b8bbfcf-bjd2b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali05bbb31b6ab", MAC:"2a:4b:13:dd:79:f5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:00:28.195740 containerd[1736]: 2026-01-24 00:00:28.190 [INFO][5024] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c" Namespace="kube-system" Pod="coredns-674b8bbfcf-bjd2b" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-eth0" Jan 24 00:00:28.216453 containerd[1736]: time="2026-01-24T00:00:28.216217929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:00:28.216947 containerd[1736]: time="2026-01-24T00:00:28.216775769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:00:28.216947 containerd[1736]: time="2026-01-24T00:00:28.216800489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:00:28.216947 containerd[1736]: time="2026-01-24T00:00:28.216891529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:00:28.243313 systemd[1]: Started cri-containerd-d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c.scope - libcontainer container d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c. Jan 24 00:00:28.286695 containerd[1736]: time="2026-01-24T00:00:28.286622683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bjd2b,Uid:2024f333-ad36-464d-817d-816658048dd9,Namespace:kube-system,Attempt:1,} returns sandbox id \"d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c\"" Jan 24 00:00:28.299281 containerd[1736]: time="2026-01-24T00:00:28.299240929Z" level=info msg="CreateContainer within sandbox \"d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:00:28.332203 containerd[1736]: time="2026-01-24T00:00:28.332132666Z" level=info msg="CreateContainer within sandbox \"d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a5ce4f12107fb688e030bd3184076c66606b2ea3bc758fdfac0de99d6865fd63\"" Jan 24 00:00:28.336880 containerd[1736]: time="2026-01-24T00:00:28.336660428Z" level=info msg="StartContainer for \"a5ce4f12107fb688e030bd3184076c66606b2ea3bc758fdfac0de99d6865fd63\"" Jan 24 00:00:28.356524 systemd[1]: Started cri-containerd-a5ce4f12107fb688e030bd3184076c66606b2ea3bc758fdfac0de99d6865fd63.scope - libcontainer container a5ce4f12107fb688e030bd3184076c66606b2ea3bc758fdfac0de99d6865fd63. Jan 24 00:00:28.385527 containerd[1736]: time="2026-01-24T00:00:28.385489252Z" level=info msg="StartContainer for \"a5ce4f12107fb688e030bd3184076c66606b2ea3bc758fdfac0de99d6865fd63\" returns successfully" Jan 24 00:00:28.810317 systemd-networkd[1363]: cali0cc3b1e5ef9: Gained IPv6LL Jan 24 00:00:28.922964 containerd[1736]: time="2026-01-24T00:00:28.921881275Z" level=info msg="StopPodSandbox for \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\"" Jan 24 00:00:28.922964 containerd[1736]: time="2026-01-24T00:00:28.922148995Z" level=info msg="StopPodSandbox for \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\"" Jan 24 00:00:29.043590 containerd[1736]: 2026-01-24 00:00:28.981 [INFO][5144] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" Jan 24 00:00:29.043590 containerd[1736]: 2026-01-24 00:00:28.982 [INFO][5144] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" iface="eth0" netns="/var/run/netns/cni-497b8508-fd30-e00a-129f-a5ce787707cb" Jan 24 00:00:29.043590 containerd[1736]: 2026-01-24 00:00:28.983 [INFO][5144] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" iface="eth0" netns="/var/run/netns/cni-497b8508-fd30-e00a-129f-a5ce787707cb" Jan 24 00:00:29.043590 containerd[1736]: 2026-01-24 00:00:28.983 [INFO][5144] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" iface="eth0" netns="/var/run/netns/cni-497b8508-fd30-e00a-129f-a5ce787707cb" Jan 24 00:00:29.043590 containerd[1736]: 2026-01-24 00:00:28.984 [INFO][5144] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" Jan 24 00:00:29.043590 containerd[1736]: 2026-01-24 00:00:28.984 [INFO][5144] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" Jan 24 00:00:29.043590 containerd[1736]: 2026-01-24 00:00:29.026 [INFO][5157] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" HandleID="k8s-pod-network.4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-eth0" Jan 24 00:00:29.043590 containerd[1736]: 2026-01-24 00:00:29.027 [INFO][5157] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:00:29.043590 containerd[1736]: 2026-01-24 00:00:29.027 [INFO][5157] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:00:29.043590 containerd[1736]: 2026-01-24 00:00:29.036 [WARNING][5157] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" HandleID="k8s-pod-network.4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-eth0" Jan 24 00:00:29.043590 containerd[1736]: 2026-01-24 00:00:29.036 [INFO][5157] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" HandleID="k8s-pod-network.4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-eth0" Jan 24 00:00:29.043590 containerd[1736]: 2026-01-24 00:00:29.037 [INFO][5157] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:00:29.043590 containerd[1736]: 2026-01-24 00:00:29.039 [INFO][5144] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" Jan 24 00:00:29.046445 containerd[1736]: time="2026-01-24T00:00:29.044506135Z" level=info msg="TearDown network for sandbox \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\" successfully" Jan 24 00:00:29.046445 containerd[1736]: time="2026-01-24T00:00:29.044547775Z" level=info msg="StopPodSandbox for \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\" returns successfully" Jan 24 00:00:29.046139 systemd[1]: run-netns-cni\x2d497b8508\x2dfd30\x2de00a\x2d129f\x2da5ce787707cb.mount: Deactivated successfully. 
Jan 24 00:00:29.055793 containerd[1736]: time="2026-01-24T00:00:29.055533180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-754bb44d48-hhlr2,Uid:a37d52d3-c228-4df6-b0fc-c5d23ff527d2,Namespace:calico-system,Attempt:1,}" Jan 24 00:00:29.060428 containerd[1736]: 2026-01-24 00:00:28.987 [INFO][5145] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" Jan 24 00:00:29.060428 containerd[1736]: 2026-01-24 00:00:28.987 [INFO][5145] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" iface="eth0" netns="/var/run/netns/cni-f4a0ef4e-cd4d-9df7-a6f5-33b4c122c74a" Jan 24 00:00:29.060428 containerd[1736]: 2026-01-24 00:00:28.989 [INFO][5145] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" iface="eth0" netns="/var/run/netns/cni-f4a0ef4e-cd4d-9df7-a6f5-33b4c122c74a" Jan 24 00:00:29.060428 containerd[1736]: 2026-01-24 00:00:28.989 [INFO][5145] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" iface="eth0" netns="/var/run/netns/cni-f4a0ef4e-cd4d-9df7-a6f5-33b4c122c74a" Jan 24 00:00:29.060428 containerd[1736]: 2026-01-24 00:00:28.989 [INFO][5145] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" Jan 24 00:00:29.060428 containerd[1736]: 2026-01-24 00:00:28.989 [INFO][5145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" Jan 24 00:00:29.060428 containerd[1736]: 2026-01-24 00:00:29.028 [INFO][5159] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" HandleID="k8s-pod-network.8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" Workload="ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-eth0" Jan 24 00:00:29.060428 containerd[1736]: 2026-01-24 00:00:29.028 [INFO][5159] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:00:29.060428 containerd[1736]: 2026-01-24 00:00:29.037 [INFO][5159] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:00:29.060428 containerd[1736]: 2026-01-24 00:00:29.055 [WARNING][5159] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" HandleID="k8s-pod-network.8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" Workload="ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-eth0" Jan 24 00:00:29.060428 containerd[1736]: 2026-01-24 00:00:29.055 [INFO][5159] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" HandleID="k8s-pod-network.8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" Workload="ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-eth0" Jan 24 00:00:29.060428 containerd[1736]: 2026-01-24 00:00:29.056 [INFO][5159] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:00:29.060428 containerd[1736]: 2026-01-24 00:00:29.058 [INFO][5145] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" Jan 24 00:00:29.062679 containerd[1736]: time="2026-01-24T00:00:29.062593184Z" level=info msg="TearDown network for sandbox \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\" successfully" Jan 24 00:00:29.062749 containerd[1736]: time="2026-01-24T00:00:29.062711504Z" level=info msg="StopPodSandbox for \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\" returns successfully" Jan 24 00:00:29.063313 systemd[1]: run-netns-cni\x2df4a0ef4e\x2dcd4d\x2d9df7\x2da6f5\x2d33b4c122c74a.mount: Deactivated successfully. Jan 24 00:00:29.065759 containerd[1736]: time="2026-01-24T00:00:29.065720465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmrrm,Uid:1900a277-348f-4eb2-aa7c-7d2406a64ec8,Namespace:calico-system,Attempt:1,}" Jan 24 00:00:29.217051 kubelet[3217]: I0124 00:00:29.216893 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bjd2b" podStartSLOduration=58.21686034 podStartE2EDuration="58.21686034s" podCreationTimestamp="2026-01-23 23:59:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:00:29.216194699 +0000 UTC m=+63.375612785" watchObservedRunningTime="2026-01-24 00:00:29.21686034 +0000 UTC m=+63.376278466" Jan 24 00:00:29.300524 systemd-networkd[1363]: cali95a6cfe1d8f: Link UP Jan 24 00:00:29.301557 systemd-networkd[1363]: cali95a6cfe1d8f: Gained carrier Jan 24 00:00:29.320363 containerd[1736]: 2026-01-24 00:00:29.160 [INFO][5171] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-eth0 calico-kube-controllers-754bb44d48- calico-system a37d52d3-c228-4df6-b0fc-c5d23ff527d2 1018 0 2026-01-23 23:59:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:754bb44d48 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-2a642b76b3 calico-kube-controllers-754bb44d48-hhlr2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali95a6cfe1d8f [] [] }} ContainerID="d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b" Namespace="calico-system" Pod="calico-kube-controllers-754bb44d48-hhlr2" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-" Jan 24 00:00:29.320363 containerd[1736]: 2026-01-24 00:00:29.160 [INFO][5171] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b" Namespace="calico-system" Pod="calico-kube-controllers-754bb44d48-hhlr2" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-eth0" Jan 24 00:00:29.320363 containerd[1736]: 2026-01-24 00:00:29.202 [INFO][5195] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b" HandleID="k8s-pod-network.d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-eth0" Jan 24 00:00:29.320363 containerd[1736]: 2026-01-24 00:00:29.202 [INFO][5195] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b" HandleID="k8s-pod-network.d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-2a642b76b3", "pod":"calico-kube-controllers-754bb44d48-hhlr2", "timestamp":"2026-01-24 00:00:29.202000292 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2a642b76b3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:00:29.320363 containerd[1736]: 2026-01-24 00:00:29.202 [INFO][5195] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:00:29.320363 containerd[1736]: 2026-01-24 00:00:29.202 [INFO][5195] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:00:29.320363 containerd[1736]: 2026-01-24 00:00:29.202 [INFO][5195] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2a642b76b3' Jan 24 00:00:29.320363 containerd[1736]: 2026-01-24 00:00:29.214 [INFO][5195] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:29.320363 containerd[1736]: 2026-01-24 00:00:29.226 [INFO][5195] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:29.320363 containerd[1736]: 2026-01-24 00:00:29.245 [INFO][5195] ipam/ipam.go 511: Trying affinity for 192.168.5.128/26 host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:29.320363 containerd[1736]: 2026-01-24 00:00:29.252 [INFO][5195] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.128/26 host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:29.320363 containerd[1736]: 2026-01-24 00:00:29.256 [INFO][5195] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.128/26 host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:29.320363 containerd[1736]: 2026-01-24 00:00:29.256 [INFO][5195] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.128/26 handle="k8s-pod-network.d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:29.320363 containerd[1736]: 2026-01-24 00:00:29.266 [INFO][5195] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b Jan 24 00:00:29.320363 containerd[1736]: 2026-01-24 00:00:29.276 [INFO][5195] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.128/26 handle="k8s-pod-network.d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:29.320363 containerd[1736]: 2026-01-24 00:00:29.286 [INFO][5195] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.132/26] block=192.168.5.128/26 handle="k8s-pod-network.d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:29.320363 containerd[1736]: 2026-01-24 00:00:29.286 [INFO][5195] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.132/26] handle="k8s-pod-network.d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b" host="ci-4081.3.6-n-2a642b76b3" Jan 24 
00:00:29.320363 containerd[1736]: 2026-01-24 00:00:29.286 [INFO][5195] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:00:29.320363 containerd[1736]: 2026-01-24 00:00:29.286 [INFO][5195] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.132/26] IPv6=[] ContainerID="d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b" HandleID="k8s-pod-network.d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-eth0" Jan 24 00:00:29.322847 containerd[1736]: 2026-01-24 00:00:29.288 [INFO][5171] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b" Namespace="calico-system" Pod="calico-kube-controllers-754bb44d48-hhlr2" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-eth0", GenerateName:"calico-kube-controllers-754bb44d48-", Namespace:"calico-system", SelfLink:"", UID:"a37d52d3-c228-4df6-b0fc-c5d23ff527d2", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"754bb44d48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"", Pod:"calico-kube-controllers-754bb44d48-hhlr2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali95a6cfe1d8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:00:29.322847 containerd[1736]: 2026-01-24 00:00:29.288 [INFO][5171] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.132/32] ContainerID="d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b" Namespace="calico-system" Pod="calico-kube-controllers-754bb44d48-hhlr2" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-eth0" Jan 24 00:00:29.322847 containerd[1736]: 2026-01-24 00:00:29.288 [INFO][5171] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95a6cfe1d8f ContainerID="d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b" Namespace="calico-system" Pod="calico-kube-controllers-754bb44d48-hhlr2" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-eth0" Jan 24 00:00:29.322847 containerd[1736]: 2026-01-24 00:00:29.302 [INFO][5171] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b" Namespace="calico-system" 
Pod="calico-kube-controllers-754bb44d48-hhlr2" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-eth0" Jan 24 00:00:29.322847 containerd[1736]: 2026-01-24 00:00:29.302 [INFO][5171] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b" Namespace="calico-system" Pod="calico-kube-controllers-754bb44d48-hhlr2" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-eth0", GenerateName:"calico-kube-controllers-754bb44d48-", Namespace:"calico-system", SelfLink:"", UID:"a37d52d3-c228-4df6-b0fc-c5d23ff527d2", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"754bb44d48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b", Pod:"calico-kube-controllers-754bb44d48-hhlr2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali95a6cfe1d8f", MAC:"12:de:b0:7d:1a:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:00:29.322847 containerd[1736]: 2026-01-24 00:00:29.318 [INFO][5171] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b" Namespace="calico-system" Pod="calico-kube-controllers-754bb44d48-hhlr2" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-eth0" Jan 24 00:00:29.343493 containerd[1736]: time="2026-01-24T00:00:29.343266082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:00:29.343493 containerd[1736]: time="2026-01-24T00:00:29.343315602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:00:29.343493 containerd[1736]: time="2026-01-24T00:00:29.343336762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:00:29.344030 containerd[1736]: time="2026-01-24T00:00:29.343893282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:00:29.363568 systemd[1]: Started cri-containerd-d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b.scope - libcontainer container d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b. Jan 24 00:00:29.389150 systemd-networkd[1363]: cali34346cb78eb: Link UP Jan 24 00:00:29.390282 systemd-networkd[1363]: cali34346cb78eb: Gained carrier Jan 24 00:00:29.415138 containerd[1736]: 2026-01-24 00:00:29.163 [INFO][5179] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-eth0 csi-node-driver- calico-system 1900a277-348f-4eb2-aa7c-7d2406a64ec8 1019 0 2026-01-23 23:59:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-2a642b76b3 csi-node-driver-mmrrm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali34346cb78eb [] [] }} ContainerID="842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf" Namespace="calico-system" Pod="csi-node-driver-mmrrm" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-" Jan 24 00:00:29.415138 containerd[1736]: 2026-01-24 00:00:29.163 [INFO][5179] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf" Namespace="calico-system" Pod="csi-node-driver-mmrrm" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-eth0" Jan 24 00:00:29.415138 containerd[1736]: 2026-01-24 00:00:29.203 [INFO][5197] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf" HandleID="k8s-pod-network.842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf" Workload="ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-eth0" Jan 24 00:00:29.415138 containerd[1736]: 2026-01-24 00:00:29.204 [INFO][5197] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf" HandleID="k8s-pod-network.842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf" Workload="ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2e80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-2a642b76b3", "pod":"csi-node-driver-mmrrm", "timestamp":"2026-01-24 00:00:29.203096053 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2a642b76b3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:00:29.415138 containerd[1736]: 2026-01-24 00:00:29.204 [INFO][5197] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:00:29.415138 containerd[1736]: 2026-01-24 00:00:29.286 [INFO][5197] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:00:29.415138 containerd[1736]: 2026-01-24 00:00:29.286 [INFO][5197] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2a642b76b3' Jan 24 00:00:29.415138 containerd[1736]: 2026-01-24 00:00:29.315 [INFO][5197] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:29.415138 containerd[1736]: 2026-01-24 00:00:29.325 [INFO][5197] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:29.415138 containerd[1736]: 2026-01-24 00:00:29.350 [INFO][5197] ipam/ipam.go 511: Trying affinity for 192.168.5.128/26 host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:29.415138 containerd[1736]: 2026-01-24 00:00:29.352 [INFO][5197] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.128/26 host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:29.415138 containerd[1736]: 2026-01-24 00:00:29.355 [INFO][5197] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.128/26 host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:29.415138 containerd[1736]: 2026-01-24 00:00:29.355 [INFO][5197] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.128/26 handle="k8s-pod-network.842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:29.415138 containerd[1736]: 2026-01-24 00:00:29.358 [INFO][5197] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf Jan 24 00:00:29.415138 containerd[1736]: 2026-01-24 00:00:29.366 [INFO][5197] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.128/26 handle="k8s-pod-network.842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:29.415138 containerd[1736]: 2026-01-24 00:00:29.379 [INFO][5197] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.133/26] block=192.168.5.128/26 handle="k8s-pod-network.842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:29.415138 containerd[1736]: 2026-01-24 00:00:29.379 [INFO][5197] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.133/26] handle="k8s-pod-network.842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:29.415138 containerd[1736]: 2026-01-24 00:00:29.379 [INFO][5197] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
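The assignment sequence in the entries above follows Calico's block-affinity model: under the host-wide lock, the node's affine /26 block is loaded, a free ordinal is claimed (here 192.168.5.133), and the block is written back with the allocation recorded under a handle. A minimal sketch with invented types, assuming a simple first-free scan; the real ipam package handles multiple blocks, affinities, and retries:

package main

import (
	"fmt"
	"net/netip"
)

type block struct {
	cidr        netip.Prefix          // e.g. 192.168.5.128/26
	allocations map[netip.Addr]string // addr -> handle
}

// assignOne walks the block and claims the first unallocated address,
// mirroring "Attempting to assign 1 addresses from block" above.
func (b *block) assignOne(handle string) (netip.Addr, bool) {
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.allocations[a]; !taken {
			b.allocations[a] = handle // "Writing block in order to claim IPs"
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted; real IPAM would try another block
}

func main() {
	b := &block{
		cidr:        netip.MustParsePrefix("192.168.5.128/26"),
		allocations: map[netip.Addr]string{},
	}
	// Pretend .128-.132 are already claimed, as in the log.
	for a, n := b.cidr.Addr(), 0; n < 5; a, n = a.Next(), n+1 {
		b.allocations[a] = "existing"
	}
	ip, _ := b.assignOne("k8s-pod-network.842e232da941")
	fmt.Println("claimed", ip) // 192.168.5.133
}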
Jan 24 00:00:29.415138 containerd[1736]: 2026-01-24 00:00:29.379 [INFO][5197] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.133/26] IPv6=[] ContainerID="842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf" HandleID="k8s-pod-network.842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf" Workload="ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-eth0" Jan 24 00:00:29.416486 containerd[1736]: 2026-01-24 00:00:29.382 [INFO][5179] cni-plugin/k8s.go 418: Populated endpoint ContainerID="842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf" Namespace="calico-system" Pod="csi-node-driver-mmrrm" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1900a277-348f-4eb2-aa7c-7d2406a64ec8", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"", Pod:"csi-node-driver-mmrrm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.5.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali34346cb78eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:00:29.416486 containerd[1736]: 2026-01-24 00:00:29.382 [INFO][5179] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.133/32] ContainerID="842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf" Namespace="calico-system" Pod="csi-node-driver-mmrrm" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-eth0" Jan 24 00:00:29.416486 containerd[1736]: 2026-01-24 00:00:29.382 [INFO][5179] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali34346cb78eb ContainerID="842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf" Namespace="calico-system" Pod="csi-node-driver-mmrrm" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-eth0" Jan 24 00:00:29.416486 containerd[1736]: 2026-01-24 00:00:29.391 [INFO][5179] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf" Namespace="calico-system" Pod="csi-node-driver-mmrrm" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-eth0" Jan 24 00:00:29.416486 containerd[1736]: 2026-01-24 00:00:29.391 [INFO][5179] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf" Namespace="calico-system" Pod="csi-node-driver-mmrrm" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1900a277-348f-4eb2-aa7c-7d2406a64ec8", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf", Pod:"csi-node-driver-mmrrm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.5.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali34346cb78eb", MAC:"de:0b:d0:ca:e8:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:00:29.416486 containerd[1736]: 2026-01-24 00:00:29.410 [INFO][5179] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf" Namespace="calico-system" Pod="csi-node-driver-mmrrm" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-eth0" Jan 24 00:00:29.417726 containerd[1736]: time="2026-01-24T00:00:29.417682398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-754bb44d48-hhlr2,Uid:a37d52d3-c228-4df6-b0fc-c5d23ff527d2,Namespace:calico-system,Attempt:1,} returns sandbox id \"d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b\"" Jan 24 00:00:29.440892 containerd[1736]: time="2026-01-24T00:00:29.440819329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:00:29.442066 containerd[1736]: time="2026-01-24T00:00:29.440867329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:00:29.442066 containerd[1736]: time="2026-01-24T00:00:29.440877769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:00:29.442066 containerd[1736]: time="2026-01-24T00:00:29.440945369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:00:29.458597 systemd[1]: Started cri-containerd-842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf.scope - libcontainer container 842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf. Jan 24 00:00:29.479877 containerd[1736]: time="2026-01-24T00:00:29.479775949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmrrm,Uid:1900a277-348f-4eb2-aa7c-7d2406a64ec8,Namespace:calico-system,Attempt:1,} returns sandbox id \"842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf\"" Jan 24 00:00:29.833601 systemd-networkd[1363]: cali05bbb31b6ab: Gained IPv6LL Jan 24 00:00:29.923565 containerd[1736]: time="2026-01-24T00:00:29.923090526Z" level=info msg="StopPodSandbox for \"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\"" Jan 24 00:00:30.015934 containerd[1736]: 2026-01-24 00:00:29.972 [INFO][5319] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" Jan 24 00:00:30.015934 containerd[1736]: 2026-01-24 00:00:29.972 [INFO][5319] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" iface="eth0" netns="/var/run/netns/cni-740c743b-937a-eb4a-6dc7-248c9014f01e" Jan 24 00:00:30.015934 containerd[1736]: 2026-01-24 00:00:29.972 [INFO][5319] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" iface="eth0" netns="/var/run/netns/cni-740c743b-937a-eb4a-6dc7-248c9014f01e" Jan 24 00:00:30.015934 containerd[1736]: 2026-01-24 00:00:29.972 [INFO][5319] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" iface="eth0" netns="/var/run/netns/cni-740c743b-937a-eb4a-6dc7-248c9014f01e" Jan 24 00:00:30.015934 containerd[1736]: 2026-01-24 00:00:29.972 [INFO][5319] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" Jan 24 00:00:30.015934 containerd[1736]: 2026-01-24 00:00:29.972 [INFO][5319] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" Jan 24 00:00:30.015934 containerd[1736]: 2026-01-24 00:00:29.992 [INFO][5326] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" HandleID="k8s-pod-network.7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-eth0" Jan 24 00:00:30.015934 containerd[1736]: 2026-01-24 00:00:29.992 [INFO][5326] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:00:30.015934 containerd[1736]: 2026-01-24 00:00:29.992 [INFO][5326] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:00:30.015934 containerd[1736]: 2026-01-24 00:00:30.003 [WARNING][5326] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" HandleID="k8s-pod-network.7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-eth0" Jan 24 00:00:30.015934 containerd[1736]: 2026-01-24 00:00:30.003 [INFO][5326] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" HandleID="k8s-pod-network.7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-eth0" Jan 24 00:00:30.015934 containerd[1736]: 2026-01-24 00:00:30.005 [INFO][5326] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:00:30.015934 containerd[1736]: 2026-01-24 00:00:30.012 [INFO][5319] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" Jan 24 00:00:30.017040 containerd[1736]: time="2026-01-24T00:00:30.016994052Z" level=info msg="TearDown network for sandbox \"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\" successfully" Jan 24 00:00:30.017040 containerd[1736]: time="2026-01-24T00:00:30.017035372Z" level=info msg="StopPodSandbox for \"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\" returns successfully" Jan 24 00:00:30.017761 containerd[1736]: time="2026-01-24T00:00:30.017720892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd9d484d4-bprcl,Uid:237e41c6-ec2d-4a8d-bb7d-ca837318e8f7,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:00:30.021184 systemd[1]: run-netns-cni\x2d740c743b\x2d937a\x2deb4a\x2d6dc7\x2d248c9014f01e.mount: Deactivated successfully. 
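The mount-unit names in these entries (run-netns-cni\x2d740c743b\x2d….mount and friends) come from systemd's unit-name escaping: the path's leading "/" is dropped, interior "/" becomes "-", and bytes outside a small safe set, including literal "-", become \xXX escapes. A rough sketch of that encoding; systemd-escape has extra rules (leading dots, empty strings) not shown here:

package main

import "fmt"

// escapePath sketches systemd's unit-name escaping for a filesystem path:
// strip the leading "/", turn remaining "/" into "-", and hex-escape any
// byte outside [a-zA-Z0-9:_.] (so "-" becomes \x2d).
func escapePath(p string) string {
	if len(p) > 0 && p[0] == '/' {
		p = p[1:]
	}
	out := ""
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			out += "-"
		case c >= 'a' && c <= 'z' || c >= 'A' && c <= 'Z' ||
			c >= '0' && c <= '9' || c == ':' || c == '_' || c == '.':
			out += string(c)
		default:
			out += fmt.Sprintf(`\x%02x`, c)
		}
	}
	return out
}

func main() {
	// Prints run-netns-cni\x2d740c743b\x2d937a\x2deb4a\x2d6dc7\x2d248c9014f01e.mount
	fmt.Println(escapePath("/run/netns/cni-740c743b-937a-eb4a-6dc7-248c9014f01e") + ".mount")
}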
Jan 24 00:00:30.161509 systemd-networkd[1363]: cali2ef39aeb938: Link UP Jan 24 00:00:30.161797 systemd-networkd[1363]: cali2ef39aeb938: Gained carrier Jan 24 00:00:30.191707 containerd[1736]: 2026-01-24 00:00:30.089 [INFO][5335] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-eth0 calico-apiserver-5dd9d484d4- calico-apiserver 237e41c6-ec2d-4a8d-bb7d-ca837318e8f7 1039 0 2026-01-23 23:59:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dd9d484d4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-2a642b76b3 calico-apiserver-5dd9d484d4-bprcl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2ef39aeb938 [] [] }} ContainerID="438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe" Namespace="calico-apiserver" Pod="calico-apiserver-5dd9d484d4-bprcl" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-" Jan 24 00:00:30.191707 containerd[1736]: 2026-01-24 00:00:30.089 [INFO][5335] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe" Namespace="calico-apiserver" Pod="calico-apiserver-5dd9d484d4-bprcl" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-eth0" Jan 24 00:00:30.191707 containerd[1736]: 2026-01-24 00:00:30.111 [INFO][5350] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe" HandleID="k8s-pod-network.438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-eth0" Jan 24 00:00:30.191707 containerd[1736]: 2026-01-24 00:00:30.111 [INFO][5350] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe" HandleID="k8s-pod-network.438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b2a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-2a642b76b3", "pod":"calico-apiserver-5dd9d484d4-bprcl", "timestamp":"2026-01-24 00:00:30.111814939 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2a642b76b3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:00:30.191707 containerd[1736]: 2026-01-24 00:00:30.111 [INFO][5350] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:00:30.191707 containerd[1736]: 2026-01-24 00:00:30.112 [INFO][5350] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:00:30.191707 containerd[1736]: 2026-01-24 00:00:30.112 [INFO][5350] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2a642b76b3' Jan 24 00:00:30.191707 containerd[1736]: 2026-01-24 00:00:30.122 [INFO][5350] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:30.191707 containerd[1736]: 2026-01-24 00:00:30.126 [INFO][5350] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:30.191707 containerd[1736]: 2026-01-24 00:00:30.131 [INFO][5350] ipam/ipam.go 511: Trying affinity for 192.168.5.128/26 host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:30.191707 containerd[1736]: 2026-01-24 00:00:30.132 [INFO][5350] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.128/26 host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:30.191707 containerd[1736]: 2026-01-24 00:00:30.135 [INFO][5350] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.128/26 host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:30.191707 containerd[1736]: 2026-01-24 00:00:30.135 [INFO][5350] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.128/26 handle="k8s-pod-network.438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:30.191707 containerd[1736]: 2026-01-24 00:00:30.137 [INFO][5350] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe Jan 24 00:00:30.191707 containerd[1736]: 2026-01-24 00:00:30.144 [INFO][5350] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.128/26 handle="k8s-pod-network.438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:30.191707 containerd[1736]: 2026-01-24 00:00:30.154 [INFO][5350] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.134/26] block=192.168.5.128/26 handle="k8s-pod-network.438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:30.191707 containerd[1736]: 2026-01-24 00:00:30.154 [INFO][5350] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.134/26] handle="k8s-pod-network.438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:30.191707 containerd[1736]: 2026-01-24 00:00:30.154 [INFO][5350] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:00:30.191707 containerd[1736]: 2026-01-24 00:00:30.154 [INFO][5350] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.134/26] IPv6=[] ContainerID="438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe" HandleID="k8s-pod-network.438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-eth0" Jan 24 00:00:30.192441 containerd[1736]: 2026-01-24 00:00:30.156 [INFO][5335] cni-plugin/k8s.go 418: Populated endpoint ContainerID="438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe" Namespace="calico-apiserver" Pod="calico-apiserver-5dd9d484d4-bprcl" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-eth0", GenerateName:"calico-apiserver-5dd9d484d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"237e41c6-ec2d-4a8d-bb7d-ca837318e8f7", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd9d484d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"", Pod:"calico-apiserver-5dd9d484d4-bprcl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ef39aeb938", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:00:30.192441 containerd[1736]: 2026-01-24 00:00:30.156 [INFO][5335] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.134/32] ContainerID="438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe" Namespace="calico-apiserver" Pod="calico-apiserver-5dd9d484d4-bprcl" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-eth0" Jan 24 00:00:30.192441 containerd[1736]: 2026-01-24 00:00:30.156 [INFO][5335] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ef39aeb938 ContainerID="438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe" Namespace="calico-apiserver" Pod="calico-apiserver-5dd9d484d4-bprcl" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-eth0" Jan 24 00:00:30.192441 containerd[1736]: 2026-01-24 00:00:30.162 [INFO][5335] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe" Namespace="calico-apiserver" Pod="calico-apiserver-5dd9d484d4-bprcl" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-eth0" Jan 24 00:00:30.192441 containerd[1736]: 2026-01-24 00:00:30.166 
[INFO][5335] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe" Namespace="calico-apiserver" Pod="calico-apiserver-5dd9d484d4-bprcl" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-eth0", GenerateName:"calico-apiserver-5dd9d484d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"237e41c6-ec2d-4a8d-bb7d-ca837318e8f7", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd9d484d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe", Pod:"calico-apiserver-5dd9d484d4-bprcl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ef39aeb938", MAC:"4e:eb:11:53:4a:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:00:30.192441 containerd[1736]: 2026-01-24 00:00:30.185 [INFO][5335] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe" Namespace="calico-apiserver" Pod="calico-apiserver-5dd9d484d4-bprcl" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-eth0" Jan 24 00:00:30.209869 containerd[1736]: time="2026-01-24T00:00:30.209777827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:00:30.209869 containerd[1736]: time="2026-01-24T00:00:30.209823947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:00:30.209869 containerd[1736]: time="2026-01-24T00:00:30.209839147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:00:30.210302 containerd[1736]: time="2026-01-24T00:00:30.209911107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:00:30.230562 systemd[1]: Started cri-containerd-438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe.scope - libcontainer container 438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe. 
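The host-side interface names here (cali2ef39aeb938, cali05bbb31b6ab, cali95a6cfe1d8f, …) are deterministic: "cali" plus 11 hex characters, which exactly fills Linux's 15-byte IFNAMSIZ limit. Calico derives the suffix from a hash of the workload's identity; the exact hash input varies by version and configuration, so the namespace.pod form in this sketch is an assumption:

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName sketches the deterministic host-side veth naming seen in the
// log: a fixed "cali" prefix plus the first 11 hex chars of a SHA-1 hash.
// Hashing namespace+"."+pod is an assumed input, not confirmed by the log.
func vethName(namespace, pod string) string {
	sum := sha1.Sum([]byte(namespace + "." + pod))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethName("calico-apiserver", "calico-apiserver-5dd9d484d4-bprcl"))
}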
Jan 24 00:00:30.257676 containerd[1736]: time="2026-01-24T00:00:30.257627170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd9d484d4-bprcl,Uid:237e41c6-ec2d-4a8d-bb7d-ca837318e8f7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe\"" Jan 24 00:00:30.363290 update_engine[1717]: I20260124 00:00:30.363127 1717 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 24 00:00:30.363290 update_engine[1717]: I20260124 00:00:30.363171 1717 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 24 00:00:30.364240 update_engine[1717]: I20260124 00:00:30.363705 1717 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 24 00:00:30.366074 update_engine[1717]: I20260124 00:00:30.364972 1717 omaha_request_params.cc:62] Current group set to lts Jan 24 00:00:30.366074 update_engine[1717]: I20260124 00:00:30.365057 1717 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 24 00:00:30.366074 update_engine[1717]: I20260124 00:00:30.365065 1717 update_attempter.cc:643] Scheduling an action processor start. Jan 24 00:00:30.366074 update_engine[1717]: I20260124 00:00:30.365080 1717 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 24 00:00:30.366074 update_engine[1717]: I20260124 00:00:30.365113 1717 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 24 00:00:30.366074 update_engine[1717]: I20260124 00:00:30.365160 1717 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 24 00:00:30.366074 update_engine[1717]: I20260124 00:00:30.365168 1717 omaha_request_action.cc:272] Request: Jan 24 00:00:30.366074 update_engine[1717]: I20260124 00:00:30.365173 1717 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:00:30.366633 locksmithd[1774]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 24 00:00:30.367555 update_engine[1717]: I20260124 00:00:30.367534 1717 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:00:30.367892 update_engine[1717]: I20260124 00:00:30.367867 1717 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
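The update_engine entries show Flatcar's updater posting its Omaha check to the literal host "disabled" (the server URL used when update checks are turned off), so name resolution fails in the entries just below and the fetcher arms a 1-second retry timer. A minimal sketch of that retry shape, with an invented function; the actual libcurl_http_fetcher uses different delays and caps:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// fetchWithRetry sketches the loop implied by the log: each failed
// transfer schedules another attempt after a short delay ("Setting up
// timeout source: 1 seconds"), up to a maximum retry count. The URL
// "http://disabled" mirrors the log, where the configured server is the
// string "disabled", so DNS resolution always fails.
func fetchWithRetry(url string, maxRetries int) error {
	var err error
	for attempt := 1; attempt <= maxRetries; attempt++ {
		var resp *http.Response
		if resp, err = http.Get(url); err == nil {
			resp.Body.Close()
			return nil
		}
		fmt.Printf("No HTTP response, retry %d: %v\n", attempt, err)
		time.Sleep(time.Second) // the 1-second timeout source from the log
	}
	return err
}

func main() {
	_ = fetchWithRetry("http://disabled", 3)
}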
Jan 24 00:00:30.470464 update_engine[1717]: E20260124 00:00:30.470326 1717 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 24 00:00:30.470648 update_engine[1717]: I20260124 00:00:30.470628 1717 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 24 00:00:30.555917 containerd[1736]: time="2026-01-24T00:00:30.555865837Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:00:30.559351 containerd[1736]: time="2026-01-24T00:00:30.559308399Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:00:30.559555 containerd[1736]: time="2026-01-24T00:00:30.559526399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:00:30.560830 kubelet[3217]: E0124 00:00:30.560785 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:00:30.561179 kubelet[3217]: E0124 00:00:30.560863 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:00:30.561970 containerd[1736]: time="2026-01-24T00:00:30.561710360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:00:30.569244 kubelet[3217]: E0124 00:00:30.569170 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwcqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5dd9d484d4-qgr74_calico-apiserver(37f635e6-9d73-41e3-ac25-e030d9b2101d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:00:30.570653 kubelet[3217]: E0124 00:00:30.570268 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dd9d484d4-qgr74" podUID="37f635e6-9d73-41e3-ac25-e030d9b2101d" Jan 24 00:00:30.922315 containerd[1736]: time="2026-01-24T00:00:30.922073058Z" level=info msg="StopPodSandbox for \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\"" Jan 24 00:00:31.005570 containerd[1736]: 2026-01-24 00:00:30.967 [INFO][5415] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" Jan 24 00:00:31.005570 containerd[1736]: 2026-01-24 00:00:30.967 [INFO][5415] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" iface="eth0" netns="/var/run/netns/cni-1a442f5a-2b5e-2a27-5ec1-0dda492b1dea" Jan 24 00:00:31.005570 containerd[1736]: 2026-01-24 00:00:30.967 [INFO][5415] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" iface="eth0" netns="/var/run/netns/cni-1a442f5a-2b5e-2a27-5ec1-0dda492b1dea" Jan 24 00:00:31.005570 containerd[1736]: 2026-01-24 00:00:30.969 [INFO][5415] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" iface="eth0" netns="/var/run/netns/cni-1a442f5a-2b5e-2a27-5ec1-0dda492b1dea" Jan 24 00:00:31.005570 containerd[1736]: 2026-01-24 00:00:30.969 [INFO][5415] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" Jan 24 00:00:31.005570 containerd[1736]: 2026-01-24 00:00:30.969 [INFO][5415] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" Jan 24 00:00:31.005570 containerd[1736]: 2026-01-24 00:00:30.989 [INFO][5422] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" HandleID="k8s-pod-network.408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" Workload="ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-eth0" Jan 24 00:00:31.005570 containerd[1736]: 2026-01-24 00:00:30.989 [INFO][5422] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:00:31.005570 containerd[1736]: 2026-01-24 00:00:30.989 [INFO][5422] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:00:31.005570 containerd[1736]: 2026-01-24 00:00:30.998 [WARNING][5422] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" HandleID="k8s-pod-network.408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" Workload="ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-eth0" Jan 24 00:00:31.005570 containerd[1736]: 2026-01-24 00:00:30.998 [INFO][5422] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" HandleID="k8s-pod-network.408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" Workload="ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-eth0" Jan 24 00:00:31.005570 containerd[1736]: 2026-01-24 00:00:31.000 [INFO][5422] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:00:31.005570 containerd[1736]: 2026-01-24 00:00:31.003 [INFO][5415] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" Jan 24 00:00:31.006057 containerd[1736]: time="2026-01-24T00:00:31.005707499Z" level=info msg="TearDown network for sandbox \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\" successfully" Jan 24 00:00:31.006057 containerd[1736]: time="2026-01-24T00:00:31.005731659Z" level=info msg="StopPodSandbox for \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\" returns successfully" Jan 24 00:00:31.007072 containerd[1736]: time="2026-01-24T00:00:31.006717900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-65kmp,Uid:9390c20d-0be8-4dfe-954e-634e25852cb2,Namespace:calico-system,Attempt:1,}" Jan 24 00:00:31.009661 systemd[1]: run-netns-cni\x2d1a442f5a\x2d2b5e\x2d2a27\x2d5ec1\x2d0dda492b1dea.mount: Deactivated successfully. 
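Every image pull in this section fails the same way: ghcr.io answers, but the flatcar/calico tags at v3.30.4 do not exist, so containerd's resolver returns NotFound and the kubelet surfaces ErrImagePull. The same check can be reproduced against the node's containerd socket; a minimal Go sketch (a hypothetical probe, not tooling that appears in this log, assuming access to /run/containerd/containerd.sock and the CRI "k8s.io" namespace):

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/errdefs"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// Connect to the same containerd instance the kubelet talks to.
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// CRI-managed images live in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	ref := "ghcr.io/flatcar/calico/apiserver:v3.30.4"
    	if _, err := client.Pull(ctx, ref); errdefs.IsNotFound(err) {
    		fmt.Printf("%s: not found on the registry\n", ref)
    	} else if err != nil {
    		log.Fatal(err)
    	} else {
    		fmt.Printf("%s: pulled\n", ref)
    	}
    }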
Jan 24 00:00:31.050044 systemd-networkd[1363]: cali95a6cfe1d8f: Gained IPv6LL Jan 24 00:00:31.164802 systemd-networkd[1363]: cali2d9e12122c1: Link UP Jan 24 00:00:31.165654 systemd-networkd[1363]: cali2d9e12122c1: Gained carrier Jan 24 00:00:31.190412 containerd[1736]: 2026-01-24 00:00:31.088 [INFO][5428] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-eth0 goldmane-666569f655- calico-system 9390c20d-0be8-4dfe-954e-634e25852cb2 1048 0 2026-01-23 23:59:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-2a642b76b3 goldmane-666569f655-65kmp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2d9e12122c1 [] [] }} ContainerID="b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e" Namespace="calico-system" Pod="goldmane-666569f655-65kmp" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-" Jan 24 00:00:31.190412 containerd[1736]: 2026-01-24 00:00:31.089 [INFO][5428] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e" Namespace="calico-system" Pod="goldmane-666569f655-65kmp" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-eth0" Jan 24 00:00:31.190412 containerd[1736]: 2026-01-24 00:00:31.113 [INFO][5440] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e" HandleID="k8s-pod-network.b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e" Workload="ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-eth0" Jan 24 00:00:31.190412 containerd[1736]: 2026-01-24 00:00:31.113 [INFO][5440] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e" HandleID="k8s-pod-network.b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e" Workload="ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-2a642b76b3", "pod":"goldmane-666569f655-65kmp", "timestamp":"2026-01-24 00:00:31.113040232 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2a642b76b3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:00:31.190412 containerd[1736]: 2026-01-24 00:00:31.113 [INFO][5440] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:00:31.190412 containerd[1736]: 2026-01-24 00:00:31.113 [INFO][5440] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:00:31.190412 containerd[1736]: 2026-01-24 00:00:31.113 [INFO][5440] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2a642b76b3' Jan 24 00:00:31.190412 containerd[1736]: 2026-01-24 00:00:31.123 [INFO][5440] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:31.190412 containerd[1736]: 2026-01-24 00:00:31.127 [INFO][5440] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:31.190412 containerd[1736]: 2026-01-24 00:00:31.132 [INFO][5440] ipam/ipam.go 511: Trying affinity for 192.168.5.128/26 host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:31.190412 containerd[1736]: 2026-01-24 00:00:31.133 [INFO][5440] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.128/26 host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:31.190412 containerd[1736]: 2026-01-24 00:00:31.136 [INFO][5440] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.128/26 host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:31.190412 containerd[1736]: 2026-01-24 00:00:31.136 [INFO][5440] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.128/26 handle="k8s-pod-network.b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:31.190412 containerd[1736]: 2026-01-24 00:00:31.137 [INFO][5440] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e Jan 24 00:00:31.190412 containerd[1736]: 2026-01-24 00:00:31.145 [INFO][5440] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.128/26 handle="k8s-pod-network.b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:31.190412 containerd[1736]: 2026-01-24 00:00:31.158 [INFO][5440] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.135/26] block=192.168.5.128/26 handle="k8s-pod-network.b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:31.190412 containerd[1736]: 2026-01-24 00:00:31.158 [INFO][5440] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.135/26] handle="k8s-pod-network.b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:31.190412 containerd[1736]: 2026-01-24 00:00:31.158 [INFO][5440] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
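The IPAM trace above is Calico's block-affinity allocation in action: this host holds an affinity for the block 192.168.5.128/26, loads it, and assigns the next free address from it (192.168.5.135 here; the coredns pod later in this log gets 192.168.5.136 from the same block). The core "next free address in an affine block" step reduces to a scan like the following Go sketch (illustrative only — the real allocator also records a handle and writes the block back to the datastore, as the ipam.go lines show):

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    // nextFree returns the first address of block that is not yet used.
    func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
    	for a := block.Addr(); block.Contains(a); a = a.Next() {
    		if !used[a] {
    			return a, true
    		}
    	}
    	return netip.Addr{}, false
    }

    func main() {
    	block := netip.MustParsePrefix("192.168.5.128/26")
    	// Addresses .128-.134 are already taken, as the log implies.
    	used := map[netip.Addr]bool{}
    	for a := block.Addr(); a.Compare(netip.MustParseAddr("192.168.5.135")) < 0; a = a.Next() {
    		used[a] = true
    	}
    	if a, ok := nextFree(block, used); ok {
    		fmt.Println("assigned:", a) // assigned: 192.168.5.135
    	}
    }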
Jan 24 00:00:31.190412 containerd[1736]: 2026-01-24 00:00:31.158 [INFO][5440] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.135/26] IPv6=[] ContainerID="b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e" HandleID="k8s-pod-network.b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e" Workload="ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-eth0" Jan 24 00:00:31.190983 containerd[1736]: 2026-01-24 00:00:31.160 [INFO][5428] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e" Namespace="calico-system" Pod="goldmane-666569f655-65kmp" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9390c20d-0be8-4dfe-954e-634e25852cb2", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"", Pod:"goldmane-666569f655-65kmp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.5.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2d9e12122c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:00:31.190983 containerd[1736]: 2026-01-24 00:00:31.161 [INFO][5428] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.135/32] ContainerID="b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e" Namespace="calico-system" Pod="goldmane-666569f655-65kmp" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-eth0" Jan 24 00:00:31.190983 containerd[1736]: 2026-01-24 00:00:31.161 [INFO][5428] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d9e12122c1 ContainerID="b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e" Namespace="calico-system" Pod="goldmane-666569f655-65kmp" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-eth0" Jan 24 00:00:31.190983 containerd[1736]: 2026-01-24 00:00:31.166 [INFO][5428] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e" Namespace="calico-system" Pod="goldmane-666569f655-65kmp" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-eth0" Jan 24 00:00:31.190983 containerd[1736]: 2026-01-24 00:00:31.167 [INFO][5428] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e" 
Namespace="calico-system" Pod="goldmane-666569f655-65kmp" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9390c20d-0be8-4dfe-954e-634e25852cb2", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e", Pod:"goldmane-666569f655-65kmp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.5.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2d9e12122c1", MAC:"5a:fd:61:7e:1a:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:00:31.190983 containerd[1736]: 2026-01-24 00:00:31.187 [INFO][5428] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e" Namespace="calico-system" Pod="goldmane-666569f655-65kmp" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-eth0" Jan 24 00:00:31.201969 kubelet[3217]: E0124 00:00:31.201731 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dd9d484d4-qgr74" podUID="37f635e6-9d73-41e3-ac25-e030d9b2101d" Jan 24 00:00:31.227768 containerd[1736]: time="2026-01-24T00:00:31.227606049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:00:31.228225 containerd[1736]: time="2026-01-24T00:00:31.227811129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:00:31.228225 containerd[1736]: time="2026-01-24T00:00:31.227937569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:00:31.228225 containerd[1736]: time="2026-01-24T00:00:31.228043249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:00:31.241581 systemd-networkd[1363]: cali34346cb78eb: Gained IPv6LL Jan 24 00:00:31.259560 systemd[1]: Started cri-containerd-b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e.scope - libcontainer container b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e. Jan 24 00:00:31.297872 containerd[1736]: time="2026-01-24T00:00:31.297831124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-65kmp,Uid:9390c20d-0be8-4dfe-954e-634e25852cb2,Namespace:calico-system,Attempt:1,} returns sandbox id \"b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e\"" Jan 24 00:00:32.203033 systemd-networkd[1363]: cali2ef39aeb938: Gained IPv6LL Jan 24 00:00:32.926336 containerd[1736]: time="2026-01-24T00:00:32.925981688Z" level=info msg="StopPodSandbox for \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\"" Jan 24 00:00:33.010651 containerd[1736]: 2026-01-24 00:00:32.975 [INFO][5512] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" Jan 24 00:00:33.010651 containerd[1736]: 2026-01-24 00:00:32.976 [INFO][5512] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" iface="eth0" netns="/var/run/netns/cni-e1bc0f29-c296-398e-14e2-d32cf916e48d" Jan 24 00:00:33.010651 containerd[1736]: 2026-01-24 00:00:32.976 [INFO][5512] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" iface="eth0" netns="/var/run/netns/cni-e1bc0f29-c296-398e-14e2-d32cf916e48d" Jan 24 00:00:33.010651 containerd[1736]: 2026-01-24 00:00:32.976 [INFO][5512] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" iface="eth0" netns="/var/run/netns/cni-e1bc0f29-c296-398e-14e2-d32cf916e48d" Jan 24 00:00:33.010651 containerd[1736]: 2026-01-24 00:00:32.976 [INFO][5512] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" Jan 24 00:00:33.010651 containerd[1736]: 2026-01-24 00:00:32.976 [INFO][5512] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" Jan 24 00:00:33.010651 containerd[1736]: 2026-01-24 00:00:32.996 [INFO][5521] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" HandleID="k8s-pod-network.88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" Workload="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-eth0" Jan 24 00:00:33.010651 containerd[1736]: 2026-01-24 00:00:32.996 [INFO][5521] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:00:33.010651 containerd[1736]: 2026-01-24 00:00:32.996 [INFO][5521] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:00:33.010651 containerd[1736]: 2026-01-24 00:00:33.005 [WARNING][5521] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" HandleID="k8s-pod-network.88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" Workload="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-eth0" Jan 24 00:00:33.010651 containerd[1736]: 2026-01-24 00:00:33.005 [INFO][5521] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" HandleID="k8s-pod-network.88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" Workload="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-eth0" Jan 24 00:00:33.010651 containerd[1736]: 2026-01-24 00:00:33.006 [INFO][5521] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:00:33.010651 containerd[1736]: 2026-01-24 00:00:33.008 [INFO][5512] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" Jan 24 00:00:33.014663 containerd[1736]: time="2026-01-24T00:00:33.014479451Z" level=info msg="TearDown network for sandbox \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\" successfully" Jan 24 00:00:33.014663 containerd[1736]: time="2026-01-24T00:00:33.014524811Z" level=info msg="StopPodSandbox for \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\" returns successfully" Jan 24 00:00:33.015532 systemd[1]: run-netns-cni\x2de1bc0f29\x2dc296\x2d398e\x2d14e2\x2dd32cf916e48d.mount: Deactivated successfully. Jan 24 00:00:33.016210 containerd[1736]: time="2026-01-24T00:00:33.016058292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gjxrx,Uid:8cabee2a-2179-450d-babf-843d70721def,Namespace:kube-system,Attempt:1,}" Jan 24 00:00:33.033628 systemd-networkd[1363]: cali2d9e12122c1: Gained IPv6LL Jan 24 00:00:33.164189 systemd-networkd[1363]: calif9eb2a285cc: Link UP Jan 24 00:00:33.165508 systemd-networkd[1363]: calif9eb2a285cc: Gained carrier Jan 24 00:00:33.184497 containerd[1736]: 2026-01-24 00:00:33.096 [INFO][5528] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-eth0 coredns-674b8bbfcf- kube-system 8cabee2a-2179-450d-babf-843d70721def 1062 0 2026-01-23 23:59:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-2a642b76b3 coredns-674b8bbfcf-gjxrx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif9eb2a285cc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-gjxrx" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-" Jan 24 00:00:33.184497 containerd[1736]: 2026-01-24 00:00:33.096 [INFO][5528] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-gjxrx" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-eth0" Jan 24 00:00:33.184497 containerd[1736]: 2026-01-24 00:00:33.117 [INFO][5540] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1" 
HandleID="k8s-pod-network.5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1" Workload="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-eth0" Jan 24 00:00:33.184497 containerd[1736]: 2026-01-24 00:00:33.118 [INFO][5540] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1" HandleID="k8s-pod-network.5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1" Workload="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aa240), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-2a642b76b3", "pod":"coredns-674b8bbfcf-gjxrx", "timestamp":"2026-01-24 00:00:33.117909502 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2a642b76b3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:00:33.184497 containerd[1736]: 2026-01-24 00:00:33.118 [INFO][5540] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:00:33.184497 containerd[1736]: 2026-01-24 00:00:33.118 [INFO][5540] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:00:33.184497 containerd[1736]: 2026-01-24 00:00:33.118 [INFO][5540] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2a642b76b3' Jan 24 00:00:33.184497 containerd[1736]: 2026-01-24 00:00:33.127 [INFO][5540] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:33.184497 containerd[1736]: 2026-01-24 00:00:33.132 [INFO][5540] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:33.184497 containerd[1736]: 2026-01-24 00:00:33.136 [INFO][5540] ipam/ipam.go 511: Trying affinity for 192.168.5.128/26 host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:33.184497 containerd[1736]: 2026-01-24 00:00:33.138 [INFO][5540] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.128/26 host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:33.184497 containerd[1736]: 2026-01-24 00:00:33.140 [INFO][5540] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.128/26 host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:33.184497 containerd[1736]: 2026-01-24 00:00:33.140 [INFO][5540] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.128/26 handle="k8s-pod-network.5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:33.184497 containerd[1736]: 2026-01-24 00:00:33.141 [INFO][5540] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1 Jan 24 00:00:33.184497 containerd[1736]: 2026-01-24 00:00:33.146 [INFO][5540] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.128/26 handle="k8s-pod-network.5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:33.184497 containerd[1736]: 2026-01-24 00:00:33.155 [INFO][5540] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.136/26] block=192.168.5.128/26 handle="k8s-pod-network.5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:33.184497 containerd[1736]: 2026-01-24 00:00:33.156 
[INFO][5540] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.136/26] handle="k8s-pod-network.5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1" host="ci-4081.3.6-n-2a642b76b3" Jan 24 00:00:33.184497 containerd[1736]: 2026-01-24 00:00:33.156 [INFO][5540] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:00:33.184497 containerd[1736]: 2026-01-24 00:00:33.156 [INFO][5540] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.136/26] IPv6=[] ContainerID="5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1" HandleID="k8s-pod-network.5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1" Workload="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-eth0" Jan 24 00:00:33.185010 containerd[1736]: 2026-01-24 00:00:33.159 [INFO][5528] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-gjxrx" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8cabee2a-2179-450d-babf-843d70721def", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"", Pod:"coredns-674b8bbfcf-gjxrx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9eb2a285cc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:00:33.185010 containerd[1736]: 2026-01-24 00:00:33.160 [INFO][5528] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.136/32] ContainerID="5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-gjxrx" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-eth0" Jan 24 00:00:33.185010 containerd[1736]: 2026-01-24 00:00:33.160 [INFO][5528] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif9eb2a285cc ContainerID="5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-gjxrx" 
WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-eth0" Jan 24 00:00:33.185010 containerd[1736]: 2026-01-24 00:00:33.165 [INFO][5528] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-gjxrx" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-eth0" Jan 24 00:00:33.185010 containerd[1736]: 2026-01-24 00:00:33.165 [INFO][5528] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-gjxrx" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8cabee2a-2179-450d-babf-843d70721def", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1", Pod:"coredns-674b8bbfcf-gjxrx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9eb2a285cc", MAC:"ce:6c:f0:96:c9:e0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:00:33.185010 containerd[1736]: 2026-01-24 00:00:33.180 [INFO][5528] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-gjxrx" WorkloadEndpoint="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-eth0" Jan 24 00:00:33.206035 containerd[1736]: time="2026-01-24T00:00:33.205970426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:00:33.206166 containerd[1736]: time="2026-01-24T00:00:33.206050386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:00:33.206166 containerd[1736]: time="2026-01-24T00:00:33.206075546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:00:33.207147 containerd[1736]: time="2026-01-24T00:00:33.206189746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:00:33.238560 systemd[1]: Started cri-containerd-5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1.scope - libcontainer container 5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1. Jan 24 00:00:33.278728 containerd[1736]: time="2026-01-24T00:00:33.278664862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gjxrx,Uid:8cabee2a-2179-450d-babf-843d70721def,Namespace:kube-system,Attempt:1,} returns sandbox id \"5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1\"" Jan 24 00:00:33.293572 containerd[1736]: time="2026-01-24T00:00:33.293525069Z" level=info msg="CreateContainer within sandbox \"5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:00:33.327670 containerd[1736]: time="2026-01-24T00:00:33.327202806Z" level=info msg="CreateContainer within sandbox \"5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"75e5c642f993151fa901184e3ca91b384f151ca5adc92f35d0f8014556150291\"" Jan 24 00:00:33.329660 containerd[1736]: time="2026-01-24T00:00:33.329628767Z" level=info msg="StartContainer for \"75e5c642f993151fa901184e3ca91b384f151ca5adc92f35d0f8014556150291\"" Jan 24 00:00:33.365615 systemd[1]: Started cri-containerd-75e5c642f993151fa901184e3ca91b384f151ca5adc92f35d0f8014556150291.scope - libcontainer container 75e5c642f993151fa901184e3ca91b384f151ca5adc92f35d0f8014556150291. 
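Note the contrast with the Calico pulls: coredns starts immediately because its image is already on disk. The pod_startup_latency_tracker entry just below makes this explicit — firstStartedPulling and lastFinishedPulling are zero-value timestamps, and podStartSLOduration is simply observedRunningTime minus podCreationTimestamp. Checking the arithmetic with the timestamps copied from that entry (a throwaway Go snippet):

    package main

    import (
    	"fmt"
    	"time"
    )

    func mustParse(s string) time.Time {
    	t, err := time.Parse("2006-01-02 15:04:05.000000000 -0700 MST", s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2026-01-23 23:59:31.000000000 +0000 UTC")
    	running := mustParse("2026-01-24 00:00:34.251075062 +0000 UTC")
    	fmt.Println(running.Sub(created)) // 1m3.251075062s, matching podStartE2EDuration
    }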
Jan 24 00:00:33.398955 containerd[1736]: time="2026-01-24T00:00:33.398912441Z" level=info msg="StartContainer for \"75e5c642f993151fa901184e3ca91b384f151ca5adc92f35d0f8014556150291\" returns successfully" Jan 24 00:00:34.251148 kubelet[3217]: I0124 00:00:34.251092 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gjxrx" podStartSLOduration=63.251075062 podStartE2EDuration="1m3.251075062s" podCreationTimestamp="2026-01-23 23:59:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:00:34.250151022 +0000 UTC m=+68.409569148" watchObservedRunningTime="2026-01-24 00:00:34.251075062 +0000 UTC m=+68.410493188" Jan 24 00:00:34.313654 systemd-networkd[1363]: calif9eb2a285cc: Gained IPv6LL Jan 24 00:00:38.785133 containerd[1736]: time="2026-01-24T00:00:38.785057457Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:00:38.788987 containerd[1736]: time="2026-01-24T00:00:38.788896659Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:00:38.788987 containerd[1736]: time="2026-01-24T00:00:38.788957779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:00:38.789253 kubelet[3217]: E0124 00:00:38.789139 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:00:38.789253 kubelet[3217]: E0124 00:00:38.789198 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:00:38.789749 kubelet[3217]: E0124 00:00:38.789607 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2hj8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7dfdc764f5-mkdn7_calico-system(f462240a-0a7b-4fa9-a623-1df80e2e9a5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:00:38.789837 containerd[1736]: time="2026-01-24T00:00:38.789800419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:00:38.791157 kubelet[3217]: E0124 00:00:38.791114 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dfdc764f5-mkdn7" podUID="f462240a-0a7b-4fa9-a623-1df80e2e9a5c" Jan 24 00:00:39.832522 containerd[1736]: time="2026-01-24T00:00:39.832471685Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:00:39.835521 containerd[1736]: time="2026-01-24T00:00:39.835484926Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:00:39.835662 containerd[1736]: time="2026-01-24T00:00:39.835565526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:00:39.835736 kubelet[3217]: E0124 00:00:39.835693 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:00:39.835953 kubelet[3217]: E0124 00:00:39.835748 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:00:39.836468 kubelet[3217]: E0124 00:00:39.835987 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hjlsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-754bb44d48-hhlr2_calico-system(a37d52d3-c228-4df6-b0fc-c5d23ff527d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:00:39.836609 containerd[1736]: time="2026-01-24T00:00:39.836132446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:00:39.837753 kubelet[3217]: E0124 00:00:39.837721 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-754bb44d48-hhlr2" podUID="a37d52d3-c228-4df6-b0fc-c5d23ff527d2" Jan 24 00:00:40.236787 kubelet[3217]: E0124 00:00:40.236263 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-754bb44d48-hhlr2" podUID="a37d52d3-c228-4df6-b0fc-c5d23ff527d2" Jan 24 00:00:40.369938 update_engine[1717]: I20260124 00:00:40.369854 1717 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:00:40.370500 update_engine[1717]: I20260124 00:00:40.370481 1717 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:00:40.370831 update_engine[1717]: I20260124 00:00:40.370810 1717 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
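The alternating ErrImagePull / ImagePullBackOff entries throughout this section are the kubelet's standard retry damping: after each failed pull of the same image it roughly doubles the wait before the next attempt, starting around 10s and capping at about 5 minutes (these are the commonly cited kubelet defaults and are version-dependent — treat the exact numbers as an assumption). The schedule, sketched:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Assumed kubelet-style image pull back-off: doubling from 10s, capped at 5m.
    	wait, maxWait := 10*time.Second, 5*time.Minute
    	for attempt := 1; attempt <= 7; attempt++ {
    		fmt.Printf("attempt %d: back off %v\n", attempt, wait)
    		wait *= 2
    		if wait > maxWait {
    			wait = maxWait
    		}
    	}
    }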
Jan 24 00:00:40.453172 update_engine[1717]: E20260124 00:00:40.453052 1717 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 24 00:00:40.453172 update_engine[1717]: I20260124 00:00:40.453143 1717 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 24 00:00:41.209383 containerd[1736]: time="2026-01-24T00:00:41.209296553Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:00:41.212050 containerd[1736]: time="2026-01-24T00:00:41.211981834Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:00:41.212050 containerd[1736]: time="2026-01-24T00:00:41.212023034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:00:41.212206 kubelet[3217]: E0124 00:00:41.212174 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:00:41.212794 kubelet[3217]: E0124 00:00:41.212216 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:00:41.212794 kubelet[3217]: E0124 00:00:41.212450 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xrdkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mmrrm_calico-system(1900a277-348f-4eb2-aa7c-7d2406a64ec8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:00:41.212905 containerd[1736]: time="2026-01-24T00:00:41.212490834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 00:00:42.194755 containerd[1736]: time="2026-01-24T00:00:42.194709310Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:00:42.198128 containerd[1736]: time="2026-01-24T00:00:42.198086112Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 24 00:00:42.198205 containerd[1736]: time="2026-01-24T00:00:42.198184712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:00:42.198508 kubelet[3217]: E0124 00:00:42.198324 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:00:42.198508 kubelet[3217]: E0124 00:00:42.198371 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:00:42.198623 kubelet[3217]: E0124 00:00:42.198570 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sz4fp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5dd9d484d4-bprcl_calico-apiserver(237e41c6-ec2d-4a8d-bb7d-ca837318e8f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:00:42.199074 containerd[1736]: time="2026-01-24T00:00:42.198881353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 24 00:00:42.200536 kubelet[3217]: E0124 00:00:42.200497 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dd9d484d4-bprcl" podUID="237e41c6-ec2d-4a8d-bb7d-ca837318e8f7"
Jan 24 00:00:42.240883 kubelet[3217]: E0124 00:00:42.240730 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dd9d484d4-bprcl" podUID="237e41c6-ec2d-4a8d-bb7d-ca837318e8f7"
Jan 24 00:00:42.492357 containerd[1736]: time="2026-01-24T00:00:42.492134775Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:00:42.496023 containerd[1736]: time="2026-01-24T00:00:42.495923177Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 24 00:00:42.496023 containerd[1736]: time="2026-01-24T00:00:42.495992697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:00:42.496203 kubelet[3217]: E0124 00:00:42.496116 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 24 00:00:42.496203 kubelet[3217]: E0124 00:00:42.496155 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 24 00:00:42.496424 kubelet[3217]: E0124 00:00:42.496368 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xpv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-65kmp_calico-system(9390c20d-0be8-4dfe-954e-634e25852cb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:00:42.496821 containerd[1736]: time="2026-01-24T00:00:42.496785177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 24 00:00:42.498259 kubelet[3217]: E0124 00:00:42.498227 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-65kmp" podUID="9390c20d-0be8-4dfe-954e-634e25852cb2"
Jan 24 00:00:42.946169 containerd[1736]: time="2026-01-24T00:00:42.945969515Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:00:42.949545 containerd[1736]: time="2026-01-24T00:00:42.949433597Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 24 00:00:42.949545 containerd[1736]: time="2026-01-24T00:00:42.949487037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 24 00:00:42.949697 kubelet[3217]: E0124 00:00:42.949664 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 24 00:00:42.949760 kubelet[3217]: E0124 00:00:42.949710 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 24 00:00:42.950026 kubelet[3217]: E0124 00:00:42.949935 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:838d2d7f116c4e34b727b27e353cd551,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2hj8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7dfdc764f5-mkdn7_calico-system(f462240a-0a7b-4fa9-a623-1df80e2e9a5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:00:42.950126 containerd[1736]: time="2026-01-24T00:00:42.949991037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 24 00:00:42.952069 kubelet[3217]: E0124 00:00:42.951921 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dfdc764f5-mkdn7" podUID="f462240a-0a7b-4fa9-a623-1df80e2e9a5c"
Jan 24 00:00:43.243089 kubelet[3217]: E0124 00:00:43.242772 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-65kmp" podUID="9390c20d-0be8-4dfe-954e-634e25852cb2"
Jan 24 00:00:43.280736 containerd[1736]: time="2026-01-24T00:00:43.280686477Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:00:43.284585 containerd[1736]: time="2026-01-24T00:00:43.284526959Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 24 00:00:43.284697 containerd[1736]: time="2026-01-24T00:00:43.284642199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 24 00:00:43.284821 kubelet[3217]: E0124 00:00:43.284782 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 24 00:00:43.284873 kubelet[3217]: E0124 00:00:43.284830 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 24 00:00:43.284991 kubelet[3217]: E0124 00:00:43.284951 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xrdkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mmrrm_calico-system(1900a277-348f-4eb2-aa7c-7d2406a64ec8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:00:43.287075 kubelet[3217]: E0124 00:00:43.287022 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mmrrm" podUID="1900a277-348f-4eb2-aa7c-7d2406a64ec8"
Jan 24 00:00:44.244962 kubelet[3217]: E0124 00:00:44.244911 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mmrrm" podUID="1900a277-348f-4eb2-aa7c-7d2406a64ec8"
Jan 24 00:00:45.924253 containerd[1736]: time="2026-01-24T00:00:45.924054679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 00:00:46.292234 containerd[1736]: time="2026-01-24T00:00:46.292119578Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:00:46.294842 containerd[1736]: time="2026-01-24T00:00:46.294800739Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 24 00:00:46.294929 containerd[1736]: time="2026-01-24T00:00:46.294889979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:00:46.295092 kubelet[3217]: E0124 00:00:46.295056 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:00:46.296652 kubelet[3217]: E0124 00:00:46.295103 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:00:46.296652 kubelet[3217]: E0124 00:00:46.295254 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwcqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5dd9d484d4-qgr74_calico-apiserver(37f635e6-9d73-41e3-ac25-e030d9b2101d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:00:46.296864 kubelet[3217]: E0124 00:00:46.296811 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dd9d484d4-qgr74" podUID="37f635e6-9d73-41e3-ac25-e030d9b2101d"
Jan 24 00:00:50.370846 update_engine[1717]: I20260124 00:00:50.370664 1717 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 24 00:00:50.371205 update_engine[1717]: I20260124 00:00:50.370887 1717 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 24 00:00:50.371205 update_engine[1717]: I20260124 00:00:50.371094 1717 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 24 00:00:50.405233 update_engine[1717]: E20260124 00:00:50.405174 1717 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 24 00:00:50.405349 update_engine[1717]: I20260124 00:00:50.405288 1717 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jan 24 00:00:50.923255 containerd[1736]: time="2026-01-24T00:00:50.923212550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 24 00:00:52.007853 containerd[1736]: time="2026-01-24T00:00:52.007805566Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:00:52.010769 containerd[1736]: time="2026-01-24T00:00:52.010730808Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 24 00:00:52.011428 containerd[1736]: time="2026-01-24T00:00:52.010839168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 24 00:00:52.011571 kubelet[3217]: E0124 00:00:52.010957 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 24 00:00:52.011571 kubelet[3217]: E0124 00:00:52.010999 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 24 00:00:52.011571 kubelet[3217]: E0124 00:00:52.011149 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hjlsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-754bb44d48-hhlr2_calico-system(a37d52d3-c228-4df6-b0fc-c5d23ff527d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:00:52.012909 kubelet[3217]: E0124 00:00:52.012867 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-754bb44d48-hhlr2" podUID="a37d52d3-c228-4df6-b0fc-c5d23ff527d2"
Jan 24 00:00:52.923072 containerd[1736]: time="2026-01-24T00:00:52.922842779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 00:00:53.217948 containerd[1736]: time="2026-01-24T00:00:53.217701245Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:00:53.221981 containerd[1736]: time="2026-01-24T00:00:53.221150767Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 24 00:00:53.221981 containerd[1736]: time="2026-01-24T00:00:53.221273767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:00:53.222091 kubelet[3217]: E0124 00:00:53.221471 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:00:53.222091 kubelet[3217]: E0124 00:00:53.221520 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:00:53.222091 kubelet[3217]: E0124 00:00:53.221641 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sz4fp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5dd9d484d4-bprcl_calico-apiserver(237e41c6-ec2d-4a8d-bb7d-ca837318e8f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:00:53.223126 kubelet[3217]: E0124 00:00:53.223076 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dd9d484d4-bprcl" podUID="237e41c6-ec2d-4a8d-bb7d-ca837318e8f7"
Jan 24 00:00:53.926062 containerd[1736]: time="2026-01-24T00:00:53.925227835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 24 00:00:54.162524 containerd[1736]: time="2026-01-24T00:00:54.162479553Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:00:54.165449 containerd[1736]: time="2026-01-24T00:00:54.165407674Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 24 00:00:54.165543 containerd[1736]: time="2026-01-24T00:00:54.165499314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:00:54.167292 kubelet[3217]: E0124 00:00:54.165695 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 24 00:00:54.167292 kubelet[3217]: E0124 00:00:54.165743 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 24 00:00:54.167292 kubelet[3217]: E0124 00:00:54.165864 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xpv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-65kmp_calico-system(9390c20d-0be8-4dfe-954e-634e25852cb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:00:54.167572 kubelet[3217]: E0124 00:00:54.167544 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-65kmp" podUID="9390c20d-0be8-4dfe-954e-634e25852cb2"
Jan 24 00:00:54.924266 containerd[1736]: time="2026-01-24T00:00:54.924041610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 24 00:00:55.400024 containerd[1736]: time="2026-01-24T00:00:55.399852886Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:00:55.403786 containerd[1736]: time="2026-01-24T00:00:55.403670968Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 24 00:00:55.403786 containerd[1736]: time="2026-01-24T00:00:55.403731568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 24 00:00:55.403940 kubelet[3217]: E0124 00:00:55.403865 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 24 00:00:55.403940 kubelet[3217]: E0124 00:00:55.403912 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 24 00:00:55.404239 kubelet[3217]: E0124 00:00:55.404045 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xrdkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mmrrm_calico-system(1900a277-348f-4eb2-aa7c-7d2406a64ec8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:00:55.407050 containerd[1736]: time="2026-01-24T00:00:55.406710409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 24 00:00:55.761482 containerd[1736]: time="2026-01-24T00:00:55.760619344Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:00:55.763437 containerd[1736]: time="2026-01-24T00:00:55.763241426Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 24 00:00:55.763437 containerd[1736]: time="2026-01-24T00:00:55.763314706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 24 00:00:55.763573 kubelet[3217]: E0124 00:00:55.763463 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 24 00:00:55.763573 kubelet[3217]: E0124 00:00:55.763522 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 24 00:00:55.763684 kubelet[3217]: E0124 00:00:55.763639 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xrdkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mmrrm_calico-system(1900a277-348f-4eb2-aa7c-7d2406a64ec8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:00:55.764982 kubelet[3217]: E0124 00:00:55.764911 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mmrrm" podUID="1900a277-348f-4eb2-aa7c-7d2406a64ec8"
Jan 24 00:00:56.924378 containerd[1736]: time="2026-01-24T00:00:56.924334201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 24 00:00:57.440902 containerd[1736]: time="2026-01-24T00:00:57.440729137Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:00:57.444539 containerd[1736]: time="2026-01-24T00:00:57.444447819Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 24 00:00:57.444539 containerd[1736]: time="2026-01-24T00:00:57.444505699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 24 00:00:57.444941 kubelet[3217]: E0124 00:00:57.444770 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 24 00:00:57.444941 kubelet[3217]: E0124 00:00:57.444833 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 24 00:00:57.447482 kubelet[3217]: E0124 00:00:57.445472 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2hj8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7dfdc764f5-mkdn7_calico-system(f462240a-0a7b-4fa9-a623-1df80e2e9a5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:00:57.448620 kubelet[3217]: E0124 00:00:57.448572 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dfdc764f5-mkdn7" podUID="f462240a-0a7b-4fa9-a623-1df80e2e9a5c"
Jan 24 00:00:58.922547 kubelet[3217]: E0124 00:00:58.922236 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dd9d484d4-qgr74" podUID="37f635e6-9d73-41e3-ac25-e030d9b2101d"
Jan 24 00:01:00.372487 update_engine[1717]: I20260124 00:01:00.372421 1717 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 24 00:01:00.372925 update_engine[1717]: I20260124 00:01:00.372689 1717 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 24 00:01:00.372925 update_engine[1717]: I20260124 00:01:00.372896 1717 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 24 00:01:00.383498 update_engine[1717]: E20260124 00:01:00.383460 1717 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 24 00:01:00.383573 update_engine[1717]: I20260124 00:01:00.383523 1717 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 24 00:01:00.383573 update_engine[1717]: I20260124 00:01:00.383531 1717 omaha_request_action.cc:617] Omaha request response:
Jan 24 00:01:00.383616 update_engine[1717]: E20260124 00:01:00.383605 1717 omaha_request_action.cc:636] Omaha request network transfer failed.
Jan 24 00:01:00.383637 update_engine[1717]: I20260124 00:01:00.383621 1717 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jan 24 00:01:00.383637 update_engine[1717]: I20260124 00:01:00.383626 1717 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 24 00:01:00.383637 update_engine[1717]: I20260124 00:01:00.383631 1717 update_attempter.cc:306] Processing Done.
Jan 24 00:01:00.383696 update_engine[1717]: E20260124 00:01:00.383643 1717 update_attempter.cc:619] Update failed.
Jan 24 00:01:00.383696 update_engine[1717]: I20260124 00:01:00.383649 1717 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jan 24 00:01:00.383696 update_engine[1717]: I20260124 00:01:00.383653 1717 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jan 24 00:01:00.383696 update_engine[1717]: I20260124 00:01:00.383658 1717 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jan 24 00:01:00.383771 update_engine[1717]: I20260124 00:01:00.383719 1717 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 24 00:01:00.383771 update_engine[1717]: I20260124 00:01:00.383738 1717 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 24 00:01:00.383771 update_engine[1717]: I20260124 00:01:00.383744 1717 omaha_request_action.cc:272] Request:
Jan 24 00:01:00.383771 update_engine[1717]:
Jan 24 00:01:00.383771 update_engine[1717]:
Jan 24 00:01:00.383771 update_engine[1717]:
Jan 24 00:01:00.383771 update_engine[1717]:
Jan 24 00:01:00.383771 update_engine[1717]:
Jan 24 00:01:00.383771 update_engine[1717]:
Jan 24 00:01:00.383771 update_engine[1717]: I20260124 00:01:00.383749 1717 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 24 00:01:00.384044 update_engine[1717]: I20260124 00:01:00.383879 1717 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 24 00:01:00.384044 update_engine[1717]: I20260124 00:01:00.384037 1717 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 24 00:01:00.384330 locksmithd[1774]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jan 24 00:01:00.393127 update_engine[1717]: E20260124 00:01:00.392956 1717 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 24 00:01:00.393127 update_engine[1717]: I20260124 00:01:00.393018 1717 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 24 00:01:00.393127 update_engine[1717]: I20260124 00:01:00.393025 1717 omaha_request_action.cc:617] Omaha request response:
Jan 24 00:01:00.393127 update_engine[1717]: I20260124 00:01:00.393033 1717 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 24 00:01:00.393127 update_engine[1717]: I20260124 00:01:00.393038 1717 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 24 00:01:00.393127 update_engine[1717]: I20260124 00:01:00.393042 1717 update_attempter.cc:306] Processing Done.
Jan 24 00:01:00.393127 update_engine[1717]: I20260124 00:01:00.393049 1717 update_attempter.cc:310] Error event sent.
Jan 24 00:01:00.393127 update_engine[1717]: I20260124 00:01:00.393057 1717 update_check_scheduler.cc:74] Next update check in 47m48s
Jan 24 00:01:00.393345 locksmithd[1774]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jan 24 00:01:06.922734 kubelet[3217]: E0124 00:01:06.922334 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-754bb44d48-hhlr2" podUID="a37d52d3-c228-4df6-b0fc-c5d23ff527d2"
Jan 24 00:01:07.922817 kubelet[3217]: E0124 00:01:07.922765 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dd9d484d4-bprcl" podUID="237e41c6-ec2d-4a8d-bb7d-ca837318e8f7"
Jan 24 00:01:08.925188 containerd[1736]: time="2026-01-24T00:01:08.925153484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 24 00:01:09.175357 containerd[1736]: time="2026-01-24T00:01:09.174772691Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:01:09.177884 containerd[1736]: time="2026-01-24T00:01:09.177803253Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 24 00:01:09.177884 containerd[1736]: time="2026-01-24T00:01:09.177856173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 24 00:01:09.178411 kubelet[3217]: E0124 00:01:09.178359 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 24 00:01:09.178690 kubelet[3217]: E0124 00:01:09.178427 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 24 00:01:09.178690 kubelet[3217]: E0124 00:01:09.178547 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:838d2d7f116c4e34b727b27e353cd551,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2hj8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7dfdc764f5-mkdn7_calico-system(f462240a-0a7b-4fa9-a623-1df80e2e9a5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:01:09.181977 kubelet[3217]: E0124 00:01:09.181938 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dfdc764f5-mkdn7" podUID="f462240a-0a7b-4fa9-a623-1df80e2e9a5c"
Jan 24 00:01:09.587066 systemd[1]: Started sshd@7-10.200.20.20:22-10.200.16.10:34022.service - OpenSSH per-connection server daemon (10.200.16.10:34022).
Jan 24 00:01:09.924274 kubelet[3217]: E0124 00:01:09.924159 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-65kmp" podUID="9390c20d-0be8-4dfe-954e-634e25852cb2"
Jan 24 00:01:09.926063 kubelet[3217]: E0124 00:01:09.925640 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mmrrm" podUID="1900a277-348f-4eb2-aa7c-7d2406a64ec8"
Jan 24 00:01:10.010866 sshd[5708]: Accepted publickey for core from 10.200.16.10 port 34022 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg
Jan 24 00:01:10.013475 sshd[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:01:10.017278 systemd-logind[1714]: New session 10 of user core.
Jan 24 00:01:10.023517 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 24 00:01:10.419596 sshd[5708]: pam_unix(sshd:session): session closed for user core
Jan 24 00:01:10.424702 systemd[1]: sshd@7-10.200.20.20:22-10.200.16.10:34022.service: Deactivated successfully.
Jan 24 00:01:10.428364 systemd[1]: session-10.scope: Deactivated successfully.
Jan 24 00:01:10.430770 systemd-logind[1714]: Session 10 logged out. Waiting for processes to exit.
Jan 24 00:01:10.432021 systemd-logind[1714]: Removed session 10.
Jan 24 00:01:12.923326 containerd[1736]: time="2026-01-24T00:01:12.923273601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 00:01:13.302936 containerd[1736]: time="2026-01-24T00:01:13.302388852Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:01:13.305717 containerd[1736]: time="2026-01-24T00:01:13.305611854Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 24 00:01:13.306174 containerd[1736]: time="2026-01-24T00:01:13.305655934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:01:13.306791 kubelet[3217]: E0124 00:01:13.306317 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:01:13.306791 kubelet[3217]: E0124 00:01:13.306364 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:01:13.306791 kubelet[3217]: E0124 00:01:13.306501 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwcqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5dd9d484d4-qgr74_calico-apiserver(37f635e6-9d73-41e3-ac25-e030d9b2101d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:01:13.307774 kubelet[3217]: E0124 00:01:13.307735 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dd9d484d4-qgr74" podUID="37f635e6-9d73-41e3-ac25-e030d9b2101d"
Jan 24 00:01:15.515762 systemd[1]: Started sshd@8-10.200.20.20:22-10.200.16.10:34024.service - OpenSSH per-connection server daemon (10.200.16.10:34024).
Jan 24 00:01:16.003592 sshd[5725]: Accepted publickey for core from 10.200.16.10 port 34024 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg
Jan 24 00:01:16.004927 sshd[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:01:16.011603 systemd-logind[1714]: New session 11 of user core.
Jan 24 00:01:16.016126 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 24 00:01:16.432363 sshd[5725]: pam_unix(sshd:session): session closed for user core
Jan 24 00:01:16.436376 systemd[1]: sshd@8-10.200.20.20:22-10.200.16.10:34024.service: Deactivated successfully.
Jan 24 00:01:16.438558 systemd[1]: session-11.scope: Deactivated successfully.
Jan 24 00:01:16.439932 systemd-logind[1714]: Session 11 logged out. Waiting for processes to exit.
Jan 24 00:01:16.442186 systemd-logind[1714]: Removed session 11.
Jan 24 00:01:19.927036 containerd[1736]: time="2026-01-24T00:01:19.926663396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:01:20.303475 containerd[1736]: time="2026-01-24T00:01:20.303196001Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:01:20.306115 containerd[1736]: time="2026-01-24T00:01:20.306019602Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:01:20.306115 containerd[1736]: time="2026-01-24T00:01:20.306060162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:01:20.306606 kubelet[3217]: E0124 00:01:20.306387 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:01:20.306606 kubelet[3217]: E0124 00:01:20.306455 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:01:20.307717 kubelet[3217]: E0124 00:01:20.306861 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hjlsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-754bb44d48-hhlr2_calico-system(a37d52d3-c228-4df6-b0fc-c5d23ff527d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:01:20.308063 kubelet[3217]: E0124 00:01:20.308023 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-754bb44d48-hhlr2" podUID="a37d52d3-c228-4df6-b0fc-c5d23ff527d2" Jan 24 00:01:20.922848 containerd[1736]: time="2026-01-24T00:01:20.922541818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:01:21.188199 containerd[1736]: time="2026-01-24T00:01:21.188061843Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:01:21.194446 containerd[1736]: time="2026-01-24T00:01:21.194373606Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:01:21.194643 containerd[1736]: time="2026-01-24T00:01:21.194429526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:01:21.194683 kubelet[3217]: E0124 00:01:21.194613 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:01:21.194683 kubelet[3217]: E0124 00:01:21.194664 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:01:21.195178 kubelet[3217]: E0124 00:01:21.194795 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xpv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-65kmp_calico-system(9390c20d-0be8-4dfe-954e-634e25852cb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:01:21.196463 kubelet[3217]: E0124 00:01:21.196430 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-65kmp" podUID="9390c20d-0be8-4dfe-954e-634e25852cb2"
Jan 24 00:01:21.527654 systemd[1]: Started sshd@9-10.200.20.20:22-10.200.16.10:51756.service - OpenSSH per-connection server daemon (10.200.16.10:51756).
Jan 24 00:01:21.924661 containerd[1736]: time="2026-01-24T00:01:21.924570684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 00:01:22.019420 sshd[5739]: Accepted publickey for core from 10.200.16.10 port 51756 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg
Jan 24 00:01:22.020629 sshd[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:01:22.025706 systemd-logind[1714]: New session 12 of user core.
Jan 24 00:01:22.032699 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 24 00:01:22.208772 containerd[1736]: time="2026-01-24T00:01:22.208663599Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:01:22.215376 containerd[1736]: time="2026-01-24T00:01:22.215326082Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 24 00:01:22.215572 containerd[1736]: time="2026-01-24T00:01:22.215420842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:01:22.215758 kubelet[3217]: E0124 00:01:22.215722 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:01:22.216109 kubelet[3217]: E0124 00:01:22.216089 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:01:22.217736 kubelet[3217]: E0124 00:01:22.217688 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sz4fp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5dd9d484d4-bprcl_calico-apiserver(237e41c6-ec2d-4a8d-bb7d-ca837318e8f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:01:22.219013 kubelet[3217]: E0124 00:01:22.218984 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dd9d484d4-bprcl" podUID="237e41c6-ec2d-4a8d-bb7d-ca837318e8f7"
Jan 24 00:01:22.439519 sshd[5739]: pam_unix(sshd:session): session closed for user core
Jan 24 00:01:22.442815 systemd[1]: sshd@9-10.200.20.20:22-10.200.16.10:51756.service: Deactivated successfully.
Jan 24 00:01:22.448348 systemd[1]: session-12.scope: Deactivated successfully.
Jan 24 00:01:22.452074 systemd-logind[1714]: Session 12 logged out. Waiting for processes to exit.
Jan 24 00:01:22.453672 systemd-logind[1714]: Removed session 12.
Jan 24 00:01:23.925083 containerd[1736]: time="2026-01-24T00:01:23.924874014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:01:24.247059 containerd[1736]: time="2026-01-24T00:01:24.246567629Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:01:24.250040 containerd[1736]: time="2026-01-24T00:01:24.249860271Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:01:24.250040 containerd[1736]: time="2026-01-24T00:01:24.249998471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:01:24.250671 kubelet[3217]: E0124 00:01:24.250486 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:01:24.250671 kubelet[3217]: E0124 00:01:24.250540 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:01:24.251069 kubelet[3217]: E0124 00:01:24.250750 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xrdkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mmrrm_calico-system(1900a277-348f-4eb2-aa7c-7d2406a64ec8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:01:24.251472 containerd[1736]: time="2026-01-24T00:01:24.251383151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:01:24.509891 containerd[1736]: time="2026-01-24T00:01:24.509760572Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:01:24.514891 containerd[1736]: time="2026-01-24T00:01:24.514848855Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:01:24.514979 containerd[1736]: time="2026-01-24T00:01:24.514939095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:01:24.515140 kubelet[3217]: E0124 00:01:24.515096 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:01:24.515188 kubelet[3217]: E0124 00:01:24.515159 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:01:24.515385 kubelet[3217]: E0124 00:01:24.515340 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2hj8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7dfdc764f5-mkdn7_calico-system(f462240a-0a7b-4fa9-a623-1df80e2e9a5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:01:24.515743 containerd[1736]: time="2026-01-24T00:01:24.515718135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:01:24.516918 kubelet[3217]: E0124 00:01:24.516876 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dfdc764f5-mkdn7" podUID="f462240a-0a7b-4fa9-a623-1df80e2e9a5c" Jan 24 00:01:24.765075 containerd[1736]: time="2026-01-24T00:01:24.764965951Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:01:24.768063 containerd[1736]: time="2026-01-24T00:01:24.768027353Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:01:24.768140 containerd[1736]: time="2026-01-24T00:01:24.768113993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:01:24.768484 kubelet[3217]: E0124 00:01:24.768269 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:01:24.768484 kubelet[3217]: E0124 00:01:24.768316 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:01:24.768484 kubelet[3217]: E0124 00:01:24.768440 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xrdkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mmrrm_calico-system(1900a277-348f-4eb2-aa7c-7d2406a64ec8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:01:24.769622 kubelet[3217]: E0124 00:01:24.769583 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mmrrm" podUID="1900a277-348f-4eb2-aa7c-7d2406a64ec8" Jan 24 00:01:24.923447 kubelet[3217]: E0124 00:01:24.923157 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-5dd9d484d4-qgr74" podUID="37f635e6-9d73-41e3-ac25-e030d9b2101d" Jan 24 00:01:26.078231 containerd[1736]: time="2026-01-24T00:01:26.077938666Z" level=info msg="StopPodSandbox for \"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\"" Jan 24 00:01:26.143862 containerd[1736]: 2026-01-24 00:01:26.109 [WARNING][5784] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-eth0", GenerateName:"calico-apiserver-5dd9d484d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"237e41c6-ec2d-4a8d-bb7d-ca837318e8f7", ResourceVersion:"1354", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd9d484d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe", Pod:"calico-apiserver-5dd9d484d4-bprcl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ef39aeb938", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:01:26.143862 containerd[1736]: 2026-01-24 00:01:26.110 [INFO][5784] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" Jan 24 00:01:26.143862 containerd[1736]: 2026-01-24 00:01:26.110 [INFO][5784] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" iface="eth0" netns="" Jan 24 00:01:26.143862 containerd[1736]: 2026-01-24 00:01:26.110 [INFO][5784] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" Jan 24 00:01:26.143862 containerd[1736]: 2026-01-24 00:01:26.110 [INFO][5784] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" Jan 24 00:01:26.143862 containerd[1736]: 2026-01-24 00:01:26.128 [INFO][5791] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" HandleID="k8s-pod-network.7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-eth0" Jan 24 00:01:26.143862 containerd[1736]: 2026-01-24 00:01:26.128 [INFO][5791] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:01:26.143862 containerd[1736]: 2026-01-24 00:01:26.129 [INFO][5791] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:01:26.143862 containerd[1736]: 2026-01-24 00:01:26.138 [WARNING][5791] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" HandleID="k8s-pod-network.7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-eth0" Jan 24 00:01:26.143862 containerd[1736]: 2026-01-24 00:01:26.138 [INFO][5791] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" HandleID="k8s-pod-network.7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-eth0" Jan 24 00:01:26.143862 containerd[1736]: 2026-01-24 00:01:26.140 [INFO][5791] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:01:26.143862 containerd[1736]: 2026-01-24 00:01:26.142 [INFO][5784] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" Jan 24 00:01:26.144293 containerd[1736]: time="2026-01-24T00:01:26.143905582Z" level=info msg="TearDown network for sandbox \"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\" successfully" Jan 24 00:01:26.144293 containerd[1736]: time="2026-01-24T00:01:26.143930182Z" level=info msg="StopPodSandbox for \"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\" returns successfully" Jan 24 00:01:26.145082 containerd[1736]: time="2026-01-24T00:01:26.144791183Z" level=info msg="RemovePodSandbox for \"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\"" Jan 24 00:01:26.145082 containerd[1736]: time="2026-01-24T00:01:26.144819783Z" level=info msg="Forcibly stopping sandbox \"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\"" Jan 24 00:01:26.207341 containerd[1736]: 2026-01-24 00:01:26.176 [WARNING][5805] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-eth0", GenerateName:"calico-apiserver-5dd9d484d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"237e41c6-ec2d-4a8d-bb7d-ca837318e8f7", ResourceVersion:"1354", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd9d484d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"438e481c8ac3f3c851c1e0f61a03217c65ca280b644be1a6412c32d9ea3b3afe", Pod:"calico-apiserver-5dd9d484d4-bprcl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ef39aeb938", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:01:26.207341 containerd[1736]: 2026-01-24 00:01:26.176 [INFO][5805] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" Jan 24 00:01:26.207341 containerd[1736]: 2026-01-24 00:01:26.176 [INFO][5805] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" iface="eth0" netns="" Jan 24 00:01:26.207341 containerd[1736]: 2026-01-24 00:01:26.176 [INFO][5805] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" Jan 24 00:01:26.207341 containerd[1736]: 2026-01-24 00:01:26.176 [INFO][5805] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" Jan 24 00:01:26.207341 containerd[1736]: 2026-01-24 00:01:26.194 [INFO][5812] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" HandleID="k8s-pod-network.7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-eth0" Jan 24 00:01:26.207341 containerd[1736]: 2026-01-24 00:01:26.194 [INFO][5812] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:01:26.207341 containerd[1736]: 2026-01-24 00:01:26.194 [INFO][5812] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:01:26.207341 containerd[1736]: 2026-01-24 00:01:26.202 [WARNING][5812] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" HandleID="k8s-pod-network.7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-eth0" Jan 24 00:01:26.207341 containerd[1736]: 2026-01-24 00:01:26.202 [INFO][5812] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" HandleID="k8s-pod-network.7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--bprcl-eth0" Jan 24 00:01:26.207341 containerd[1736]: 2026-01-24 00:01:26.204 [INFO][5812] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:01:26.207341 containerd[1736]: 2026-01-24 00:01:26.205 [INFO][5805] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf" Jan 24 00:01:26.207341 containerd[1736]: time="2026-01-24T00:01:26.207270977Z" level=info msg="TearDown network for sandbox \"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\" successfully" Jan 24 00:01:26.225796 containerd[1736]: time="2026-01-24T00:01:26.225633747Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:01:26.225796 containerd[1736]: time="2026-01-24T00:01:26.225707987Z" level=info msg="RemovePodSandbox \"7012f1da49fa9921d93e8ce0e30898367183722048ff3c814d848608a023d9bf\" returns successfully" Jan 24 00:01:26.226406 containerd[1736]: time="2026-01-24T00:01:26.226136467Z" level=info msg="StopPodSandbox for \"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\"" Jan 24 00:01:26.309602 containerd[1736]: 2026-01-24 00:01:26.260 [WARNING][5826] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2024f333-ad36-464d-817d-816658048dd9", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c", Pod:"coredns-674b8bbfcf-bjd2b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali05bbb31b6ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:01:26.309602 containerd[1736]: 2026-01-24 00:01:26.261 [INFO][5826] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" Jan 24 00:01:26.309602 containerd[1736]: 2026-01-24 00:01:26.261 [INFO][5826] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" iface="eth0" netns="" Jan 24 00:01:26.309602 containerd[1736]: 2026-01-24 00:01:26.261 [INFO][5826] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" Jan 24 00:01:26.309602 containerd[1736]: 2026-01-24 00:01:26.261 [INFO][5826] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" Jan 24 00:01:26.309602 containerd[1736]: 2026-01-24 00:01:26.294 [INFO][5833] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" HandleID="k8s-pod-network.ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" Workload="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-eth0" Jan 24 00:01:26.309602 containerd[1736]: 2026-01-24 00:01:26.295 [INFO][5833] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:01:26.309602 containerd[1736]: 2026-01-24 00:01:26.295 [INFO][5833] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:01:26.309602 containerd[1736]: 2026-01-24 00:01:26.305 [WARNING][5833] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" HandleID="k8s-pod-network.ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" Workload="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-eth0" Jan 24 00:01:26.309602 containerd[1736]: 2026-01-24 00:01:26.305 [INFO][5833] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" HandleID="k8s-pod-network.ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" Workload="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-eth0" Jan 24 00:01:26.309602 containerd[1736]: 2026-01-24 00:01:26.306 [INFO][5833] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:01:26.309602 containerd[1736]: 2026-01-24 00:01:26.308 [INFO][5826] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" Jan 24 00:01:26.310126 containerd[1736]: time="2026-01-24T00:01:26.309693353Z" level=info msg="TearDown network for sandbox \"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\" successfully" Jan 24 00:01:26.310126 containerd[1736]: time="2026-01-24T00:01:26.309717673Z" level=info msg="StopPodSandbox for \"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\" returns successfully" Jan 24 00:01:26.310947 containerd[1736]: time="2026-01-24T00:01:26.310650313Z" level=info msg="RemovePodSandbox for \"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\"" Jan 24 00:01:26.310947 containerd[1736]: time="2026-01-24T00:01:26.310700953Z" level=info msg="Forcibly stopping sandbox \"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\"" Jan 24 00:01:26.378945 containerd[1736]: 2026-01-24 00:01:26.347 [WARNING][5847] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2024f333-ad36-464d-817d-816658048dd9", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"d8d917ad8b72adb8d4e780339e3c12bf49e392f1a2cfacdb9b87d00cbf52a87c", Pod:"coredns-674b8bbfcf-bjd2b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali05bbb31b6ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:01:26.378945 containerd[1736]: 2026-01-24 00:01:26.348 [INFO][5847] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" Jan 24 00:01:26.378945 containerd[1736]: 2026-01-24 00:01:26.348 [INFO][5847] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" iface="eth0" netns="" Jan 24 00:01:26.378945 containerd[1736]: 2026-01-24 00:01:26.348 [INFO][5847] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" Jan 24 00:01:26.378945 containerd[1736]: 2026-01-24 00:01:26.348 [INFO][5847] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" Jan 24 00:01:26.378945 containerd[1736]: 2026-01-24 00:01:26.365 [INFO][5854] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" HandleID="k8s-pod-network.ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" Workload="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-eth0" Jan 24 00:01:26.378945 containerd[1736]: 2026-01-24 00:01:26.365 [INFO][5854] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:01:26.378945 containerd[1736]: 2026-01-24 00:01:26.365 [INFO][5854] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:01:26.378945 containerd[1736]: 2026-01-24 00:01:26.374 [WARNING][5854] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" HandleID="k8s-pod-network.ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" Workload="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-eth0" Jan 24 00:01:26.378945 containerd[1736]: 2026-01-24 00:01:26.374 [INFO][5854] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" HandleID="k8s-pod-network.ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" Workload="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--bjd2b-eth0" Jan 24 00:01:26.378945 containerd[1736]: 2026-01-24 00:01:26.375 [INFO][5854] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:01:26.378945 containerd[1736]: 2026-01-24 00:01:26.377 [INFO][5847] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5" Jan 24 00:01:26.378945 containerd[1736]: time="2026-01-24T00:01:26.378889271Z" level=info msg="TearDown network for sandbox \"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\" successfully" Jan 24 00:01:26.386873 containerd[1736]: time="2026-01-24T00:01:26.386245155Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:01:26.386873 containerd[1736]: time="2026-01-24T00:01:26.386678555Z" level=info msg="RemovePodSandbox \"ba6347e5b8378f7166af41eb08b0a74f7b512fd94a76bcdb7bdc444c264982d5\" returns successfully" Jan 24 00:01:26.387272 containerd[1736]: time="2026-01-24T00:01:26.387112156Z" level=info msg="StopPodSandbox for \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\"" Jan 24 00:01:26.462161 containerd[1736]: 2026-01-24 00:01:26.424 [WARNING][5868] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-eth0", GenerateName:"calico-kube-controllers-754bb44d48-", Namespace:"calico-system", SelfLink:"", UID:"a37d52d3-c228-4df6-b0fc-c5d23ff527d2", ResourceVersion:"1337", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"754bb44d48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b", Pod:"calico-kube-controllers-754bb44d48-hhlr2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali95a6cfe1d8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:01:26.462161 containerd[1736]: 2026-01-24 00:01:26.424 [INFO][5868] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" Jan 24 00:01:26.462161 containerd[1736]: 2026-01-24 00:01:26.424 [INFO][5868] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" iface="eth0" netns="" Jan 24 00:01:26.462161 containerd[1736]: 2026-01-24 00:01:26.424 [INFO][5868] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" Jan 24 00:01:26.462161 containerd[1736]: 2026-01-24 00:01:26.424 [INFO][5868] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" Jan 24 00:01:26.462161 containerd[1736]: 2026-01-24 00:01:26.447 [INFO][5876] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" HandleID="k8s-pod-network.4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-eth0" Jan 24 00:01:26.462161 containerd[1736]: 2026-01-24 00:01:26.448 [INFO][5876] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:01:26.462161 containerd[1736]: 2026-01-24 00:01:26.448 [INFO][5876] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:01:26.462161 containerd[1736]: 2026-01-24 00:01:26.456 [WARNING][5876] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" HandleID="k8s-pod-network.4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-eth0" Jan 24 00:01:26.462161 containerd[1736]: 2026-01-24 00:01:26.456 [INFO][5876] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" HandleID="k8s-pod-network.4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-eth0" Jan 24 00:01:26.462161 containerd[1736]: 2026-01-24 00:01:26.458 [INFO][5876] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:01:26.462161 containerd[1736]: 2026-01-24 00:01:26.460 [INFO][5868] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" Jan 24 00:01:26.462161 containerd[1736]: time="2026-01-24T00:01:26.461971718Z" level=info msg="TearDown network for sandbox \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\" successfully" Jan 24 00:01:26.462161 containerd[1736]: time="2026-01-24T00:01:26.461994118Z" level=info msg="StopPodSandbox for \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\" returns successfully" Jan 24 00:01:26.463137 containerd[1736]: time="2026-01-24T00:01:26.463111439Z" level=info msg="RemovePodSandbox for \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\"" Jan 24 00:01:26.463195 containerd[1736]: time="2026-01-24T00:01:26.463143079Z" level=info msg="Forcibly stopping sandbox \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\"" Jan 24 00:01:26.527689 containerd[1736]: 2026-01-24 00:01:26.497 [WARNING][5890] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-eth0", GenerateName:"calico-kube-controllers-754bb44d48-", Namespace:"calico-system", SelfLink:"", UID:"a37d52d3-c228-4df6-b0fc-c5d23ff527d2", ResourceVersion:"1337", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"754bb44d48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"d4d410ac5a453217816f58e85c0a68edc3db48b5cfcf7c2cdcce5fa7ad8af78b", Pod:"calico-kube-controllers-754bb44d48-hhlr2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali95a6cfe1d8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:01:26.527689 containerd[1736]: 2026-01-24 00:01:26.497 [INFO][5890] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" Jan 24 00:01:26.527689 containerd[1736]: 2026-01-24 00:01:26.497 [INFO][5890] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" iface="eth0" netns="" Jan 24 00:01:26.527689 containerd[1736]: 2026-01-24 00:01:26.497 [INFO][5890] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" Jan 24 00:01:26.527689 containerd[1736]: 2026-01-24 00:01:26.497 [INFO][5890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" Jan 24 00:01:26.527689 containerd[1736]: 2026-01-24 00:01:26.514 [INFO][5897] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" HandleID="k8s-pod-network.4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-eth0" Jan 24 00:01:26.527689 containerd[1736]: 2026-01-24 00:01:26.514 [INFO][5897] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:01:26.527689 containerd[1736]: 2026-01-24 00:01:26.514 [INFO][5897] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:01:26.527689 containerd[1736]: 2026-01-24 00:01:26.523 [WARNING][5897] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" HandleID="k8s-pod-network.4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-eth0" Jan 24 00:01:26.527689 containerd[1736]: 2026-01-24 00:01:26.523 [INFO][5897] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" HandleID="k8s-pod-network.4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--kube--controllers--754bb44d48--hhlr2-eth0" Jan 24 00:01:26.527689 containerd[1736]: 2026-01-24 00:01:26.524 [INFO][5897] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:01:26.527689 containerd[1736]: 2026-01-24 00:01:26.526 [INFO][5890] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e" Jan 24 00:01:26.529368 containerd[1736]: time="2026-01-24T00:01:26.527849036Z" level=info msg="TearDown network for sandbox \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\" successfully" Jan 24 00:01:26.537115 containerd[1736]: time="2026-01-24T00:01:26.537031441Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:01:26.537312 containerd[1736]: time="2026-01-24T00:01:26.537104921Z" level=info msg="RemovePodSandbox \"4ccd0014c3660c989080663e46b6e9ebf3218e38382ef34ca25db2894cc3fc7e\" returns successfully" Jan 24 00:01:26.537746 containerd[1736]: time="2026-01-24T00:01:26.537723921Z" level=info msg="StopPodSandbox for \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\"" Jan 24 00:01:26.604242 containerd[1736]: 2026-01-24 00:01:26.568 [WARNING][5911] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9390c20d-0be8-4dfe-954e-634e25852cb2", ResourceVersion:"1348", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e", Pod:"goldmane-666569f655-65kmp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.5.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2d9e12122c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:01:26.604242 containerd[1736]: 2026-01-24 00:01:26.568 [INFO][5911] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" Jan 24 00:01:26.604242 containerd[1736]: 2026-01-24 00:01:26.568 [INFO][5911] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" iface="eth0" netns="" Jan 24 00:01:26.604242 containerd[1736]: 2026-01-24 00:01:26.569 [INFO][5911] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" Jan 24 00:01:26.604242 containerd[1736]: 2026-01-24 00:01:26.569 [INFO][5911] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" Jan 24 00:01:26.604242 containerd[1736]: 2026-01-24 00:01:26.589 [INFO][5919] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" HandleID="k8s-pod-network.408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" Workload="ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-eth0" Jan 24 00:01:26.604242 containerd[1736]: 2026-01-24 00:01:26.589 [INFO][5919] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:01:26.604242 containerd[1736]: 2026-01-24 00:01:26.589 [INFO][5919] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:01:26.604242 containerd[1736]: 2026-01-24 00:01:26.599 [WARNING][5919] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" HandleID="k8s-pod-network.408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" Workload="ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-eth0" Jan 24 00:01:26.604242 containerd[1736]: 2026-01-24 00:01:26.599 [INFO][5919] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" HandleID="k8s-pod-network.408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" Workload="ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-eth0" Jan 24 00:01:26.604242 containerd[1736]: 2026-01-24 00:01:26.600 [INFO][5919] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:01:26.604242 containerd[1736]: 2026-01-24 00:01:26.602 [INFO][5911] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" Jan 24 00:01:26.604669 containerd[1736]: time="2026-01-24T00:01:26.604420480Z" level=info msg="TearDown network for sandbox \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\" successfully" Jan 24 00:01:26.604669 containerd[1736]: time="2026-01-24T00:01:26.604449800Z" level=info msg="StopPodSandbox for \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\" returns successfully" Jan 24 00:01:26.604893 containerd[1736]: time="2026-01-24T00:01:26.604865400Z" level=info msg="RemovePodSandbox for \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\"" Jan 24 00:01:26.604927 containerd[1736]: time="2026-01-24T00:01:26.604904040Z" level=info msg="Forcibly stopping sandbox \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\"" Jan 24 00:01:26.669447 containerd[1736]: 2026-01-24 00:01:26.636 [WARNING][5933] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9390c20d-0be8-4dfe-954e-634e25852cb2", ResourceVersion:"1348", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"b613ccef9f09e1e7fea0113f023ab39d47a0f709b4b5dcfe8d03c5d5a72d138e", Pod:"goldmane-666569f655-65kmp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.5.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2d9e12122c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:01:26.669447 containerd[1736]: 2026-01-24 00:01:26.636 [INFO][5933] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" Jan 24 00:01:26.669447 containerd[1736]: 2026-01-24 00:01:26.636 [INFO][5933] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" iface="eth0" netns="" Jan 24 00:01:26.669447 containerd[1736]: 2026-01-24 00:01:26.636 [INFO][5933] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" Jan 24 00:01:26.669447 containerd[1736]: 2026-01-24 00:01:26.636 [INFO][5933] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" Jan 24 00:01:26.669447 containerd[1736]: 2026-01-24 00:01:26.656 [INFO][5940] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" HandleID="k8s-pod-network.408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" Workload="ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-eth0" Jan 24 00:01:26.669447 containerd[1736]: 2026-01-24 00:01:26.656 [INFO][5940] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:01:26.669447 containerd[1736]: 2026-01-24 00:01:26.656 [INFO][5940] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:01:26.669447 containerd[1736]: 2026-01-24 00:01:26.664 [WARNING][5940] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" HandleID="k8s-pod-network.408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" Workload="ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-eth0" Jan 24 00:01:26.669447 containerd[1736]: 2026-01-24 00:01:26.665 [INFO][5940] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" HandleID="k8s-pod-network.408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" Workload="ci--4081.3.6--n--2a642b76b3-k8s-goldmane--666569f655--65kmp-eth0" Jan 24 00:01:26.669447 containerd[1736]: 2026-01-24 00:01:26.666 [INFO][5940] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:01:26.669447 containerd[1736]: 2026-01-24 00:01:26.667 [INFO][5933] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869" Jan 24 00:01:26.669819 containerd[1736]: time="2026-01-24T00:01:26.669446397Z" level=info msg="TearDown network for sandbox \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\" successfully" Jan 24 00:01:26.678861 containerd[1736]: time="2026-01-24T00:01:26.678822042Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:01:26.678949 containerd[1736]: time="2026-01-24T00:01:26.678881722Z" level=info msg="RemovePodSandbox \"408ccf35837419d5c7c51e78625a0a721d890431daf3d0acff919d5a011c0869\" returns successfully" Jan 24 00:01:26.679319 containerd[1736]: time="2026-01-24T00:01:26.679295082Z" level=info msg="StopPodSandbox for \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\"" Jan 24 00:01:26.748662 containerd[1736]: 2026-01-24 00:01:26.714 [WARNING][5954] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-eth0", GenerateName:"calico-apiserver-5dd9d484d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"37f635e6-9d73-41e3-ac25-e030d9b2101d", ResourceVersion:"1386", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd9d484d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88", Pod:"calico-apiserver-5dd9d484d4-qgr74", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0cc3b1e5ef9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:01:26.748662 containerd[1736]: 2026-01-24 00:01:26.715 [INFO][5954] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" Jan 24 00:01:26.748662 containerd[1736]: 2026-01-24 00:01:26.715 [INFO][5954] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" iface="eth0" netns="" Jan 24 00:01:26.748662 containerd[1736]: 2026-01-24 00:01:26.715 [INFO][5954] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" Jan 24 00:01:26.748662 containerd[1736]: 2026-01-24 00:01:26.715 [INFO][5954] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" Jan 24 00:01:26.748662 containerd[1736]: 2026-01-24 00:01:26.734 [INFO][5961] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" HandleID="k8s-pod-network.3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-eth0" Jan 24 00:01:26.748662 containerd[1736]: 2026-01-24 00:01:26.735 [INFO][5961] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:01:26.748662 containerd[1736]: 2026-01-24 00:01:26.735 [INFO][5961] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:01:26.748662 containerd[1736]: 2026-01-24 00:01:26.743 [WARNING][5961] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" HandleID="k8s-pod-network.3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-eth0" Jan 24 00:01:26.748662 containerd[1736]: 2026-01-24 00:01:26.744 [INFO][5961] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" HandleID="k8s-pod-network.3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-eth0" Jan 24 00:01:26.748662 containerd[1736]: 2026-01-24 00:01:26.745 [INFO][5961] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:01:26.748662 containerd[1736]: 2026-01-24 00:01:26.747 [INFO][5954] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" Jan 24 00:01:26.749105 containerd[1736]: time="2026-01-24T00:01:26.748699122Z" level=info msg="TearDown network for sandbox \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\" successfully" Jan 24 00:01:26.749105 containerd[1736]: time="2026-01-24T00:01:26.748723322Z" level=info msg="StopPodSandbox for \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\" returns successfully" Jan 24 00:01:26.749192 containerd[1736]: time="2026-01-24T00:01:26.749169962Z" level=info msg="RemovePodSandbox for \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\"" Jan 24 00:01:26.749227 containerd[1736]: time="2026-01-24T00:01:26.749198722Z" level=info msg="Forcibly stopping sandbox \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\"" Jan 24 00:01:26.825459 containerd[1736]: 2026-01-24 00:01:26.782 [WARNING][5977] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-eth0", GenerateName:"calico-apiserver-5dd9d484d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"37f635e6-9d73-41e3-ac25-e030d9b2101d", ResourceVersion:"1386", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd9d484d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"00617fc715fe5b7201125ef41a6cac7d22061ae8905cefb9a5050d2ed6f21a88", Pod:"calico-apiserver-5dd9d484d4-qgr74", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0cc3b1e5ef9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:01:26.825459 containerd[1736]: 2026-01-24 00:01:26.782 [INFO][5977] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" Jan 24 00:01:26.825459 containerd[1736]: 2026-01-24 00:01:26.782 [INFO][5977] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" iface="eth0" netns="" Jan 24 00:01:26.825459 containerd[1736]: 2026-01-24 00:01:26.782 [INFO][5977] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" Jan 24 00:01:26.825459 containerd[1736]: 2026-01-24 00:01:26.782 [INFO][5977] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" Jan 24 00:01:26.825459 containerd[1736]: 2026-01-24 00:01:26.806 [INFO][5984] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" HandleID="k8s-pod-network.3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-eth0" Jan 24 00:01:26.825459 containerd[1736]: 2026-01-24 00:01:26.806 [INFO][5984] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:01:26.825459 containerd[1736]: 2026-01-24 00:01:26.806 [INFO][5984] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:01:26.825459 containerd[1736]: 2026-01-24 00:01:26.818 [WARNING][5984] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" HandleID="k8s-pod-network.3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-eth0" Jan 24 00:01:26.825459 containerd[1736]: 2026-01-24 00:01:26.818 [INFO][5984] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" HandleID="k8s-pod-network.3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" Workload="ci--4081.3.6--n--2a642b76b3-k8s-calico--apiserver--5dd9d484d4--qgr74-eth0" Jan 24 00:01:26.825459 containerd[1736]: 2026-01-24 00:01:26.820 [INFO][5984] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:01:26.825459 containerd[1736]: 2026-01-24 00:01:26.823 [INFO][5977] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3" Jan 24 00:01:26.825932 containerd[1736]: time="2026-01-24T00:01:26.825511486Z" level=info msg="TearDown network for sandbox \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\" successfully" Jan 24 00:01:26.834565 containerd[1736]: time="2026-01-24T00:01:26.834480171Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:01:26.834680 containerd[1736]: time="2026-01-24T00:01:26.834634211Z" level=info msg="RemovePodSandbox \"3ec36036885db18033c75ff47d47c27ad50160917e0fe2151ff54c6ec9f079e3\" returns successfully" Jan 24 00:01:26.835243 containerd[1736]: time="2026-01-24T00:01:26.835062251Z" level=info msg="StopPodSandbox for \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\"" Jan 24 00:01:26.925565 containerd[1736]: 2026-01-24 00:01:26.877 [WARNING][5998] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1900a277-348f-4eb2-aa7c-7d2406a64ec8", ResourceVersion:"1368", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf", Pod:"csi-node-driver-mmrrm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.5.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali34346cb78eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:01:26.925565 containerd[1736]: 2026-01-24 00:01:26.877 [INFO][5998] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" Jan 24 00:01:26.925565 containerd[1736]: 2026-01-24 00:01:26.877 [INFO][5998] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" iface="eth0" netns="" Jan 24 00:01:26.925565 containerd[1736]: 2026-01-24 00:01:26.877 [INFO][5998] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" Jan 24 00:01:26.925565 containerd[1736]: 2026-01-24 00:01:26.877 [INFO][5998] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" Jan 24 00:01:26.925565 containerd[1736]: 2026-01-24 00:01:26.905 [INFO][6005] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" HandleID="k8s-pod-network.8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" Workload="ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-eth0" Jan 24 00:01:26.925565 containerd[1736]: 2026-01-24 00:01:26.905 [INFO][6005] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:01:26.925565 containerd[1736]: 2026-01-24 00:01:26.905 [INFO][6005] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:01:26.925565 containerd[1736]: 2026-01-24 00:01:26.916 [WARNING][6005] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" HandleID="k8s-pod-network.8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" Workload="ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-eth0" Jan 24 00:01:26.925565 containerd[1736]: 2026-01-24 00:01:26.918 [INFO][6005] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" HandleID="k8s-pod-network.8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" Workload="ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-eth0" Jan 24 00:01:26.925565 containerd[1736]: 2026-01-24 00:01:26.920 [INFO][6005] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:01:26.925565 containerd[1736]: 2026-01-24 00:01:26.923 [INFO][5998] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" Jan 24 00:01:26.925946 containerd[1736]: time="2026-01-24T00:01:26.925559423Z" level=info msg="TearDown network for sandbox \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\" successfully" Jan 24 00:01:26.925946 containerd[1736]: time="2026-01-24T00:01:26.925586143Z" level=info msg="StopPodSandbox for \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\" returns successfully" Jan 24 00:01:26.926971 containerd[1736]: time="2026-01-24T00:01:26.926022703Z" level=info msg="RemovePodSandbox for \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\"" Jan 24 00:01:26.926971 containerd[1736]: time="2026-01-24T00:01:26.926053863Z" level=info msg="Forcibly stopping sandbox \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\"" Jan 24 00:01:27.019020 containerd[1736]: 2026-01-24 00:01:26.964 [WARNING][6020] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1900a277-348f-4eb2-aa7c-7d2406a64ec8", ResourceVersion:"1368", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"842e232da941af261a5d3913591163ddec3c1a2455b149fce5e80ec8a6e6d3bf", Pod:"csi-node-driver-mmrrm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.5.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali34346cb78eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:01:27.019020 containerd[1736]: 2026-01-24 00:01:26.964 [INFO][6020] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" Jan 24 00:01:27.019020 containerd[1736]: 2026-01-24 00:01:26.964 [INFO][6020] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" iface="eth0" netns="" Jan 24 00:01:27.019020 containerd[1736]: 2026-01-24 00:01:26.964 [INFO][6020] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" Jan 24 00:01:27.019020 containerd[1736]: 2026-01-24 00:01:26.964 [INFO][6020] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" Jan 24 00:01:27.019020 containerd[1736]: 2026-01-24 00:01:27.000 [INFO][6028] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" HandleID="k8s-pod-network.8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" Workload="ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-eth0" Jan 24 00:01:27.019020 containerd[1736]: 2026-01-24 00:01:27.000 [INFO][6028] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:01:27.019020 containerd[1736]: 2026-01-24 00:01:27.000 [INFO][6028] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:01:27.019020 containerd[1736]: 2026-01-24 00:01:27.011 [WARNING][6028] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" HandleID="k8s-pod-network.8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" Workload="ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-eth0" Jan 24 00:01:27.019020 containerd[1736]: 2026-01-24 00:01:27.011 [INFO][6028] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" HandleID="k8s-pod-network.8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" Workload="ci--4081.3.6--n--2a642b76b3-k8s-csi--node--driver--mmrrm-eth0" Jan 24 00:01:27.019020 containerd[1736]: 2026-01-24 00:01:27.012 [INFO][6028] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:01:27.019020 containerd[1736]: 2026-01-24 00:01:27.014 [INFO][6020] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985" Jan 24 00:01:27.019457 containerd[1736]: time="2026-01-24T00:01:27.019054756Z" level=info msg="TearDown network for sandbox \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\" successfully" Jan 24 00:01:27.026525 containerd[1736]: time="2026-01-24T00:01:27.026468600Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:01:27.026667 containerd[1736]: time="2026-01-24T00:01:27.026534000Z" level=info msg="RemovePodSandbox \"8dd73b5fa0a06a3d0699a0cb6f06fddb7c28a8571be3ab67441e765bc2026985\" returns successfully" Jan 24 00:01:27.027166 containerd[1736]: time="2026-01-24T00:01:27.026933641Z" level=info msg="StopPodSandbox for \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\"" Jan 24 00:01:27.099362 containerd[1736]: 2026-01-24 00:01:27.060 [WARNING][6042] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8cabee2a-2179-450d-babf-843d70721def", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1", Pod:"coredns-674b8bbfcf-gjxrx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9eb2a285cc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:01:27.099362 containerd[1736]: 2026-01-24 00:01:27.060 [INFO][6042] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" Jan 24 00:01:27.099362 containerd[1736]: 2026-01-24 00:01:27.060 [INFO][6042] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" iface="eth0" netns="" Jan 24 00:01:27.099362 containerd[1736]: 2026-01-24 00:01:27.060 [INFO][6042] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" Jan 24 00:01:27.099362 containerd[1736]: 2026-01-24 00:01:27.060 [INFO][6042] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" Jan 24 00:01:27.099362 containerd[1736]: 2026-01-24 00:01:27.083 [INFO][6049] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" HandleID="k8s-pod-network.88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" Workload="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-eth0" Jan 24 00:01:27.099362 containerd[1736]: 2026-01-24 00:01:27.083 [INFO][6049] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:01:27.099362 containerd[1736]: 2026-01-24 00:01:27.083 [INFO][6049] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:01:27.099362 containerd[1736]: 2026-01-24 00:01:27.092 [WARNING][6049] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" HandleID="k8s-pod-network.88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" Workload="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-eth0" Jan 24 00:01:27.099362 containerd[1736]: 2026-01-24 00:01:27.092 [INFO][6049] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" HandleID="k8s-pod-network.88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" Workload="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-eth0" Jan 24 00:01:27.099362 containerd[1736]: 2026-01-24 00:01:27.094 [INFO][6049] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:01:27.099362 containerd[1736]: 2026-01-24 00:01:27.096 [INFO][6042] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" Jan 24 00:01:27.099969 containerd[1736]: time="2026-01-24T00:01:27.099416802Z" level=info msg="TearDown network for sandbox \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\" successfully" Jan 24 00:01:27.099969 containerd[1736]: time="2026-01-24T00:01:27.099441642Z" level=info msg="StopPodSandbox for \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\" returns successfully" Jan 24 00:01:27.101832 containerd[1736]: time="2026-01-24T00:01:27.100438043Z" level=info msg="RemovePodSandbox for \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\"" Jan 24 00:01:27.101832 containerd[1736]: time="2026-01-24T00:01:27.100468803Z" level=info msg="Forcibly stopping sandbox \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\"" Jan 24 00:01:27.177042 containerd[1736]: 2026-01-24 00:01:27.140 [WARNING][6063] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8cabee2a-2179-450d-babf-843d70721def", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2a642b76b3", ContainerID:"5729d2ade87d17bf3fdc2b21e1ce5f28b91b3c249d5bee121d4f78e0b24014b1", Pod:"coredns-674b8bbfcf-gjxrx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9eb2a285cc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:01:27.177042 containerd[1736]: 2026-01-24 00:01:27.141 [INFO][6063] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" Jan 24 00:01:27.177042 containerd[1736]: 2026-01-24 00:01:27.141 [INFO][6063] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" iface="eth0" netns="" Jan 24 00:01:27.177042 containerd[1736]: 2026-01-24 00:01:27.141 [INFO][6063] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" Jan 24 00:01:27.177042 containerd[1736]: 2026-01-24 00:01:27.141 [INFO][6063] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" Jan 24 00:01:27.177042 containerd[1736]: 2026-01-24 00:01:27.163 [INFO][6070] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" HandleID="k8s-pod-network.88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" Workload="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-eth0" Jan 24 00:01:27.177042 containerd[1736]: 2026-01-24 00:01:27.163 [INFO][6070] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:01:27.177042 containerd[1736]: 2026-01-24 00:01:27.163 [INFO][6070] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:01:27.177042 containerd[1736]: 2026-01-24 00:01:27.172 [WARNING][6070] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" HandleID="k8s-pod-network.88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" Workload="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-eth0" Jan 24 00:01:27.177042 containerd[1736]: 2026-01-24 00:01:27.172 [INFO][6070] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" HandleID="k8s-pod-network.88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" Workload="ci--4081.3.6--n--2a642b76b3-k8s-coredns--674b8bbfcf--gjxrx-eth0" Jan 24 00:01:27.177042 containerd[1736]: 2026-01-24 00:01:27.173 [INFO][6070] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:01:27.177042 containerd[1736]: 2026-01-24 00:01:27.175 [INFO][6063] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121" Jan 24 00:01:27.177042 containerd[1736]: time="2026-01-24T00:01:27.176906446Z" level=info msg="TearDown network for sandbox \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\" successfully" Jan 24 00:01:27.185898 containerd[1736]: time="2026-01-24T00:01:27.185850571Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:01:27.185988 containerd[1736]: time="2026-01-24T00:01:27.185911931Z" level=info msg="RemovePodSandbox \"88508e374ae60b6f307623f9da0a9c925c1b0c897eb1ae2977bdd0fdf509d121\" returns successfully" Jan 24 00:01:27.530363 systemd[1]: Started sshd@10-10.200.20.20:22-10.200.16.10:51766.service - OpenSSH per-connection server daemon (10.200.16.10:51766). Jan 24 00:01:27.982524 sshd[6079]: Accepted publickey for core from 10.200.16.10 port 51766 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 24 00:01:27.984562 sshd[6079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:01:27.988570 systemd-logind[1714]: New session 13 of user core. Jan 24 00:01:27.993550 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 00:01:28.429247 sshd[6079]: pam_unix(sshd:session): session closed for user core Jan 24 00:01:28.434965 systemd[1]: sshd@10-10.200.20.20:22-10.200.16.10:51766.service: Deactivated successfully. Jan 24 00:01:28.435002 systemd-logind[1714]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:01:28.439881 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:01:28.443941 systemd-logind[1714]: Removed session 13. Jan 24 00:01:28.516637 systemd[1]: Started sshd@11-10.200.20.20:22-10.200.16.10:51770.service - OpenSSH per-connection server daemon (10.200.16.10:51770). Jan 24 00:01:28.932608 sshd[6097]: Accepted publickey for core from 10.200.16.10 port 51770 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 24 00:01:28.934489 sshd[6097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:01:28.940421 systemd-logind[1714]: New session 14 of user core. Jan 24 00:01:28.945773 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 24 00:01:29.361679 sshd[6097]: pam_unix(sshd:session): session closed for user core Jan 24 00:01:29.365046 systemd[1]: sshd@11-10.200.20.20:22-10.200.16.10:51770.service: Deactivated successfully. Jan 24 00:01:29.367019 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 00:01:29.367968 systemd-logind[1714]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:01:29.368843 systemd-logind[1714]: Removed session 14. Jan 24 00:01:29.454760 systemd[1]: Started sshd@12-10.200.20.20:22-10.200.16.10:51772.service - OpenSSH per-connection server daemon (10.200.16.10:51772). Jan 24 00:01:29.906542 sshd[6109]: Accepted publickey for core from 10.200.16.10 port 51772 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 24 00:01:29.907939 sshd[6109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:01:29.913726 systemd-logind[1714]: New session 15 of user core. Jan 24 00:01:29.919629 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 24 00:01:30.354599 sshd[6109]: pam_unix(sshd:session): session closed for user core Jan 24 00:01:30.361902 systemd[1]: sshd@12-10.200.20.20:22-10.200.16.10:51772.service: Deactivated successfully. Jan 24 00:01:30.365542 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:01:30.367120 systemd-logind[1714]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:01:30.368543 systemd-logind[1714]: Removed session 15. Jan 24 00:01:30.922348 kubelet[3217]: E0124 00:01:30.922136 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-754bb44d48-hhlr2" podUID="a37d52d3-c228-4df6-b0fc-c5d23ff527d2" Jan 24 00:01:33.925420 kubelet[3217]: E0124 00:01:33.924666 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-65kmp" podUID="9390c20d-0be8-4dfe-954e-634e25852cb2" Jan 24 00:01:35.448643 systemd[1]: Started sshd@13-10.200.20.20:22-10.200.16.10:34254.service - OpenSSH per-connection server daemon (10.200.16.10:34254). 
Jan 24 00:01:35.924911 kubelet[3217]: E0124 00:01:35.924835 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mmrrm" podUID="1900a277-348f-4eb2-aa7c-7d2406a64ec8" Jan 24 00:01:35.926387 kubelet[3217]: E0124 00:01:35.925043 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dd9d484d4-bprcl" podUID="237e41c6-ec2d-4a8d-bb7d-ca837318e8f7" Jan 24 00:01:35.940144 sshd[6125]: Accepted publickey for core from 10.200.16.10 port 34254 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 24 00:01:35.941924 sshd[6125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:01:35.947364 systemd-logind[1714]: New session 16 of user core. Jan 24 00:01:35.953546 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 24 00:01:36.443184 sshd[6125]: pam_unix(sshd:session): session closed for user core Jan 24 00:01:36.446530 systemd[1]: sshd@13-10.200.20.20:22-10.200.16.10:34254.service: Deactivated successfully. Jan 24 00:01:36.449067 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 00:01:36.450095 systemd-logind[1714]: Session 16 logged out. Waiting for processes to exit. Jan 24 00:01:36.451166 systemd-logind[1714]: Removed session 16. Jan 24 00:01:36.530915 systemd[1]: Started sshd@14-10.200.20.20:22-10.200.16.10:34258.service - OpenSSH per-connection server daemon (10.200.16.10:34258). 
Jan 24 00:01:36.924166 kubelet[3217]: E0124 00:01:36.923589 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dfdc764f5-mkdn7" podUID="f462240a-0a7b-4fa9-a623-1df80e2e9a5c" Jan 24 00:01:37.026150 sshd[6137]: Accepted publickey for core from 10.200.16.10 port 34258 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 24 00:01:37.028179 sshd[6137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:01:37.032579 systemd-logind[1714]: New session 17 of user core. Jan 24 00:01:37.038093 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 00:01:37.574114 sshd[6137]: pam_unix(sshd:session): session closed for user core Jan 24 00:01:37.577415 systemd[1]: sshd@14-10.200.20.20:22-10.200.16.10:34258.service: Deactivated successfully. Jan 24 00:01:37.579447 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 00:01:37.580231 systemd-logind[1714]: Session 17 logged out. Waiting for processes to exit. Jan 24 00:01:37.581061 systemd-logind[1714]: Removed session 17. Jan 24 00:01:37.663851 systemd[1]: Started sshd@15-10.200.20.20:22-10.200.16.10:34262.service - OpenSSH per-connection server daemon (10.200.16.10:34262). Jan 24 00:01:37.923552 kubelet[3217]: E0124 00:01:37.923513 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dd9d484d4-qgr74" podUID="37f635e6-9d73-41e3-ac25-e030d9b2101d" Jan 24 00:01:38.157847 sshd[6148]: Accepted publickey for core from 10.200.16.10 port 34262 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 24 00:01:38.158941 sshd[6148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:01:38.168660 systemd-logind[1714]: New session 18 of user core. Jan 24 00:01:38.176572 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 24 00:01:39.333828 sshd[6148]: pam_unix(sshd:session): session closed for user core Jan 24 00:01:39.339642 systemd[1]: sshd@15-10.200.20.20:22-10.200.16.10:34262.service: Deactivated successfully. Jan 24 00:01:39.345932 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 00:01:39.346873 systemd-logind[1714]: Session 18 logged out. Waiting for processes to exit. 
Jan 24 00:01:39.347993 systemd-logind[1714]: Removed session 18. Jan 24 00:01:39.420743 systemd[1]: Started sshd@16-10.200.20.20:22-10.200.16.10:34276.service - OpenSSH per-connection server daemon (10.200.16.10:34276). Jan 24 00:01:39.866895 sshd[6170]: Accepted publickey for core from 10.200.16.10 port 34276 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 24 00:01:39.868490 sshd[6170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:01:39.875687 systemd-logind[1714]: New session 19 of user core. Jan 24 00:01:39.878561 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 24 00:01:40.445768 sshd[6170]: pam_unix(sshd:session): session closed for user core Jan 24 00:01:40.449913 systemd[1]: sshd@16-10.200.20.20:22-10.200.16.10:34276.service: Deactivated successfully. Jan 24 00:01:40.453351 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 00:01:40.453989 systemd-logind[1714]: Session 19 logged out. Waiting for processes to exit. Jan 24 00:01:40.455369 systemd-logind[1714]: Removed session 19. Jan 24 00:01:40.533343 systemd[1]: Started sshd@17-10.200.20.20:22-10.200.16.10:52080.service - OpenSSH per-connection server daemon (10.200.16.10:52080). Jan 24 00:01:41.023656 sshd[6181]: Accepted publickey for core from 10.200.16.10 port 52080 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 24 00:01:41.024563 sshd[6181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:01:41.032569 systemd-logind[1714]: New session 20 of user core. Jan 24 00:01:41.035725 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 24 00:01:41.448882 sshd[6181]: pam_unix(sshd:session): session closed for user core Jan 24 00:01:41.453145 systemd[1]: sshd@17-10.200.20.20:22-10.200.16.10:52080.service: Deactivated successfully. Jan 24 00:01:41.456259 systemd[1]: session-20.scope: Deactivated successfully. Jan 24 00:01:41.458944 systemd-logind[1714]: Session 20 logged out. Waiting for processes to exit. Jan 24 00:01:41.461028 systemd-logind[1714]: Removed session 20. 
Jan 24 00:01:44.923265 kubelet[3217]: E0124 00:01:44.923175 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-754bb44d48-hhlr2" podUID="a37d52d3-c228-4df6-b0fc-c5d23ff527d2" Jan 24 00:01:45.925506 kubelet[3217]: E0124 00:01:45.923572 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-65kmp" podUID="9390c20d-0be8-4dfe-954e-634e25852cb2" Jan 24 00:01:46.538652 systemd[1]: Started sshd@18-10.200.20.20:22-10.200.16.10:52090.service - OpenSSH per-connection server daemon (10.200.16.10:52090). Jan 24 00:01:46.986524 sshd[6202]: Accepted publickey for core from 10.200.16.10 port 52090 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 24 00:01:46.987024 sshd[6202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:01:46.991252 systemd-logind[1714]: New session 21 of user core. Jan 24 00:01:46.994626 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 24 00:01:47.381724 sshd[6202]: pam_unix(sshd:session): session closed for user core Jan 24 00:01:47.386570 systemd[1]: sshd@18-10.200.20.20:22-10.200.16.10:52090.service: Deactivated successfully. Jan 24 00:01:47.390770 systemd[1]: session-21.scope: Deactivated successfully. Jan 24 00:01:47.392076 systemd-logind[1714]: Session 21 logged out. Waiting for processes to exit. Jan 24 00:01:47.393482 systemd-logind[1714]: Removed session 21. 
Jan 24 00:01:47.926492 kubelet[3217]: E0124 00:01:47.926445 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mmrrm" podUID="1900a277-348f-4eb2-aa7c-7d2406a64ec8" Jan 24 00:01:49.923576 kubelet[3217]: E0124 00:01:49.923509 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dd9d484d4-qgr74" podUID="37f635e6-9d73-41e3-ac25-e030d9b2101d" Jan 24 00:01:49.924311 kubelet[3217]: E0124 00:01:49.924276 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dd9d484d4-bprcl" podUID="237e41c6-ec2d-4a8d-bb7d-ca837318e8f7" Jan 24 00:01:49.924637 containerd[1736]: time="2026-01-24T00:01:49.924603477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:01:50.235286 containerd[1736]: time="2026-01-24T00:01:50.234857748Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:01:50.238612 containerd[1736]: time="2026-01-24T00:01:50.238518790Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:01:50.238612 containerd[1736]: time="2026-01-24T00:01:50.238544670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:01:50.238800 kubelet[3217]: E0124 00:01:50.238757 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:01:50.238886 kubelet[3217]: E0124 00:01:50.238806 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:01:50.239265 kubelet[3217]: E0124 00:01:50.238936 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:838d2d7f116c4e34b727b27e353cd551,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2hj8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7dfdc764f5-mkdn7_calico-system(f462240a-0a7b-4fa9-a623-1df80e2e9a5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:01:50.240640 kubelet[3217]: E0124 00:01:50.240573 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dfdc764f5-mkdn7" podUID="f462240a-0a7b-4fa9-a623-1df80e2e9a5c" Jan 24 00:01:52.461482 systemd[1]: Started sshd@19-10.200.20.20:22-10.200.16.10:58816.service - OpenSSH per-connection server daemon 
(10.200.16.10:58816). Jan 24 00:01:52.918998 sshd[6215]: Accepted publickey for core from 10.200.16.10 port 58816 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 24 00:01:52.920777 sshd[6215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:01:52.924357 systemd-logind[1714]: New session 22 of user core. Jan 24 00:01:52.932569 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 24 00:01:53.336213 sshd[6215]: pam_unix(sshd:session): session closed for user core Jan 24 00:01:53.340683 systemd[1]: sshd@19-10.200.20.20:22-10.200.16.10:58816.service: Deactivated successfully. Jan 24 00:01:53.342376 systemd[1]: session-22.scope: Deactivated successfully. Jan 24 00:01:53.345764 systemd-logind[1714]: Session 22 logged out. Waiting for processes to exit. Jan 24 00:01:53.346525 systemd-logind[1714]: Removed session 22. Jan 24 00:01:58.435027 systemd[1]: Started sshd@20-10.200.20.20:22-10.200.16.10:58832.service - OpenSSH per-connection server daemon (10.200.16.10:58832). Jan 24 00:01:58.925208 sshd[6251]: Accepted publickey for core from 10.200.16.10 port 58832 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 24 00:01:58.927971 sshd[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:01:58.934624 systemd-logind[1714]: New session 23 of user core. Jan 24 00:01:58.939240 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 24 00:01:59.364376 sshd[6251]: pam_unix(sshd:session): session closed for user core Jan 24 00:01:59.367011 systemd-logind[1714]: Session 23 logged out. Waiting for processes to exit. Jan 24 00:01:59.367631 systemd[1]: sshd@20-10.200.20.20:22-10.200.16.10:58832.service: Deactivated successfully. Jan 24 00:01:59.369227 systemd[1]: session-23.scope: Deactivated successfully. Jan 24 00:01:59.372407 systemd-logind[1714]: Removed session 23. 
Jan 24 00:01:59.923540 kubelet[3217]: E0124 00:01:59.923467 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-754bb44d48-hhlr2" podUID="a37d52d3-c228-4df6-b0fc-c5d23ff527d2" Jan 24 00:01:59.924907 kubelet[3217]: E0124 00:01:59.924772 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mmrrm" podUID="1900a277-348f-4eb2-aa7c-7d2406a64ec8" Jan 24 00:02:00.921906 kubelet[3217]: E0124 00:02:00.921848 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-65kmp" podUID="9390c20d-0be8-4dfe-954e-634e25852cb2" Jan 24 00:02:02.924060 containerd[1736]: time="2026-01-24T00:02:02.923823757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:02:02.924872 kubelet[3217]: E0124 00:02:02.924828 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dfdc764f5-mkdn7" 
podUID="f462240a-0a7b-4fa9-a623-1df80e2e9a5c" Jan 24 00:02:03.174562 containerd[1736]: time="2026-01-24T00:02:03.174318881Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:02:03.177285 containerd[1736]: time="2026-01-24T00:02:03.177186642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:02:03.177285 containerd[1736]: time="2026-01-24T00:02:03.177255602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:02:03.177467 kubelet[3217]: E0124 00:02:03.177428 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:02:03.177546 kubelet[3217]: E0124 00:02:03.177471 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:02:03.177932 kubelet[3217]: E0124 00:02:03.177602 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sz4fp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5dd9d484d4-bprcl_calico-apiserver(237e41c6-ec2d-4a8d-bb7d-ca837318e8f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:02:03.179140 kubelet[3217]: E0124 00:02:03.179113 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dd9d484d4-bprcl" podUID="237e41c6-ec2d-4a8d-bb7d-ca837318e8f7" Jan 24 00:02:04.447499 systemd[1]: Started sshd@21-10.200.20.20:22-10.200.16.10:44914.service - OpenSSH per-connection server daemon (10.200.16.10:44914). Jan 24 00:02:04.896575 sshd[6288]: Accepted publickey for core from 10.200.16.10 port 44914 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 24 00:02:04.897897 sshd[6288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:02:04.901495 systemd-logind[1714]: New session 24 of user core. Jan 24 00:02:04.907549 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 24 00:02:04.923147 containerd[1736]: time="2026-01-24T00:02:04.922831022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:02:05.183617 containerd[1736]: time="2026-01-24T00:02:05.183359951Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:02:05.186831 containerd[1736]: time="2026-01-24T00:02:05.186686352Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:02:05.186831 containerd[1736]: time="2026-01-24T00:02:05.186797752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:02:05.187653 kubelet[3217]: E0124 00:02:05.187125 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:02:05.187653 kubelet[3217]: E0124 00:02:05.187174 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:02:05.187653 kubelet[3217]: E0124 00:02:05.187304 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwcqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5dd9d484d4-qgr74_calico-apiserver(37f635e6-9d73-41e3-ac25-e030d9b2101d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:02:05.189292 kubelet[3217]: E0124 00:02:05.189247 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dd9d484d4-qgr74" podUID="37f635e6-9d73-41e3-ac25-e030d9b2101d" Jan 24 00:02:05.298255 sshd[6288]: pam_unix(sshd:session): session closed for user core Jan 24 00:02:05.302119 systemd-logind[1714]: Session 24 logged out. Waiting for processes to exit. Jan 24 00:02:05.304243 systemd[1]: sshd@21-10.200.20.20:22-10.200.16.10:44914.service: Deactivated successfully. Jan 24 00:02:05.307612 systemd[1]: session-24.scope: Deactivated successfully. Jan 24 00:02:05.309619 systemd-logind[1714]: Removed session 24. Jan 24 00:02:10.396492 systemd[1]: Started sshd@22-10.200.20.20:22-10.200.16.10:52854.service - OpenSSH per-connection server daemon (10.200.16.10:52854). Jan 24 00:02:10.857896 sshd[6301]: Accepted publickey for core from 10.200.16.10 port 52854 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 24 00:02:10.859263 sshd[6301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:02:10.863154 systemd-logind[1714]: New session 25 of user core. Jan 24 00:02:10.869695 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 24 00:02:10.922360 containerd[1736]: time="2026-01-24T00:02:10.922320556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:02:11.226957 containerd[1736]: time="2026-01-24T00:02:11.226837301Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:02:11.229804 containerd[1736]: time="2026-01-24T00:02:11.229763342Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:02:11.229889 containerd[1736]: time="2026-01-24T00:02:11.229864182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:02:11.231350 kubelet[3217]: E0124 00:02:11.230002 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:02:11.231350 kubelet[3217]: E0124 00:02:11.230059 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:02:11.231350 kubelet[3217]: E0124 00:02:11.230184 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hjlsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-754bb44d48-hhlr2_calico-system(a37d52d3-c228-4df6-b0fc-c5d23ff527d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:02:11.231980 kubelet[3217]: E0124 00:02:11.231920 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-754bb44d48-hhlr2" podUID="a37d52d3-c228-4df6-b0fc-c5d23ff527d2" Jan 24 00:02:11.246716 sshd[6301]: pam_unix(sshd:session): session closed for user core Jan 24 00:02:11.250068 systemd[1]: sshd@22-10.200.20.20:22-10.200.16.10:52854.service: Deactivated successfully. Jan 24 00:02:11.251821 systemd[1]: session-25.scope: Deactivated successfully. Jan 24 00:02:11.252613 systemd-logind[1714]: Session 25 logged out. Waiting for processes to exit. Jan 24 00:02:11.253366 systemd-logind[1714]: Removed session 25. 
Jan 24 00:02:11.926411 containerd[1736]: time="2026-01-24T00:02:11.924788832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:02:12.342521 containerd[1736]: time="2026-01-24T00:02:12.342378470Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:02:12.347286 containerd[1736]: time="2026-01-24T00:02:12.347240713Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:02:12.347723 containerd[1736]: time="2026-01-24T00:02:12.347361953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:02:12.347795 kubelet[3217]: E0124 00:02:12.347482 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:02:12.347795 kubelet[3217]: E0124 00:02:12.347538 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:02:12.347795 kubelet[3217]: E0124 00:02:12.347666 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xrdkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mmrrm_calico-system(1900a277-348f-4eb2-aa7c-7d2406a64ec8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:02:12.350227 containerd[1736]: time="2026-01-24T00:02:12.350039354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:02:12.633221 containerd[1736]: time="2026-01-24T00:02:12.633090888Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:02:12.636116 containerd[1736]: time="2026-01-24T00:02:12.636065570Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:02:12.636232 containerd[1736]: time="2026-01-24T00:02:12.636170650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:02:12.636375 kubelet[3217]: E0124 00:02:12.636336 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:02:12.636476 kubelet[3217]: E0124 00:02:12.636384 3217 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:02:12.636744 kubelet[3217]: E0124 00:02:12.636512 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xrdkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mmrrm_calico-system(1900a277-348f-4eb2-aa7c-7d2406a64ec8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:02:12.637714 kubelet[3217]: E0124 00:02:12.637686 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-mmrrm" podUID="1900a277-348f-4eb2-aa7c-7d2406a64ec8" Jan 24 00:02:13.924021 containerd[1736]: time="2026-01-24T00:02:13.923980821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:02:14.181889 containerd[1736]: time="2026-01-24T00:02:14.181651784Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:02:14.185199 containerd[1736]: time="2026-01-24T00:02:14.185106665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:02:14.185199 containerd[1736]: time="2026-01-24T00:02:14.185179546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:02:14.185348 kubelet[3217]: E0124 00:02:14.185297 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:02:14.185348 kubelet[3217]: E0124 00:02:14.185339 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:02:14.185663 kubelet[3217]: E0124 00:02:14.185470 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xpv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-65kmp_calico-system(9390c20d-0be8-4dfe-954e-634e25852cb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:02:14.186919 kubelet[3217]: E0124 00:02:14.186883 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-65kmp" podUID="9390c20d-0be8-4dfe-954e-634e25852cb2"