Jan 15 23:49:29.078493 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Jan 15 23:49:29.078512 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Jan 15 22:06:59 -00 2026
Jan 15 23:49:29.078518 kernel: KASLR enabled
Jan 15 23:49:29.078522 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 15 23:49:29.078526 kernel: printk: legacy bootconsole [pl11] enabled
Jan 15 23:49:29.078531 kernel: efi: EFI v2.7 by EDK II
Jan 15 23:49:29.078536 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db83598
Jan 15 23:49:29.078540 kernel: random: crng init done
Jan 15 23:49:29.078544 kernel: secureboot: Secure boot disabled
Jan 15 23:49:29.078548 kernel: ACPI: Early table checksum verification disabled
Jan 15 23:49:29.078552 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Jan 15 23:49:29.078556 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:49:29.078559 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:49:29.078563 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 15 23:49:29.078569 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:49:29.078574 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:49:29.078578 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:49:29.078582 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:49:29.078586 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:49:29.078591 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:49:29.078596 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 15 23:49:29.078600 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 23:49:29.078604 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 15 23:49:29.078608 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 15 23:49:29.078613 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 15 23:49:29.078617 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Jan 15 23:49:29.078621 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Jan 15 23:49:29.078625 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 15 23:49:29.078630 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 15 23:49:29.078634 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 15 23:49:29.078639 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 15 23:49:29.078643 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 15 23:49:29.078647 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 15 23:49:29.078652 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 15 23:49:29.078656 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 15 23:49:29.078660 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 15 23:49:29.078664 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Jan 15 23:49:29.078668 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Jan 15 23:49:29.078673 kernel: Zone ranges:
Jan 15 23:49:29.078677 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 15 23:49:29.078684 kernel: DMA32 empty
Jan 15 23:49:29.078688 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 15 23:49:29.078693 kernel: Device empty
Jan 15 23:49:29.078697 kernel: Movable zone start for each node
Jan 15 23:49:29.078701 kernel: Early memory node ranges
Jan 15 23:49:29.078706 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 15 23:49:29.078711 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Jan 15 23:49:29.078715 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Jan 15 23:49:29.078720 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Jan 15 23:49:29.078724 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Jan 15 23:49:29.078729 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Jan 15 23:49:29.078733 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 15 23:49:29.078737 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 15 23:49:29.078742 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 15 23:49:29.078746 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Jan 15 23:49:29.078750 kernel: psci: probing for conduit method from ACPI.
Jan 15 23:49:29.078755 kernel: psci: PSCIv1.3 detected in firmware.
Jan 15 23:49:29.078759 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 15 23:49:29.078764 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 15 23:49:29.078769 kernel: psci: SMC Calling Convention v1.4
Jan 15 23:49:29.078773 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 15 23:49:29.078777 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 15 23:49:29.078782 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 15 23:49:29.078786 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 15 23:49:29.078791 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 15 23:49:29.078795 kernel: Detected PIPT I-cache on CPU0
Jan 15 23:49:29.078799 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Jan 15 23:49:29.078804 kernel: CPU features: detected: GIC system register CPU interface
Jan 15 23:49:29.078808 kernel: CPU features: detected: Spectre-v4
Jan 15 23:49:29.078812 kernel: CPU features: detected: Spectre-BHB
Jan 15 23:49:29.078818 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 15 23:49:29.078822 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 15 23:49:29.078827 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Jan 15 23:49:29.078831 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 15 23:49:29.078835 kernel: alternatives: applying boot alternatives
Jan 15 23:49:29.078841 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=83f7d443283b2e87b6283ab8b3252eb2d2356b218981a63efeb3e370fba6f971
Jan 15 23:49:29.078845 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 15 23:49:29.078850 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 15 23:49:29.078854 kernel: Fallback order for Node 0: 0
Jan 15 23:49:29.078858 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Jan 15 23:49:29.078864 kernel: Policy zone: Normal
Jan 15 23:49:29.078868 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 15 23:49:29.078872 kernel: software IO TLB: area num 2.
Jan 15 23:49:29.078877 kernel: software IO TLB: mapped [mem 0x0000000035900000-0x0000000039900000] (64MB)
Jan 15 23:49:29.078881 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 15 23:49:29.078885 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 15 23:49:29.078891 kernel: rcu: RCU event tracing is enabled.
Jan 15 23:49:29.078895 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 15 23:49:29.078900 kernel: Trampoline variant of Tasks RCU enabled.
Jan 15 23:49:29.078904 kernel: Tracing variant of Tasks RCU enabled.
Jan 15 23:49:29.078909 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 15 23:49:29.078913 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 15 23:49:29.078918 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 15 23:49:29.078923 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 15 23:49:29.078927 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 15 23:49:29.078932 kernel: GICv3: 960 SPIs implemented
Jan 15 23:49:29.078936 kernel: GICv3: 0 Extended SPIs implemented
Jan 15 23:49:29.078940 kernel: Root IRQ handler: gic_handle_irq
Jan 15 23:49:29.078945 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jan 15 23:49:29.078949 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Jan 15 23:49:29.078953 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 15 23:49:29.078958 kernel: ITS: No ITS available, not enabling LPIs
Jan 15 23:49:29.078962 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 15 23:49:29.078968 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Jan 15 23:49:29.078972 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 15 23:49:29.078977 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Jan 15 23:49:29.078981 kernel: Console: colour dummy device 80x25
Jan 15 23:49:29.078986 kernel: printk: legacy console [tty1] enabled
Jan 15 23:49:29.078991 kernel: ACPI: Core revision 20240827
Jan 15 23:49:29.078995 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Jan 15 23:49:29.079000 kernel: pid_max: default: 32768 minimum: 301
Jan 15 23:49:29.079004 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 15 23:49:29.079009 kernel: landlock: Up and running.
Jan 15 23:49:29.079014 kernel: SELinux: Initializing.
Jan 15 23:49:29.079019 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 15 23:49:29.079023 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 15 23:49:29.079028 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Jan 15 23:49:29.079033 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0
Jan 15 23:49:29.079041 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 15 23:49:29.079047 kernel: rcu: Hierarchical SRCU implementation.
Jan 15 23:49:29.079052 kernel: rcu: Max phase no-delay instances is 400.
Jan 15 23:49:29.079056 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 15 23:49:29.079061 kernel: Remapping and enabling EFI services.
Jan 15 23:49:29.079066 kernel: smp: Bringing up secondary CPUs ...
Jan 15 23:49:29.079070 kernel: Detected PIPT I-cache on CPU1
Jan 15 23:49:29.079076 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 15 23:49:29.079081 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Jan 15 23:49:29.079086 kernel: smp: Brought up 1 node, 2 CPUs
Jan 15 23:49:29.079091 kernel: SMP: Total of 2 processors activated.
Jan 15 23:49:29.081215 kernel: CPU: All CPU(s) started at EL1
Jan 15 23:49:29.081227 kernel: CPU features: detected: 32-bit EL0 Support
Jan 15 23:49:29.081233 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 15 23:49:29.081238 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 15 23:49:29.081243 kernel: CPU features: detected: Common not Private translations
Jan 15 23:49:29.081248 kernel: CPU features: detected: CRC32 instructions
Jan 15 23:49:29.081254 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Jan 15 23:49:29.081258 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 15 23:49:29.081263 kernel: CPU features: detected: LSE atomic instructions
Jan 15 23:49:29.081268 kernel: CPU features: detected: Privileged Access Never
Jan 15 23:49:29.081274 kernel: CPU features: detected: Speculation barrier (SB)
Jan 15 23:49:29.081279 kernel: CPU features: detected: TLB range maintenance instructions
Jan 15 23:49:29.081284 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 15 23:49:29.081289 kernel: CPU features: detected: Scalable Vector Extension
Jan 15 23:49:29.081294 kernel: alternatives: applying system-wide alternatives
Jan 15 23:49:29.081299 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jan 15 23:49:29.081304 kernel: SVE: maximum available vector length 16 bytes per vector
Jan 15 23:49:29.081308 kernel: SVE: default vector length 16 bytes per vector
Jan 15 23:49:29.081314 kernel: Memory: 3952828K/4194160K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 220144K reserved, 16384K cma-reserved)
Jan 15 23:49:29.081320 kernel: devtmpfs: initialized
Jan 15 23:49:29.081325 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 15 23:49:29.081330 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 15 23:49:29.081335 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 15 23:49:29.081340 kernel: 0 pages in range for non-PLT usage
Jan 15 23:49:29.081345 kernel: 508400 pages in range for PLT usage
Jan 15 23:49:29.081350 kernel: pinctrl core: initialized pinctrl subsystem
Jan 15 23:49:29.081354 kernel: SMBIOS 3.1.0 present.
Jan 15 23:49:29.081361 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Jan 15 23:49:29.081366 kernel: DMI: Memory slots populated: 2/2
Jan 15 23:49:29.081371 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 15 23:49:29.081376 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 15 23:49:29.081380 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 15 23:49:29.081385 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 15 23:49:29.081390 kernel: audit: initializing netlink subsys (disabled)
Jan 15 23:49:29.081395 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Jan 15 23:49:29.081400 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 15 23:49:29.081406 kernel: cpuidle: using governor menu
Jan 15 23:49:29.081411 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 15 23:49:29.081416 kernel: ASID allocator initialised with 32768 entries
Jan 15 23:49:29.081421 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 15 23:49:29.081425 kernel: Serial: AMBA PL011 UART driver
Jan 15 23:49:29.081430 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 15 23:49:29.081435 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 15 23:49:29.081440 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 15 23:49:29.081445 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 15 23:49:29.081451 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 15 23:49:29.081456 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 15 23:49:29.081461 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 15 23:49:29.081466 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 15 23:49:29.081470 kernel: ACPI: Added _OSI(Module Device)
Jan 15 23:49:29.081475 kernel: ACPI: Added _OSI(Processor Device)
Jan 15 23:49:29.081480 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 15 23:49:29.081485 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 15 23:49:29.081490 kernel: ACPI: Interpreter enabled
Jan 15 23:49:29.081496 kernel: ACPI: Using GIC for interrupt routing
Jan 15 23:49:29.081501 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 15 23:49:29.081505 kernel: printk: legacy console [ttyAMA0] enabled
Jan 15 23:49:29.081510 kernel: printk: legacy bootconsole [pl11] disabled
Jan 15 23:49:29.081518 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 15 23:49:29.081523 kernel: ACPI: CPU0 has been hot-added
Jan 15 23:49:29.081528 kernel: ACPI: CPU1 has been hot-added
Jan 15 23:49:29.081532 kernel: iommu: Default domain type: Translated
Jan 15 23:49:29.081537 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 15 23:49:29.081543 kernel: efivars: Registered efivars operations
Jan 15 23:49:29.081548 kernel: vgaarb: loaded
Jan 15 23:49:29.081553 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 15 23:49:29.081558 kernel: VFS: Disk quotas dquot_6.6.0
Jan 15 23:49:29.081562 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 15 23:49:29.081567 kernel: pnp: PnP ACPI init
Jan 15 23:49:29.081572 kernel: pnp: PnP ACPI: found 0 devices
Jan 15 23:49:29.081577 kernel: NET: Registered PF_INET protocol family
Jan 15 23:49:29.081582 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 15 23:49:29.081587 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 15 23:49:29.081593 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 15 23:49:29.081598 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 15 23:49:29.081603 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 15 23:49:29.081608 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 15 23:49:29.081612 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 15 23:49:29.081617 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 15 23:49:29.081622 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 15 23:49:29.081627 kernel: PCI: CLS 0 bytes, default 64
Jan 15 23:49:29.081631 kernel: kvm [1]: HYP mode not available
Jan 15 23:49:29.081637 kernel: Initialise system trusted keyrings
Jan 15 23:49:29.081642 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 15 23:49:29.081647 kernel: Key type asymmetric registered
Jan 15 23:49:29.081652 kernel: Asymmetric key parser 'x509' registered
Jan 15 23:49:29.081657 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jan 15 23:49:29.081662 kernel: io scheduler mq-deadline registered
Jan 15 23:49:29.081666 kernel: io scheduler kyber registered
Jan 15 23:49:29.081671 kernel: io scheduler bfq registered
Jan 15 23:49:29.081676 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 15 23:49:29.081682 kernel: thunder_xcv, ver 1.0
Jan 15 23:49:29.081687 kernel: thunder_bgx, ver 1.0
Jan 15 23:49:29.081692 kernel: nicpf, ver 1.0
Jan 15 23:49:29.081696 kernel: nicvf, ver 1.0
Jan 15 23:49:29.081828 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 15 23:49:29.081880 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-15T23:49:28 UTC (1768520968)
Jan 15 23:49:29.081887 kernel: efifb: probing for efifb
Jan 15 23:49:29.081893 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 15 23:49:29.081898 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 15 23:49:29.081903 kernel: efifb: scrolling: redraw
Jan 15 23:49:29.081908 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 15 23:49:29.081913 kernel: Console: switching to colour frame buffer device 128x48
Jan 15 23:49:29.081918 kernel: fb0: EFI VGA frame buffer device
Jan 15 23:49:29.081923 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 15 23:49:29.081928 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 15 23:49:29.081932 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jan 15 23:49:29.081938 kernel: watchdog: NMI not fully supported
Jan 15 23:49:29.081943 kernel: watchdog: Hard watchdog permanently disabled
Jan 15 23:49:29.081948 kernel: NET: Registered PF_INET6 protocol family
Jan 15 23:49:29.081953 kernel: Segment Routing with IPv6
Jan 15 23:49:29.081958 kernel: In-situ OAM (IOAM) with IPv6
Jan 15 23:49:29.081962 kernel: NET: Registered PF_PACKET protocol family
Jan 15 23:49:29.081967 kernel: Key type dns_resolver registered
Jan 15 23:49:29.081972 kernel: registered taskstats version 1
Jan 15 23:49:29.081977 kernel: Loading compiled-in X.509 certificates
Jan 15 23:49:29.081982 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: b110dfc7e70ecac41e34f52a0c530f0543b60d51'
Jan 15 23:49:29.081988 kernel: Demotion targets for Node 0: null
Jan 15 23:49:29.081993 kernel: Key type .fscrypt registered
Jan 15 23:49:29.081997 kernel: Key type fscrypt-provisioning registered
Jan 15 23:49:29.082002 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 15 23:49:29.082007 kernel: ima: Allocated hash algorithm: sha1
Jan 15 23:49:29.082012 kernel: ima: No architecture policies found
Jan 15 23:49:29.082016 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 15 23:49:29.082021 kernel: clk: Disabling unused clocks
Jan 15 23:49:29.082026 kernel: PM: genpd: Disabling unused power domains
Jan 15 23:49:29.082032 kernel: Warning: unable to open an initial console.
Jan 15 23:49:29.082037 kernel: Freeing unused kernel memory: 39552K
Jan 15 23:49:29.082042 kernel: Run /init as init process
Jan 15 23:49:29.082047 kernel: with arguments:
Jan 15 23:49:29.082051 kernel: /init
Jan 15 23:49:29.082056 kernel: with environment:
Jan 15 23:49:29.082061 kernel: HOME=/
Jan 15 23:49:29.082065 kernel: TERM=linux
Jan 15 23:49:29.082071 systemd[1]: Successfully made /usr/ read-only.
Jan 15 23:49:29.082080 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 15 23:49:29.082085 systemd[1]: Detected virtualization microsoft.
Jan 15 23:49:29.082090 systemd[1]: Detected architecture arm64.
Jan 15 23:49:29.082108 systemd[1]: Running in initrd.
Jan 15 23:49:29.082113 systemd[1]: No hostname configured, using default hostname.
Jan 15 23:49:29.082119 systemd[1]: Hostname set to <localhost>.
Jan 15 23:49:29.082124 systemd[1]: Initializing machine ID from random generator.
Jan 15 23:49:29.082130 systemd[1]: Queued start job for default target initrd.target.
Jan 15 23:49:29.082135 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 15 23:49:29.082141 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 15 23:49:29.082147 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 15 23:49:29.082152 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 15 23:49:29.082157 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 15 23:49:29.082163 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 15 23:49:29.082170 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 15 23:49:29.082175 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 15 23:49:29.082181 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 15 23:49:29.082186 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 15 23:49:29.082191 systemd[1]: Reached target paths.target - Path Units.
Jan 15 23:49:29.082196 systemd[1]: Reached target slices.target - Slice Units.
Jan 15 23:49:29.082201 systemd[1]: Reached target swap.target - Swaps.
Jan 15 23:49:29.082206 systemd[1]: Reached target timers.target - Timer Units.
Jan 15 23:49:29.082212 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 15 23:49:29.082218 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 15 23:49:29.082223 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 15 23:49:29.082228 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 15 23:49:29.082234 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 15 23:49:29.082239 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 15 23:49:29.082244 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 15 23:49:29.082249 systemd[1]: Reached target sockets.target - Socket Units.
Jan 15 23:49:29.082254 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 15 23:49:29.082261 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 15 23:49:29.082266 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 15 23:49:29.082272 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 15 23:49:29.082277 systemd[1]: Starting systemd-fsck-usr.service...
Jan 15 23:49:29.082282 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 15 23:49:29.082288 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 15 23:49:29.082306 systemd-journald[225]: Collecting audit messages is disabled.
Jan 15 23:49:29.082321 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 23:49:29.082328 systemd-journald[225]: Journal started
Jan 15 23:49:29.082343 systemd-journald[225]: Runtime Journal (/run/log/journal/b8daaa9318bf43028207a7a31bcbaca6) is 8M, max 78.3M, 70.3M free.
Jan 15 23:49:29.086306 systemd-modules-load[227]: Inserted module 'overlay'
Jan 15 23:49:29.105200 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 15 23:49:29.105222 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 15 23:49:29.113292 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 15 23:49:29.124626 kernel: Bridge firewalling registered
Jan 15 23:49:29.121178 systemd-modules-load[227]: Inserted module 'br_netfilter'
Jan 15 23:49:29.125083 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 15 23:49:29.139432 systemd[1]: Finished systemd-fsck-usr.service.
Jan 15 23:49:29.142917 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 15 23:49:29.150233 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 23:49:29.161879 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 15 23:49:29.176936 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 15 23:49:29.187949 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 15 23:49:29.197287 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 15 23:49:29.217517 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 15 23:49:29.223029 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 15 23:49:29.230042 systemd-tmpfiles[247]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 15 23:49:29.230493 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 15 23:49:29.253202 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 15 23:49:29.261958 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 15 23:49:29.280040 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 15 23:49:29.285487 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 15 23:49:29.308444 dracut-cmdline[258]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=83f7d443283b2e87b6283ab8b3252eb2d2356b218981a63efeb3e370fba6f971
Jan 15 23:49:29.334351 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 15 23:49:29.362549 systemd-resolved[265]: Positive Trust Anchors:
Jan 15 23:49:29.365727 systemd-resolved[265]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 15 23:49:29.386607 kernel: SCSI subsystem initialized
Jan 15 23:49:29.365750 systemd-resolved[265]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 15 23:49:29.422834 kernel: Loading iSCSI transport class v2.0-870.
Jan 15 23:49:29.367510 systemd-resolved[265]: Defaulting to hostname 'linux'.
Jan 15 23:49:29.368255 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 15 23:49:29.379032 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 15 23:49:29.443112 kernel: iscsi: registered transport (tcp)
Jan 15 23:49:29.454889 kernel: iscsi: registered transport (qla4xxx)
Jan 15 23:49:29.454952 kernel: QLogic iSCSI HBA Driver
Jan 15 23:49:29.468770 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 15 23:49:29.488150 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 15 23:49:29.494719 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 15 23:49:29.543808 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 15 23:49:29.551243 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 15 23:49:29.623113 kernel: raid6: neonx8 gen() 18559 MB/s
Jan 15 23:49:29.639101 kernel: raid6: neonx4 gen() 18543 MB/s
Jan 15 23:49:29.658101 kernel: raid6: neonx2 gen() 17103 MB/s
Jan 15 23:49:29.678100 kernel: raid6: neonx1 gen() 15068 MB/s
Jan 15 23:49:29.697123 kernel: raid6: int64x8 gen() 10551 MB/s
Jan 15 23:49:29.716100 kernel: raid6: int64x4 gen() 10609 MB/s
Jan 15 23:49:29.736100 kernel: raid6: int64x2 gen() 8992 MB/s
Jan 15 23:49:29.759026 kernel: raid6: int64x1 gen() 7015 MB/s
Jan 15 23:49:29.759139 kernel: raid6: using algorithm neonx8 gen() 18559 MB/s
Jan 15 23:49:29.780920 kernel: raid6: .... xor() 14893 MB/s, rmw enabled
Jan 15 23:49:29.780999 kernel: raid6: using neon recovery algorithm
Jan 15 23:49:29.791124 kernel: xor: measuring software checksum speed
Jan 15 23:49:29.791146 kernel: 8regs : 28597 MB/sec
Jan 15 23:49:29.793889 kernel: 32regs : 28789 MB/sec
Jan 15 23:49:29.797100 kernel: arm64_neon : 34998 MB/sec
Jan 15 23:49:29.797108 kernel: xor: using function: arm64_neon (34998 MB/sec)
Jan 15 23:49:29.839119 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 15 23:49:29.844725 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 15 23:49:29.855281 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 15 23:49:29.883226 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Jan 15 23:49:29.890781 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 15 23:49:29.905031 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 15 23:49:29.928207 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation
Jan 15 23:49:29.951135 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 15 23:49:29.962416 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 15 23:49:30.004942 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 15 23:49:30.021837 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 15 23:49:30.079416 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 15 23:49:30.087425 kernel: hv_vmbus: Vmbus version:5.3
Jan 15 23:49:30.079682 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 23:49:30.101806 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 23:49:30.126547 kernel: hv_vmbus: registering driver hid_hyperv
Jan 15 23:49:30.126566 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 15 23:49:30.126573 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 15 23:49:30.126580 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 15 23:49:30.126741 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 23:49:30.148812 kernel: hv_vmbus: registering driver hv_netvsc
Jan 15 23:49:30.148836 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 15 23:49:30.156025 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 15 23:49:30.156040 kernel: PTP clock support registered
Jan 15 23:49:30.136838 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 15 23:49:30.170060 kernel: hv_utils: Registering HyperV Utility Driver
Jan 15 23:49:30.170080 kernel: hv_vmbus: registering driver hv_utils
Jan 15 23:49:30.162324 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 15 23:49:29.886042 kernel: hv_vmbus: registering driver hv_storvsc
Jan 15 23:49:29.888002 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 15 23:49:29.888018 kernel: hv_utils: Heartbeat IC version 3.0
Jan 15 23:49:29.888024 kernel: scsi host0: storvsc_host_t
Jan 15 23:49:29.888139 kernel: scsi host1: storvsc_host_t
Jan 15 23:49:29.888205 kernel: hv_utils: Shutdown IC version 3.2
Jan 15 23:49:29.888212 kernel: hv_utils: TimeSync IC version 4.0
Jan 15 23:49:29.888217 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 15 23:49:29.888234 systemd-journald[225]: Time jumped backwards, rotating.
Jan 15 23:49:30.162671 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 23:49:30.192395 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 23:49:29.878683 systemd-resolved[265]: Clock change detected. Flushing caches.
Jan 15 23:49:29.918977 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jan 15 23:49:29.933121 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 23:49:29.966933 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 15 23:49:29.967156 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 15 23:49:29.967263 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 15 23:49:29.967327 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 15 23:49:29.967397 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 15 23:49:29.967464 kernel: hv_netvsc 7ced8db9-668a-7ced-8db9-668a7ced8db9 eth0: VF slot 1 added
Jan 15 23:49:29.967557 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#252 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jan 15 23:49:29.976236 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#195 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jan 15 23:49:29.989427 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 15 23:49:29.989469 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 15 23:49:29.994590 kernel: hv_vmbus: registering driver hv_pci
Jan 15 23:49:29.994644 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 15 23:49:30.000600 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 15 23:49:30.007165 kernel: hv_pci 0b67309e-f65c-4fcc-8679-d615f54bb172: PCI VMBus probing: Using version 0x10004
Jan 15 23:49:30.014931 kernel: hv_pci 0b67309e-f65c-4fcc-8679-d615f54bb172: PCI host bridge to bus f65c:00
Jan 15 23:49:30.016498 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 15 23:49:30.022275 kernel: pci_bus f65c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 15 23:49:30.027957 kernel: pci_bus f65c:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 15 23:49:30.035608 kernel: pci f65c:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Jan 15 23:49:30.042494 kernel: pci f65c:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 15 23:49:30.052628 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#25 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 15 23:49:30.052790 kernel: pci f65c:00:02.0: enabling Extended Tags
Jan 15 23:49:30.074539 kernel: pci f65c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f65c:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Jan 15 23:49:30.074622 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#61 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 15 23:49:30.087811 kernel: pci_bus f65c:00: busn_res: [bus 00-ff] end is updated to 00
Jan 15 23:49:30.094021 kernel: pci f65c:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Jan 15 23:49:30.153112 kernel: mlx5_core f65c:00:02.0: enabling device (0000 -> 0002)
Jan 15 23:49:30.162282 kernel: mlx5_core f65c:00:02.0: PTM is not supported by PCIe
Jan 15 23:49:30.162504 kernel: mlx5_core f65c:00:02.0: firmware version: 16.30.5026
Jan 15 23:49:30.339728 kernel: hv_netvsc 7ced8db9-668a-7ced-8db9-668a7ced8db9 eth0: VF registering: eth1
Jan 15 23:49:30.339928 kernel: mlx5_core f65c:00:02.0 eth1: joined to eth0
Jan 15 23:49:30.347563 kernel: mlx5_core f65c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 15 23:49:30.363503 kernel: mlx5_core f65c:00:02.0 enP63068s1: renamed from eth1
Jan 15 23:49:30.501447 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 15 23:49:30.594535 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 15 23:49:30.631655 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 15 23:49:30.637289 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 15 23:49:30.649020 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 15 23:49:30.721816 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 15 23:49:30.860125 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 15 23:49:30.864885 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 15 23:49:30.874399 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 15 23:49:30.883822 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 15 23:49:30.893732 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 15 23:49:30.928100 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 15 23:49:31.690871 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#211 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jan 15 23:49:31.703591 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 15 23:49:31.704038 disk-uuid[643]: The operation has completed successfully.
Jan 15 23:49:31.778093 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 15 23:49:31.778190 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 15 23:49:31.806867 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 15 23:49:31.828989 sh[822]: Success
Jan 15 23:49:31.862648 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 15 23:49:31.862694 kernel: device-mapper: uevent: version 1.0.3
Jan 15 23:49:31.868349 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 15 23:49:31.877519 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jan 15 23:49:32.134730 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 15 23:49:32.147013 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 15 23:49:32.153014 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 15 23:49:32.188396 kernel: BTRFS: device fsid 4e574c26-9d5a-48bc-a727-ae12db8ee9fc devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (840)
Jan 15 23:49:32.188445 kernel: BTRFS info (device dm-0): first mount of filesystem 4e574c26-9d5a-48bc-a727-ae12db8ee9fc
Jan 15 23:49:32.193071 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 15 23:49:32.475909 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 15 23:49:32.475988 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 15 23:49:32.549260 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 15 23:49:32.553264 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 15 23:49:32.561629 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 15 23:49:32.562340 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 15 23:49:32.584213 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 15 23:49:32.620551 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (872)
Jan 15 23:49:32.625545 kernel: BTRFS info (device sda6): first mount of filesystem c6a95867-5704-41e1-8beb-48e00b50aef1
Jan 15 23:49:32.632845 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 23:49:32.657349 kernel: BTRFS info (device sda6): turning on async discard
Jan 15 23:49:32.657412 kernel: BTRFS info (device sda6): enabling free space tree
Jan 15 23:49:32.667517 kernel: BTRFS info (device sda6): last unmount of filesystem c6a95867-5704-41e1-8beb-48e00b50aef1
Jan 15 23:49:32.668889 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 15 23:49:32.676888 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 15 23:49:32.706530 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 15 23:49:32.719043 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 15 23:49:32.749711 systemd-networkd[1009]: lo: Link UP
Jan 15 23:49:32.749722 systemd-networkd[1009]: lo: Gained carrier
Jan 15 23:49:32.750463 systemd-networkd[1009]: Enumeration completed
Jan 15 23:49:32.753603 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 15 23:49:32.754060 systemd-networkd[1009]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 15 23:49:32.754064 systemd-networkd[1009]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 15 23:49:32.761428 systemd[1]: Reached target network.target - Network.
Jan 15 23:49:32.825505 kernel: mlx5_core f65c:00:02.0 enP63068s1: Link up
Jan 15 23:49:32.861509 kernel: hv_netvsc 7ced8db9-668a-7ced-8db9-668a7ced8db9 eth0: Data path switched to VF: enP63068s1
Jan 15 23:49:32.862098 systemd-networkd[1009]: enP63068s1: Link UP
Jan 15 23:49:32.862188 systemd-networkd[1009]: eth0: Link UP
Jan 15 23:49:32.862316 systemd-networkd[1009]: eth0: Gained carrier
Jan 15 23:49:32.862331 systemd-networkd[1009]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 15 23:49:32.880969 systemd-networkd[1009]: enP63068s1: Gained carrier
Jan 15 23:49:32.896534 systemd-networkd[1009]: eth0: DHCPv4 address 10.200.20.29/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 15 23:49:33.910925 ignition[988]: Ignition 2.22.0
Jan 15 23:49:33.910942 ignition[988]: Stage: fetch-offline
Jan 15 23:49:33.915368 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 15 23:49:33.911054 ignition[988]: no configs at "/usr/lib/ignition/base.d"
Jan 15 23:49:33.923582 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 15 23:49:33.911061 ignition[988]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 23:49:33.911134 ignition[988]: parsed url from cmdline: ""
Jan 15 23:49:33.911137 ignition[988]: no config URL provided
Jan 15 23:49:33.911140 ignition[988]: reading system config file "/usr/lib/ignition/user.ign"
Jan 15 23:49:33.911145 ignition[988]: no config at "/usr/lib/ignition/user.ign"
Jan 15 23:49:33.911149 ignition[988]: failed to fetch config: resource requires networking
Jan 15 23:49:33.911427 ignition[988]: Ignition finished successfully
Jan 15 23:49:33.961468 ignition[1019]: Ignition 2.22.0
Jan 15 23:49:33.961474 ignition[1019]: Stage: fetch
Jan 15 23:49:33.961689 ignition[1019]: no configs at "/usr/lib/ignition/base.d"
Jan 15 23:49:33.961697 ignition[1019]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 23:49:33.961775 ignition[1019]: parsed url from cmdline: ""
Jan 15 23:49:33.961778 ignition[1019]: no config URL provided
Jan 15 23:49:33.961782 ignition[1019]: reading system config file "/usr/lib/ignition/user.ign"
Jan 15 23:49:33.961791 ignition[1019]: no config at "/usr/lib/ignition/user.ign"
Jan 15 23:49:33.961806 ignition[1019]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 15 23:49:34.036677 ignition[1019]: GET result: OK
Jan 15 23:49:34.036768 ignition[1019]: config has been read from IMDS userdata
Jan 15 23:49:34.036799 ignition[1019]: parsing config with SHA512: bf381a20404dd7002c8d927b006234ee52be29c82ef997905904fdcd442238e85c53878e8a7eda1cf2e3f1d8da716fa940ca066b2adfc4337f15f032b49f3428
Jan 15 23:49:34.040350 unknown[1019]: fetched base config from "system"
Jan 15 23:49:34.040639 ignition[1019]: fetch: fetch complete
Jan 15 23:49:34.040356 unknown[1019]: fetched base config from "system"
Jan 15 23:49:34.040642 ignition[1019]: fetch: fetch passed
Jan 15 23:49:34.040360 unknown[1019]: fetched user config from "azure"
Jan 15 23:49:34.040684 ignition[1019]: Ignition finished successfully
Jan 15 23:49:34.042546 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 15 23:49:34.049577 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 15 23:49:34.089692 ignition[1026]: Ignition 2.22.0
Jan 15 23:49:34.089706 ignition[1026]: Stage: kargs
Jan 15 23:49:34.094040 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 15 23:49:34.089942 ignition[1026]: no configs at "/usr/lib/ignition/base.d"
Jan 15 23:49:34.101408 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 15 23:49:34.089951 ignition[1026]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 23:49:34.090425 ignition[1026]: kargs: kargs passed
Jan 15 23:49:34.090465 ignition[1026]: Ignition finished successfully
Jan 15 23:49:34.134328 ignition[1032]: Ignition 2.22.0
Jan 15 23:49:34.134344 ignition[1032]: Stage: disks
Jan 15 23:49:34.137875 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 15 23:49:34.134593 ignition[1032]: no configs at "/usr/lib/ignition/base.d"
Jan 15 23:49:34.142512 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 15 23:49:34.134600 ignition[1032]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 23:49:34.150317 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 15 23:49:34.135141 ignition[1032]: disks: disks passed
Jan 15 23:49:34.160463 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 15 23:49:34.135188 ignition[1032]: Ignition finished successfully
Jan 15 23:49:34.168853 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 15 23:49:34.178588 systemd[1]: Reached target basic.target - Basic System.
Jan 15 23:49:34.187684 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 15 23:49:34.250658 systemd-networkd[1009]: eth0: Gained IPv6LL
Jan 15 23:49:34.273314 systemd-fsck[1040]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Jan 15 23:49:34.281677 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 15 23:49:34.288971 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 15 23:49:34.508520 kernel: EXT4-fs (sda9): mounted filesystem e775b4a8-7fa9-4c45-80b7-b5e0f0a5e4b9 r/w with ordered data mode. Quota mode: none.
Jan 15 23:49:34.508719 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 15 23:49:34.512671 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 15 23:49:34.534858 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 15 23:49:34.549125 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 15 23:49:34.558780 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 15 23:49:34.570255 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 15 23:49:34.570297 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 15 23:49:34.588540 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 15 23:49:34.602468 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 15 23:49:34.619508 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1054)
Jan 15 23:49:34.629759 kernel: BTRFS info (device sda6): first mount of filesystem c6a95867-5704-41e1-8beb-48e00b50aef1
Jan 15 23:49:34.629806 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 23:49:34.640216 kernel: BTRFS info (device sda6): turning on async discard
Jan 15 23:49:34.640284 kernel: BTRFS info (device sda6): enabling free space tree
Jan 15 23:49:34.641471 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 15 23:49:35.146430 coreos-metadata[1056]: Jan 15 23:49:35.146 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 15 23:49:35.155296 coreos-metadata[1056]: Jan 15 23:49:35.155 INFO Fetch successful
Jan 15 23:49:35.159644 coreos-metadata[1056]: Jan 15 23:49:35.155 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 15 23:49:35.167764 coreos-metadata[1056]: Jan 15 23:49:35.165 INFO Fetch successful
Jan 15 23:49:35.182718 coreos-metadata[1056]: Jan 15 23:49:35.182 INFO wrote hostname ci-4459.2.2-n-5fd64d3fe1 to /sysroot/etc/hostname
Jan 15 23:49:35.191146 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 15 23:49:35.691802 initrd-setup-root[1084]: cut: /sysroot/etc/passwd: No such file or directory
Jan 15 23:49:35.749403 initrd-setup-root[1091]: cut: /sysroot/etc/group: No such file or directory
Jan 15 23:49:35.772505 initrd-setup-root[1098]: cut: /sysroot/etc/shadow: No such file or directory
Jan 15 23:49:35.779998 initrd-setup-root[1105]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 15 23:49:36.817345 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 15 23:49:36.822838 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 15 23:49:36.843050 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 15 23:49:36.861391 kernel: BTRFS info (device sda6): last unmount of filesystem c6a95867-5704-41e1-8beb-48e00b50aef1
Jan 15 23:49:36.855978 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 15 23:49:36.883631 ignition[1172]: INFO : Ignition 2.22.0
Jan 15 23:49:36.883631 ignition[1172]: INFO : Stage: mount
Jan 15 23:49:36.883631 ignition[1172]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 15 23:49:36.883631 ignition[1172]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 23:49:36.903880 ignition[1172]: INFO : mount: mount passed
Jan 15 23:49:36.903880 ignition[1172]: INFO : Ignition finished successfully
Jan 15 23:49:36.889638 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 15 23:49:36.899609 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 15 23:49:36.909162 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 15 23:49:36.932109 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 15 23:49:36.953524 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1186)
Jan 15 23:49:36.963902 kernel: BTRFS info (device sda6): first mount of filesystem c6a95867-5704-41e1-8beb-48e00b50aef1
Jan 15 23:49:36.963961 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 23:49:36.972747 kernel: BTRFS info (device sda6): turning on async discard
Jan 15 23:49:36.972814 kernel: BTRFS info (device sda6): enabling free space tree
Jan 15 23:49:36.974359 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 15 23:49:37.004820 ignition[1203]: INFO : Ignition 2.22.0 Jan 15 23:49:37.004820 ignition[1203]: INFO : Stage: files Jan 15 23:49:37.011316 ignition[1203]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 23:49:37.011316 ignition[1203]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 23:49:37.011316 ignition[1203]: DEBUG : files: compiled without relabeling support, skipping Jan 15 23:49:37.028380 ignition[1203]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 15 23:49:37.028380 ignition[1203]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 15 23:49:37.122709 ignition[1203]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 15 23:49:37.128591 ignition[1203]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 15 23:49:37.128591 ignition[1203]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 15 23:49:37.123907 unknown[1203]: wrote ssh authorized keys file for user: core Jan 15 23:49:37.161564 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 15 23:49:37.169731 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 15 23:49:37.195949 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 15 23:49:37.328399 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 15 23:49:37.328399 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 15 23:49:37.343091 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 15 23:49:37.343091 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 15 23:49:37.343091 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 15 23:49:37.343091 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 23:49:37.343091 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 23:49:37.343091 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 23:49:37.343091 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 23:49:37.393837 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 23:49:37.393837 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 23:49:37.393837 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 15 23:49:37.418684 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 15 23:49:37.418684 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 15 23:49:37.418684 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 15 23:49:37.931298 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 15 23:49:38.234405 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 15 23:49:38.234405 ignition[1203]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 15 23:49:38.367619 ignition[1203]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 23:49:38.398398 ignition[1203]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 23:49:38.398398 ignition[1203]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 15 23:49:38.398398 ignition[1203]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 15 23:49:38.427809 ignition[1203]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 15 23:49:38.427809 ignition[1203]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 15 23:49:38.427809 ignition[1203]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 15 23:49:38.427809 ignition[1203]: INFO : files: files passed Jan 15 23:49:38.427809 ignition[1203]: INFO : Ignition finished successfully Jan 15 23:49:38.407844 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 15 23:49:38.418141 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 15 23:49:38.450107 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 15 23:49:38.467011 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 15 23:49:38.467108 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 15 23:49:38.494508 initrd-setup-root-after-ignition[1232]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 23:49:38.494508 initrd-setup-root-after-ignition[1232]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 15 23:49:38.507968 initrd-setup-root-after-ignition[1236]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 23:49:38.515063 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 23:49:38.525370 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 15 23:49:38.534947 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 15 23:49:38.587426 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 15 23:49:38.587570 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
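[Editor's note] The Ignition files stage above is driven by a declarative JSON config supplied by the provisioner; the config itself never appears in the log. Purely as an illustration, a Python-rendered sketch of the kind of spec-3.x document that would produce ops like op(3) (fetch Helm) and op(b)/op(d) (install and enable prepare-helm.service) — URLs and the unit name come from the log, the rest is assumed:

    import json

    # Hypothetical reconstruction; not the actual config used on this node.
    config = {
        "ignition": {"version": "3.4.0"},
        "storage": {
            "files": [{
                "path": "/opt/helm-v3.17.0-linux-arm64.tar.gz",
                "contents": {
                    "source": "https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz"
                },
            }],
        },
        "systemd": {
            "units": [{
                "name": "prepare-helm.service",
                "enabled": True,  # matches op(d): "setting preset to enabled"
            }],
        },
    }

    print(json.dumps(config, indent=2))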
Jan 15 23:49:38.596974 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 15 23:49:38.605910 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 15 23:49:38.614069 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 15 23:49:38.614895 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 15 23:49:38.650447 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 23:49:38.658556 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 15 23:49:38.682239 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 15 23:49:38.687712 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 23:49:38.696955 systemd[1]: Stopped target timers.target - Timer Units. Jan 15 23:49:38.705085 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 15 23:49:38.705202 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 23:49:38.717192 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 15 23:49:38.721586 systemd[1]: Stopped target basic.target - Basic System. Jan 15 23:49:38.729668 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 15 23:49:38.738397 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 23:49:38.746542 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 15 23:49:38.755597 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 15 23:49:38.765095 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 15 23:49:38.773505 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 23:49:38.783132 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 15 23:49:38.791526 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 15 23:49:38.800890 systemd[1]: Stopped target swap.target - Swaps. Jan 15 23:49:38.808085 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 15 23:49:38.808199 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 15 23:49:38.819152 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 15 23:49:38.823608 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 23:49:38.832388 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 15 23:49:38.836118 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 23:49:38.841527 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 15 23:49:38.841634 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 15 23:49:38.854382 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 15 23:49:38.854482 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 23:49:38.861156 systemd[1]: ignition-files.service: Deactivated successfully. Jan 15 23:49:38.861238 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 15 23:49:38.869688 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 15 23:49:38.869753 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jan 15 23:49:38.881309 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 15 23:49:38.895099 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 15 23:49:38.965864 ignition[1256]: INFO : Ignition 2.22.0 Jan 15 23:49:38.965864 ignition[1256]: INFO : Stage: umount Jan 15 23:49:38.965864 ignition[1256]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 23:49:38.965864 ignition[1256]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 23:49:38.965864 ignition[1256]: INFO : umount: umount passed Jan 15 23:49:38.965864 ignition[1256]: INFO : Ignition finished successfully Jan 15 23:49:38.895261 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 23:49:38.915681 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 15 23:49:38.934270 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 15 23:49:38.934465 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 23:49:38.949309 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 15 23:49:38.949416 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 23:49:38.967419 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 15 23:49:38.968121 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 15 23:49:38.968200 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 15 23:49:38.976818 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 15 23:49:38.977048 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 15 23:49:38.983561 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 15 23:49:38.983615 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 15 23:49:38.994127 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 15 23:49:38.994184 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 15 23:49:39.004454 systemd[1]: Stopped target network.target - Network. Jan 15 23:49:39.010154 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 15 23:49:39.010231 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 23:49:39.018460 systemd[1]: Stopped target paths.target - Path Units. Jan 15 23:49:39.026385 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 15 23:49:39.032974 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 23:49:39.038372 systemd[1]: Stopped target slices.target - Slice Units. Jan 15 23:49:39.046265 systemd[1]: Stopped target sockets.target - Socket Units. Jan 15 23:49:39.054916 systemd[1]: iscsid.socket: Deactivated successfully. Jan 15 23:49:39.054960 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 23:49:39.063592 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 15 23:49:39.063631 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 23:49:39.071898 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 15 23:49:39.071954 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 15 23:49:39.080016 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 15 23:49:39.080045 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 15 23:49:39.087975 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Jan 15 23:49:39.096128 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 15 23:49:39.106006 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 15 23:49:39.106098 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 15 23:49:39.115091 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 15 23:49:39.302068 kernel: hv_netvsc 7ced8db9-668a-7ced-8db9-668a7ced8db9 eth0: Data path switched from VF: enP63068s1 Jan 15 23:49:39.116088 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 15 23:49:39.132218 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 15 23:49:39.132603 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 15 23:49:39.132706 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 15 23:49:39.144373 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 15 23:49:39.147011 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 15 23:49:39.153332 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 15 23:49:39.153376 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 15 23:49:39.163611 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 15 23:49:39.177300 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 15 23:49:39.177389 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 23:49:39.188392 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 15 23:49:39.188456 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 15 23:49:39.196921 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 15 23:49:39.196967 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 15 23:49:39.201664 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 15 23:49:39.201704 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 23:49:39.213900 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 23:49:39.222465 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 15 23:49:39.222585 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 15 23:49:39.244435 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 15 23:49:39.244728 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 15 23:49:39.252826 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 15 23:49:39.252967 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 23:49:39.262662 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 15 23:49:39.262732 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 15 23:49:39.270698 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 15 23:49:39.270731 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 23:49:39.279120 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 15 23:49:39.279166 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 15 23:49:39.298245 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jan 15 23:49:39.298312 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 15 23:49:39.310143 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 23:49:39.310190 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 23:49:39.320776 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 15 23:49:39.320831 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 15 23:49:39.329855 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 15 23:49:39.338563 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 15 23:49:39.338626 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 23:49:39.539599 systemd-journald[225]: Received SIGTERM from PID 1 (systemd). Jan 15 23:49:39.348981 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 15 23:49:39.349028 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 23:49:39.366868 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 23:49:39.366932 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 23:49:39.377011 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 15 23:49:39.377060 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 15 23:49:39.377087 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 15 23:49:39.377388 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 15 23:49:39.377480 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 15 23:49:39.385454 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 15 23:49:39.385571 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 15 23:49:39.395341 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 15 23:49:39.405187 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 15 23:49:39.430388 systemd[1]: Switching root. Jan 15 23:49:39.600243 systemd-journald[225]: Journal stopped Jan 15 23:49:43.950744 kernel: SELinux: policy capability network_peer_controls=1 Jan 15 23:49:43.954554 kernel: SELinux: policy capability open_perms=1 Jan 15 23:49:43.954574 kernel: SELinux: policy capability extended_socket_class=1 Jan 15 23:49:43.954580 kernel: SELinux: policy capability always_check_network=0 Jan 15 23:49:43.954586 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 15 23:49:43.954597 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 15 23:49:43.954603 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 15 23:49:43.954609 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 15 23:49:43.954614 kernel: SELinux: policy capability userspace_initial_context=0 Jan 15 23:49:43.954620 kernel: audit: type=1403 audit(1768520980.648:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 15 23:49:43.954627 systemd[1]: Successfully loaded SELinux policy in 186.820ms. Jan 15 23:49:43.954636 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.436ms. 
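[Editor's note] After the switch-root above, PID 1 loads the SELinux policy (186.820ms here) before relabeling /dev, /dev/shm and /run. The resulting mode can be read back from the selinuxfs mount; a small sketch under the assumption that selinuxfs is at its usual /sys/fs/selinux location:

    def selinux_mode(selinux_fs="/sys/fs/selinux"):
        # /sys/fs/selinux/enforce holds "1" (enforcing) or "0" (permissive);
        # the file is absent when SELinux is disabled or the fs is unmounted.
        try:
            with open(f"{selinux_fs}/enforce") as f:
                return "enforcing" if f.read().strip() == "1" else "permissive"
        except FileNotFoundError:
            return "disabled"

    print(selinux_mode())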
Jan 15 23:49:43.954643 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 15 23:49:43.954652 systemd[1]: Detected virtualization microsoft. Jan 15 23:49:43.954659 systemd[1]: Detected architecture arm64. Jan 15 23:49:43.954665 systemd[1]: Detected first boot. Jan 15 23:49:43.954672 systemd[1]: Hostname set to <ci-4459.2.2-n-5fd64d3fe1>. Jan 15 23:49:43.954678 systemd[1]: Initializing machine ID from random generator. Jan 15 23:49:43.954684 zram_generator::config[1298]: No configuration found. Jan 15 23:49:43.954691 kernel: NET: Registered PF_VSOCK protocol family Jan 15 23:49:43.954697 systemd[1]: Populated /etc with preset unit settings. Jan 15 23:49:43.954704 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 15 23:49:43.954710 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 15 23:49:43.954717 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 15 23:49:43.954724 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 15 23:49:43.954730 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 15 23:49:43.954737 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 15 23:49:43.954743 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 15 23:49:43.954749 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 15 23:49:43.954755 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 15 23:49:43.954762 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 15 23:49:43.954769 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 15 23:49:43.954775 systemd[1]: Created slice user.slice - User and Session Slice. Jan 15 23:49:43.954781 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 23:49:43.954788 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 23:49:43.954794 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 15 23:49:43.954801 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 15 23:49:43.954807 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 15 23:49:43.954814 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 23:49:43.954821 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 15 23:49:43.954828 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 23:49:43.954834 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 23:49:43.954841 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 15 23:49:43.954847 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 15 23:49:43.954853 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 15 23:49:43.954859 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 15 23:49:43.954866 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 23:49:43.954872 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 23:49:43.954878 systemd[1]: Reached target slices.target - Slice Units. Jan 15 23:49:43.954885 systemd[1]: Reached target swap.target - Swaps. Jan 15 23:49:43.954891 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 15 23:49:43.954897 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 15 23:49:43.954905 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 15 23:49:43.954911 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 23:49:43.954917 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 23:49:43.954925 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 23:49:43.954931 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 15 23:49:43.954937 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 15 23:49:43.954943 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 15 23:49:43.954951 systemd[1]: Mounting media.mount - External Media Directory... Jan 15 23:49:43.954957 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 15 23:49:43.954963 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 15 23:49:43.954969 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 15 23:49:43.954976 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 15 23:49:43.954983 systemd[1]: Reached target machines.target - Containers. Jan 15 23:49:43.954989 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 15 23:49:43.954995 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 23:49:43.955003 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 23:49:43.955009 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 15 23:49:43.955015 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 23:49:43.955021 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 23:49:43.955028 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 23:49:43.955034 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 15 23:49:43.955040 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 23:49:43.955047 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 15 23:49:43.955053 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 15 23:49:43.955061 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 15 23:49:43.955067 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 15 23:49:43.955074 systemd[1]: Stopped systemd-fsck-usr.service. 
Jan 15 23:49:43.955080 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 23:49:43.955087 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 23:49:43.955093 kernel: fuse: init (API version 7.41) Jan 15 23:49:43.955098 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 23:49:43.955104 kernel: loop: module loaded Jan 15 23:49:43.955111 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 15 23:49:43.955118 kernel: ACPI: bus type drm_connector registered Jan 15 23:49:43.955124 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 15 23:49:43.955130 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 15 23:49:43.955165 systemd-journald[1395]: Collecting audit messages is disabled. Jan 15 23:49:43.955182 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 23:49:43.955190 systemd-journald[1395]: Journal started Jan 15 23:49:43.955206 systemd-journald[1395]: Runtime Journal (/run/log/journal/be06fd1d1f654f3eae1aa3790a2b86ff) is 8M, max 78.3M, 70.3M free. Jan 15 23:49:43.161107 systemd[1]: Queued start job for default target multi-user.target. Jan 15 23:49:43.179089 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 15 23:49:43.179523 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 15 23:49:43.179808 systemd[1]: systemd-journald.service: Consumed 2.459s CPU time. Jan 15 23:49:43.968346 systemd[1]: verity-setup.service: Deactivated successfully. Jan 15 23:49:43.968425 systemd[1]: Stopped verity-setup.service. Jan 15 23:49:43.981474 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 23:49:43.982168 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 15 23:49:43.986407 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 15 23:49:43.990982 systemd[1]: Mounted media.mount - External Media Directory. Jan 15 23:49:43.994912 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 15 23:49:43.999243 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 15 23:49:44.003919 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 15 23:49:44.008152 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 15 23:49:44.012974 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 23:49:44.018571 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 15 23:49:44.018709 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 15 23:49:44.023718 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 23:49:44.023843 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 23:49:44.028913 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 23:49:44.029045 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 23:49:44.033313 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 23:49:44.033428 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 23:49:44.038713 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jan 15 23:49:44.038846 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 15 23:49:44.043431 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 23:49:44.043566 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 23:49:44.048272 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 23:49:44.053946 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 23:49:44.059668 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 15 23:49:44.073083 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 15 23:49:44.080597 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 15 23:49:44.093524 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 15 23:49:44.101045 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 15 23:49:44.101083 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 23:49:44.106106 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 15 23:49:44.115703 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 15 23:49:44.120068 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 23:49:44.121618 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 15 23:49:44.129505 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 15 23:49:44.135376 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 23:49:44.136470 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 15 23:49:44.141097 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 23:49:44.142530 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 23:49:44.149055 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 15 23:49:44.159678 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 15 23:49:44.169642 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 15 23:49:44.178352 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 23:49:44.184740 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 15 23:49:44.190396 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 15 23:49:44.191210 systemd-journald[1395]: Time spent on flushing to /var/log/journal/be06fd1d1f654f3eae1aa3790a2b86ff is 53.559ms for 933 entries. Jan 15 23:49:44.191210 systemd-journald[1395]: System Journal (/var/log/journal/be06fd1d1f654f3eae1aa3790a2b86ff) is 11.8M, max 2.6G, 2.6G free. Jan 15 23:49:44.288304 systemd-journald[1395]: Received client request to flush runtime journal. 
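[Editor's note] The flush request above is journald moving the volatile /run journal into persistent storage under /var/log/journal; the size figures (8M runtime, 11.8M system) come from its internal accounting. The same numbers can be sampled on a live system; a sketch that shells out to journalctl (assumed to be on PATH):

    import subprocess

    def journal_disk_usage():
        # `journalctl --disk-usage` reports the combined size of archived
        # and active journal files.
        out = subprocess.run(["journalctl", "--disk-usage"],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    print(journal_disk_usage())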
Jan 15 23:49:44.288339 kernel: loop0: detected capacity change from 0 to 207008 Jan 15 23:49:44.288348 systemd-journald[1395]: /var/log/journal/be06fd1d1f654f3eae1aa3790a2b86ff/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jan 15 23:49:44.288362 systemd-journald[1395]: Rotating system journal. Jan 15 23:49:44.202608 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 15 23:49:44.217638 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 15 23:49:44.236723 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 15 23:49:44.263534 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 23:49:44.290018 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 15 23:49:44.304506 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 15 23:49:44.329578 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 15 23:49:44.339656 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 23:49:44.356512 kernel: loop1: detected capacity change from 0 to 119840 Jan 15 23:49:44.378805 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 15 23:49:44.379435 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 15 23:49:44.439041 systemd-tmpfiles[1454]: ACLs are not supported, ignoring. Jan 15 23:49:44.439054 systemd-tmpfiles[1454]: ACLs are not supported, ignoring. Jan 15 23:49:44.442817 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 23:49:44.710511 kernel: loop2: detected capacity change from 0 to 100632 Jan 15 23:49:44.814460 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 15 23:49:44.820824 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 23:49:44.850532 systemd-udevd[1462]: Using default interface naming scheme 'v255'. Jan 15 23:49:45.082225 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 23:49:45.092643 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 23:49:45.133591 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 15 23:49:45.151624 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 15 23:49:45.202586 kernel: mousedev: PS/2 mouse device common for all mice Jan 15 23:49:45.257551 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 15 23:49:45.262503 kernel: loop3: detected capacity change from 0 to 27936 Jan 15 23:49:45.270186 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 15 23:49:45.282517 kernel: hv_vmbus: registering driver hv_balloon Jan 15 23:49:45.290023 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 15 23:49:45.290107 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 15 23:49:45.323115 kernel: hv_vmbus: registering driver hyperv_fb Jan 15 23:49:45.323202 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 15 23:49:45.329507 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 15 23:49:45.334140 kernel: Console: switching to colour dummy device 80x25 Jan 15 23:49:45.341806 kernel: Console: switching to colour frame buffer device 128x48 Jan 15 23:49:45.367049 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 23:49:45.399976 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 23:49:45.401370 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 23:49:45.413718 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 23:49:45.429471 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 23:49:45.429662 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 23:49:45.441737 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 23:49:45.504723 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 15 23:49:45.512205 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 15 23:49:45.521520 kernel: MACsec IEEE 802.1AE Jan 15 23:49:45.542064 systemd-networkd[1477]: lo: Link UP Jan 15 23:49:45.542071 systemd-networkd[1477]: lo: Gained carrier Jan 15 23:49:45.543017 systemd-networkd[1477]: Enumeration completed Jan 15 23:49:45.543140 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 23:49:45.543261 systemd-networkd[1477]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 23:49:45.543269 systemd-networkd[1477]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 23:49:45.550470 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 15 23:49:45.557928 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 15 23:49:45.569531 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 15 23:49:45.610505 kernel: mlx5_core f65c:00:02.0 enP63068s1: Link up Jan 15 23:49:45.633767 kernel: hv_netvsc 7ced8db9-668a-7ced-8db9-668a7ced8db9 eth0: Data path switched to VF: enP63068s1 Jan 15 23:49:45.635140 systemd-networkd[1477]: enP63068s1: Link UP Jan 15 23:49:45.635318 systemd-networkd[1477]: eth0: Link UP Jan 15 23:49:45.635325 systemd-networkd[1477]: eth0: Gained carrier Jan 15 23:49:45.635343 systemd-networkd[1477]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 23:49:45.636947 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
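[Editor's note] On Azure the synthetic hv_netvsc device (eth0) and the Mellanox VF (enP63068s1) are paired, and the kernel line above records the data path switching to the VF once it comes up; addresses stay on eth0. A sketch that inspects the resulting IPv4 configuration via iproute2's JSON output (assumes `ip` from iproute2 is available):

    import json
    import subprocess

    def ipv4_addrs(ifname="eth0"):
        # `ip -j` emits JSON; each link entry carries an addr_info list.
        out = subprocess.run(["ip", "-j", "-4", "addr", "show", ifname],
                             capture_output=True, text=True, check=True)
        return [a["local"] for link in json.loads(out.stdout)
                for a in link.get("addr_info", [])]

    print(ipv4_addrs())  # e.g. ['10.200.20.29'] once DHCP has completed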
Jan 15 23:49:45.643898 systemd-networkd[1477]: enP63068s1: Gained carrier Jan 15 23:49:45.650832 kernel: loop4: detected capacity change from 0 to 207008 Jan 15 23:49:45.652545 systemd-networkd[1477]: eth0: DHCPv4 address 10.200.20.29/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 15 23:49:45.668560 kernel: loop5: detected capacity change from 0 to 119840 Jan 15 23:49:45.682531 kernel: loop6: detected capacity change from 0 to 100632 Jan 15 23:49:45.695552 kernel: loop7: detected capacity change from 0 to 27936 Jan 15 23:49:45.704206 (sd-merge)[1608]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 15 23:49:45.704644 (sd-merge)[1608]: Merged extensions into '/usr'. Jan 15 23:49:45.707758 systemd[1]: Reload requested from client PID 1435 ('systemd-sysext') (unit systemd-sysext.service)... Jan 15 23:49:45.707774 systemd[1]: Reloading... Jan 15 23:49:45.771518 zram_generator::config[1641]: No configuration found. Jan 15 23:49:45.934336 systemd[1]: Reloading finished in 226 ms. Jan 15 23:49:45.952215 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 15 23:49:45.958254 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 23:49:45.969567 systemd[1]: Starting ensure-sysext.service... Jan 15 23:49:45.973468 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 23:49:45.990009 systemd-tmpfiles[1697]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 15 23:49:45.990034 systemd-tmpfiles[1697]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 15 23:49:45.990218 systemd-tmpfiles[1697]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 15 23:49:45.990350 systemd-tmpfiles[1697]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 15 23:49:45.990795 systemd-tmpfiles[1697]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 15 23:49:45.990936 systemd-tmpfiles[1697]: ACLs are not supported, ignoring. Jan 15 23:49:45.990967 systemd-tmpfiles[1697]: ACLs are not supported, ignoring. Jan 15 23:49:45.991555 systemd[1]: Reload requested from client PID 1696 ('systemctl') (unit ensure-sysext.service)... Jan 15 23:49:45.991638 systemd[1]: Reloading... Jan 15 23:49:46.031881 systemd-tmpfiles[1697]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 23:49:46.031894 systemd-tmpfiles[1697]: Skipping /boot Jan 15 23:49:46.037460 systemd-tmpfiles[1697]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 23:49:46.037474 systemd-tmpfiles[1697]: Skipping /boot Jan 15 23:49:46.060911 zram_generator::config[1731]: No configuration found. Jan 15 23:49:46.219782 systemd[1]: Reloading finished in 227 ms. Jan 15 23:49:46.236437 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 23:49:46.249559 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 15 23:49:46.269927 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 15 23:49:46.275269 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 23:49:46.286110 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
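[Editor's note] The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-azure extension images onto /usr, followed by a daemon reload. The merged state is queryable after boot; a sketch using the stock systemd-sysext tool (present on Flatcar):

    import subprocess

    def sysext_status():
        # `systemd-sysext status` lists each hierarchy and the extensions
        # currently merged into it.
        return subprocess.run(["systemd-sysext", "status"],
                              capture_output=True, text=True, check=True).stdout

    print(sysext_status())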
Jan 15 23:49:46.294743 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 23:49:46.305710 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 23:49:46.312510 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 23:49:46.312630 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 23:49:46.324729 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 15 23:49:46.341722 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 23:49:46.347681 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 15 23:49:46.354903 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 23:49:46.358813 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 23:49:46.364526 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 23:49:46.364675 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 23:49:46.370156 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 23:49:46.370293 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 23:49:46.381011 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 23:49:46.382761 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 23:49:46.392211 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 23:49:46.404392 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 23:49:46.409909 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 23:49:46.410020 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 23:49:46.412159 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 23:49:46.413537 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 23:49:46.419142 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 23:49:46.419276 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 23:49:46.425763 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 23:49:46.425906 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 23:49:46.435827 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 15 23:49:46.446612 systemd[1]: Finished ensure-sysext.service. Jan 15 23:49:46.450672 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 15 23:49:46.459045 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 23:49:46.460119 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 23:49:46.473028 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 15 23:49:46.479606 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 23:49:46.489620 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 23:49:46.494195 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 23:49:46.494237 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 23:49:46.494277 systemd[1]: Reached target time-set.target - System Time Set. Jan 15 23:49:46.497806 systemd-resolved[1797]: Positive Trust Anchors: Jan 15 23:49:46.498124 systemd-resolved[1797]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 23:49:46.498149 systemd-resolved[1797]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 23:49:46.499067 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 23:49:46.500282 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 23:49:46.503474 systemd-resolved[1797]: Using system hostname 'ci-4459.2.2-n-5fd64d3fe1'. Jan 15 23:49:46.506398 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 23:49:46.511810 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 23:49:46.511977 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 23:49:46.517392 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 23:49:46.517620 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 23:49:46.522589 augenrules[1829]: No rules Jan 15 23:49:46.523677 systemd[1]: audit-rules.service: Deactivated successfully. Jan 15 23:49:46.523844 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 15 23:49:46.528037 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 23:49:46.528169 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 23:49:46.535601 systemd[1]: Reached target network.target - Network. Jan 15 23:49:46.539624 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 23:49:46.544707 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 23:49:46.544772 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 23:49:46.666700 systemd-networkd[1477]: eth0: Gained IPv6LL Jan 15 23:49:46.669155 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 15 23:49:46.676670 systemd[1]: Reached target network-online.target - Network is Online. Jan 15 23:49:46.893134 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jan 15 23:49:46.899065 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 15 23:49:49.419318 ldconfig[1430]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 15 23:49:49.435059 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 15 23:49:49.441585 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 15 23:49:49.456541 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 15 23:49:49.462887 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 23:49:49.467357 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 15 23:49:49.472838 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 15 23:49:49.478277 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 15 23:49:49.483151 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 15 23:49:49.488685 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 15 23:49:49.494120 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 15 23:49:49.494153 systemd[1]: Reached target paths.target - Path Units. Jan 15 23:49:49.498025 systemd[1]: Reached target timers.target - Timer Units. Jan 15 23:49:49.516668 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 15 23:49:49.522930 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 15 23:49:49.528428 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 15 23:49:49.534643 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 15 23:49:49.539855 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 15 23:49:49.546339 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 15 23:49:49.551097 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 15 23:49:49.556453 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 15 23:49:49.561006 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 23:49:49.564928 systemd[1]: Reached target basic.target - Basic System. Jan 15 23:49:49.568573 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 15 23:49:49.568598 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 15 23:49:49.571117 systemd[1]: Starting chronyd.service - NTP client/server... Jan 15 23:49:49.581104 systemd[1]: Starting containerd.service - containerd container runtime... Jan 15 23:49:49.590756 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 15 23:49:49.600151 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 15 23:49:49.607616 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 15 23:49:49.617580 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Jan 15 23:49:49.623613 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 15 23:49:49.625825 jq[1851]: false Jan 15 23:49:49.628801 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 15 23:49:49.637652 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 15 23:49:49.644613 KVP[1856]: KVP starting; pid is:1856 Jan 15 23:49:49.644888 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 15 23:49:49.646544 chronyd[1846]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 15 23:49:49.647027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 23:49:49.653622 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 15 23:49:49.663590 kernel: hv_utils: KVP IC version 4.0 Jan 15 23:49:49.663929 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 15 23:49:49.668253 KVP[1856]: KVP LIC Version: 3.1 Jan 15 23:49:49.670754 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 15 23:49:49.677590 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 15 23:49:49.684689 chronyd[1846]: Timezone right/UTC failed leap second check, ignoring Jan 15 23:49:49.685109 chronyd[1846]: Loaded seccomp filter (level 2) Jan 15 23:49:49.686955 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 15 23:49:49.692382 extend-filesystems[1855]: Found /dev/sda6 Jan 15 23:49:49.699035 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 15 23:49:49.704176 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 15 23:49:49.704636 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 15 23:49:49.705102 systemd[1]: Starting update-engine.service - Update Engine... Jan 15 23:49:49.712411 extend-filesystems[1855]: Found /dev/sda9 Jan 15 23:49:49.720539 extend-filesystems[1855]: Checking size of /dev/sda9 Jan 15 23:49:49.720615 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 15 23:49:49.729506 jq[1881]: true Jan 15 23:49:49.733421 systemd[1]: Started chronyd.service - NTP client/server. Jan 15 23:49:49.740416 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 15 23:49:49.749144 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 15 23:49:49.750535 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 15 23:49:49.752212 systemd[1]: motdgen.service: Deactivated successfully. Jan 15 23:49:49.752354 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 15 23:49:49.761784 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 15 23:49:49.765770 extend-filesystems[1855]: Old size kept for /dev/sda9 Jan 15 23:49:49.766644 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 15 23:49:49.781641 systemd[1]: extend-filesystems.service: Deactivated successfully. 
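[Editor's note] chronyd starts above with a seccomp filter (level 2) and a rejected right/UTC leap-second zone, both normal on this image. Once it is running, its sync state can be queried with the chronyc client that ships alongside it; a sketch, assuming chronyc is on PATH and allowed to reach the daemon:

    import subprocess

    def chrony_tracking():
        # `chronyc tracking` reports reference ID, stratum, offset and
        # frequency error of the local clock.
        return subprocess.run(["chronyc", "tracking"],
                              capture_output=True, text=True, check=True).stdout

    print(chrony_tracking())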
Jan 15 23:49:49.782578 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 15 23:49:49.787135 update_engine[1874]: I20260115 23:49:49.785914 1874 main.cc:92] Flatcar Update Engine starting Jan 15 23:49:49.790375 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 15 23:49:49.814312 (ntainerd)[1898]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 15 23:49:49.814706 systemd-logind[1872]: New seat seat0. Jan 15 23:49:49.820256 systemd-logind[1872]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 15 23:49:49.827712 jq[1897]: true Jan 15 23:49:49.820457 systemd[1]: Started systemd-logind.service - User Login Management. Jan 15 23:49:49.854844 tar[1888]: linux-arm64/LICENSE Jan 15 23:49:49.854844 tar[1888]: linux-arm64/helm Jan 15 23:49:49.930400 dbus-daemon[1849]: [system] SELinux support is enabled Jan 15 23:49:49.930652 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 15 23:49:49.938078 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 15 23:49:49.938544 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 15 23:49:49.944210 update_engine[1874]: I20260115 23:49:49.942099 1874 update_check_scheduler.cc:74] Next update check in 11m51s Jan 15 23:49:49.946598 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 15 23:49:49.946620 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 15 23:49:49.953457 systemd[1]: Started update-engine.service - Update Engine. Jan 15 23:49:49.953501 dbus-daemon[1849]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 15 23:49:49.974841 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 15 23:49:50.001968 bash[1941]: Updated "/home/core/.ssh/authorized_keys" Jan 15 23:49:50.003805 coreos-metadata[1848]: Jan 15 23:49:50.002 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 15 23:49:50.005662 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 15 23:49:50.013722 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jan 15 23:49:50.015079 coreos-metadata[1848]: Jan 15 23:49:50.014 INFO Fetch successful Jan 15 23:49:50.015079 coreos-metadata[1848]: Jan 15 23:49:50.014 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 15 23:49:50.020713 coreos-metadata[1848]: Jan 15 23:49:50.020 INFO Fetch successful Jan 15 23:49:50.020713 coreos-metadata[1848]: Jan 15 23:49:50.020 INFO Fetching http://168.63.129.16/machine/51285d08-7113-40d0-be27-181c7d4e12e1/304e03b1%2De1fc%2D466f%2D9738%2De65553559b7b.%5Fci%2D4459.2.2%2Dn%2D5fd64d3fe1?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 15 23:49:50.022634 coreos-metadata[1848]: Jan 15 23:49:50.022 INFO Fetch successful Jan 15 23:49:50.022754 coreos-metadata[1848]: Jan 15 23:49:50.022 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 15 23:49:50.032662 coreos-metadata[1848]: Jan 15 23:49:50.032 INFO Fetch successful Jan 15 23:49:50.081044 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 15 23:49:50.090166 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 15 23:49:50.287450 sshd_keygen[1882]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 15 23:49:50.317590 locksmithd[1975]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 15 23:49:50.331850 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 15 23:49:50.338466 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 15 23:49:50.347284 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 15 23:49:50.364904 systemd[1]: issuegen.service: Deactivated successfully. Jan 15 23:49:50.365085 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 15 23:49:50.377101 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 15 23:49:50.398306 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 15 23:49:50.417197 containerd[1898]: time="2026-01-15T23:49:50Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 15 23:49:50.418184 containerd[1898]: time="2026-01-15T23:49:50.418152196Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 15 23:49:50.427886 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
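
[Note] The coreos-metadata fetches above touch two distinct Azure endpoints: the WireServer at 168.63.129.16 (versions, goal state, shared config) and the Instance Metadata Service at 169.254.169.254 (vmSize). The IMDS request can be reproduced by hand; the one detail not visible in the log is that IMDS only answers when the "Metadata: true" header is present:

    curl -s -H "Metadata: true" \
      "http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text"
    # prints the VM size as plain text (value depends on the instance)
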
Jan 15 23:49:50.435640 containerd[1898]: time="2026-01-15T23:49:50.434062628Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.672µs" Jan 15 23:49:50.435640 containerd[1898]: time="2026-01-15T23:49:50.434105308Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 15 23:49:50.435640 containerd[1898]: time="2026-01-15T23:49:50.434121604Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 15 23:49:50.435640 containerd[1898]: time="2026-01-15T23:49:50.434275300Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 15 23:49:50.435640 containerd[1898]: time="2026-01-15T23:49:50.434286564Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 15 23:49:50.435640 containerd[1898]: time="2026-01-15T23:49:50.434307516Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 15 23:49:50.435640 containerd[1898]: time="2026-01-15T23:49:50.434352428Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 15 23:49:50.435640 containerd[1898]: time="2026-01-15T23:49:50.434358852Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 15 23:49:50.435640 containerd[1898]: time="2026-01-15T23:49:50.434574612Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 15 23:49:50.435640 containerd[1898]: time="2026-01-15T23:49:50.434586212Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 15 23:49:50.435640 containerd[1898]: time="2026-01-15T23:49:50.434594756Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 15 23:49:50.435640 containerd[1898]: time="2026-01-15T23:49:50.434599668Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 15 23:49:50.435875 tar[1888]: linux-arm64/README.md Jan 15 23:49:50.435913 containerd[1898]: time="2026-01-15T23:49:50.434663996Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 15 23:49:50.435913 containerd[1898]: time="2026-01-15T23:49:50.434813028Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 15 23:49:50.435913 containerd[1898]: time="2026-01-15T23:49:50.434849684Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 15 23:49:50.435913 containerd[1898]: time="2026-01-15T23:49:50.434861292Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 15 23:49:50.435913 containerd[1898]: time="2026-01-15T23:49:50.434889292Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups 
type=io.containerd.monitor.task.v1 Jan 15 23:49:50.435913 containerd[1898]: time="2026-01-15T23:49:50.435075708Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 15 23:49:50.435913 containerd[1898]: time="2026-01-15T23:49:50.435135324Z" level=info msg="metadata content store policy set" policy=shared Jan 15 23:49:50.440743 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 15 23:49:50.452832 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 15 23:49:50.462019 systemd[1]: Reached target getty.target - Login Prompts. Jan 15 23:49:50.463173 containerd[1898]: time="2026-01-15T23:49:50.463127308Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 15 23:49:50.463327 containerd[1898]: time="2026-01-15T23:49:50.463313156Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 15 23:49:50.463795 containerd[1898]: time="2026-01-15T23:49:50.463776020Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 15 23:49:50.463878 containerd[1898]: time="2026-01-15T23:49:50.463864476Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 15 23:49:50.463927 containerd[1898]: time="2026-01-15T23:49:50.463916436Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 15 23:49:50.463967 containerd[1898]: time="2026-01-15T23:49:50.463957700Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 15 23:49:50.464007 containerd[1898]: time="2026-01-15T23:49:50.463998964Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 15 23:49:50.465509 containerd[1898]: time="2026-01-15T23:49:50.465311900Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 15 23:49:50.465509 containerd[1898]: time="2026-01-15T23:49:50.465341356Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 15 23:49:50.465509 containerd[1898]: time="2026-01-15T23:49:50.465350452Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 15 23:49:50.465509 containerd[1898]: time="2026-01-15T23:49:50.465357948Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 15 23:49:50.465509 containerd[1898]: time="2026-01-15T23:49:50.465376996Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 15 23:49:50.465687 containerd[1898]: time="2026-01-15T23:49:50.465669124Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 15 23:49:50.465771 containerd[1898]: time="2026-01-15T23:49:50.465758348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 15 23:49:50.465837 containerd[1898]: time="2026-01-15T23:49:50.465826148Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 15 23:49:50.465903 containerd[1898]: time="2026-01-15T23:49:50.465890132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 15 23:49:50.465945 containerd[1898]: time="2026-01-15T23:49:50.465935220Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 15 23:49:50.467513 containerd[1898]: time="2026-01-15T23:49:50.466535076Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 15 23:49:50.467513 containerd[1898]: time="2026-01-15T23:49:50.466570636Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 15 23:49:50.467513 containerd[1898]: time="2026-01-15T23:49:50.466579076Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 15 23:49:50.467513 containerd[1898]: time="2026-01-15T23:49:50.466588980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 15 23:49:50.467513 containerd[1898]: time="2026-01-15T23:49:50.466595508Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 15 23:49:50.467513 containerd[1898]: time="2026-01-15T23:49:50.466604940Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 15 23:49:50.467513 containerd[1898]: time="2026-01-15T23:49:50.466657652Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 15 23:49:50.467513 containerd[1898]: time="2026-01-15T23:49:50.466671524Z" level=info msg="Start snapshots syncer" Jan 15 23:49:50.467513 containerd[1898]: time="2026-01-15T23:49:50.466687108Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 15 23:49:50.467674 containerd[1898]: time="2026-01-15T23:49:50.466894932Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 15 23:49:50.467674 containerd[1898]: time="2026-01-15T23:49:50.466936780Z" level=info msg="loading 
plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 15 23:49:50.467753 containerd[1898]: time="2026-01-15T23:49:50.466970996Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 15 23:49:50.467753 containerd[1898]: time="2026-01-15T23:49:50.467084252Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 15 23:49:50.467753 containerd[1898]: time="2026-01-15T23:49:50.467103484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 15 23:49:50.467753 containerd[1898]: time="2026-01-15T23:49:50.467110700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 15 23:49:50.467753 containerd[1898]: time="2026-01-15T23:49:50.467117676Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 15 23:49:50.467753 containerd[1898]: time="2026-01-15T23:49:50.467125404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 15 23:49:50.467753 containerd[1898]: time="2026-01-15T23:49:50.467137308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 15 23:49:50.467753 containerd[1898]: time="2026-01-15T23:49:50.467145364Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 15 23:49:50.467753 containerd[1898]: time="2026-01-15T23:49:50.467166444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 15 23:49:50.467753 containerd[1898]: time="2026-01-15T23:49:50.467173644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 15 23:49:50.467753 containerd[1898]: time="2026-01-15T23:49:50.467180908Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 15 23:49:50.467753 containerd[1898]: time="2026-01-15T23:49:50.467203548Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 15 23:49:50.467753 containerd[1898]: time="2026-01-15T23:49:50.467215396Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 15 23:49:50.467753 containerd[1898]: time="2026-01-15T23:49:50.467220676Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 15 23:49:50.467904 containerd[1898]: time="2026-01-15T23:49:50.467227100Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 15 23:49:50.467904 containerd[1898]: time="2026-01-15T23:49:50.467232108Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 15 23:49:50.467904 containerd[1898]: time="2026-01-15T23:49:50.467237476Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 15 23:49:50.467904 containerd[1898]: time="2026-01-15T23:49:50.467245020Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 15 23:49:50.467904 containerd[1898]: time="2026-01-15T23:49:50.467257244Z" level=info msg="runtime 
interface created" Jan 15 23:49:50.467904 containerd[1898]: time="2026-01-15T23:49:50.467260876Z" level=info msg="created NRI interface" Jan 15 23:49:50.467904 containerd[1898]: time="2026-01-15T23:49:50.467265868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 15 23:49:50.467904 containerd[1898]: time="2026-01-15T23:49:50.467275052Z" level=info msg="Connect containerd service" Jan 15 23:49:50.467904 containerd[1898]: time="2026-01-15T23:49:50.467290828Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 15 23:49:50.470342 containerd[1898]: time="2026-01-15T23:49:50.469959900Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 15 23:49:50.470551 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 15 23:49:50.709158 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 23:49:50.842165 (kubelet)[2043]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 23:49:50.866571 containerd[1898]: time="2026-01-15T23:49:50.866507020Z" level=info msg="Start subscribing containerd event" Jan 15 23:49:50.866571 containerd[1898]: time="2026-01-15T23:49:50.866575932Z" level=info msg="Start recovering state" Jan 15 23:49:50.866730 containerd[1898]: time="2026-01-15T23:49:50.866664100Z" level=info msg="Start event monitor" Jan 15 23:49:50.866730 containerd[1898]: time="2026-01-15T23:49:50.866674884Z" level=info msg="Start cni network conf syncer for default" Jan 15 23:49:50.866730 containerd[1898]: time="2026-01-15T23:49:50.866679692Z" level=info msg="Start streaming server" Jan 15 23:49:50.866730 containerd[1898]: time="2026-01-15T23:49:50.866686604Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 15 23:49:50.866730 containerd[1898]: time="2026-01-15T23:49:50.866691260Z" level=info msg="runtime interface starting up..." Jan 15 23:49:50.866730 containerd[1898]: time="2026-01-15T23:49:50.866694756Z" level=info msg="starting plugins..." Jan 15 23:49:50.866730 containerd[1898]: time="2026-01-15T23:49:50.866705780Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 15 23:49:50.867203 containerd[1898]: time="2026-01-15T23:49:50.867178204Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 15 23:49:50.867230 containerd[1898]: time="2026-01-15T23:49:50.867221700Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 15 23:49:50.867440 systemd[1]: Started containerd.service - containerd container runtime. Jan 15 23:49:50.872411 containerd[1898]: time="2026-01-15T23:49:50.867460828Z" level=info msg="containerd successfully booted in 0.450445s" Jan 15 23:49:50.875753 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 15 23:49:50.884182 systemd[1]: Startup finished in 1.625s (kernel) + 12.169s (initrd) + 10.421s (userspace) = 24.216s. Jan 15 23:49:51.143540 login[2027]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 15 23:49:51.144979 login[2028]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:49:51.155786 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jan 15 23:49:51.158685 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 15 23:49:51.163276 systemd-logind[1872]: New session 1 of user core. Jan 15 23:49:51.191560 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 15 23:49:51.193367 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 15 23:49:51.219424 (systemd)[2059]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 15 23:49:51.223057 systemd-logind[1872]: New session c1 of user core. Jan 15 23:49:51.234735 kubelet[2043]: E0115 23:49:51.234681 2043 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 23:49:51.236750 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 23:49:51.236860 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 23:49:51.238731 systemd[1]: kubelet.service: Consumed 553ms CPU time, 252.9M memory peak. Jan 15 23:49:51.334150 systemd[2059]: Queued start job for default target default.target. Jan 15 23:49:51.348291 systemd[2059]: Created slice app.slice - User Application Slice. Jan 15 23:49:51.348774 systemd[2059]: Reached target paths.target - Paths. Jan 15 23:49:51.348822 systemd[2059]: Reached target timers.target - Timers. Jan 15 23:49:51.350015 systemd[2059]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 15 23:49:51.357204 systemd[2059]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 15 23:49:51.357249 systemd[2059]: Reached target sockets.target - Sockets. Jan 15 23:49:51.357280 systemd[2059]: Reached target basic.target - Basic System. Jan 15 23:49:51.357299 systemd[2059]: Reached target default.target - Main User Target. Jan 15 23:49:51.357324 systemd[2059]: Startup finished in 127ms. Jan 15 23:49:51.357428 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 15 23:49:51.359717 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 15 23:49:52.128188 waagent[2021]: 2026-01-15T23:49:52.128106Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 15 23:49:52.132865 waagent[2021]: 2026-01-15T23:49:52.132801Z INFO Daemon Daemon OS: flatcar 4459.2.2 Jan 15 23:49:52.137038 waagent[2021]: 2026-01-15T23:49:52.136995Z INFO Daemon Daemon Python: 3.11.13 Jan 15 23:49:52.140660 waagent[2021]: 2026-01-15T23:49:52.140617Z INFO Daemon Daemon Run daemon Jan 15 23:49:52.144143 waagent[2021]: 2026-01-15T23:49:52.144100Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.2' Jan 15 23:49:52.151813 waagent[2021]: 2026-01-15T23:49:52.151474Z INFO Daemon Daemon Using waagent for provisioning Jan 15 23:49:52.151620 login[2027]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:49:52.155910 waagent[2021]: 2026-01-15T23:49:52.155801Z INFO Daemon Daemon Activate resource disk Jan 15 23:49:52.159692 waagent[2021]: 2026-01-15T23:49:52.159644Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 15 23:49:52.167838 systemd-logind[1872]: New session 2 of user core. 
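
[Note] The kubelet exit above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal state of a node that has not yet joined a cluster: kubeadm init/join writes that file during bootstrap. As a sketch only, a minimal hand-written stand-in that clears this particular error could look like the following; both values are assumptions chosen to match the SystemdCgroup=true containerd runtime config shown earlier, and a real node still needs the kubeadm-generated credentials and flags beyond this:

    mkdir -p /var/lib/kubelet
    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF
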
Jan 15 23:49:52.170691 waagent[2021]: 2026-01-15T23:49:52.170641Z INFO Daemon Daemon Found device: None Jan 15 23:49:52.173968 waagent[2021]: 2026-01-15T23:49:52.173927Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 15 23:49:52.180445 waagent[2021]: 2026-01-15T23:49:52.180411Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 15 23:49:52.188345 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 15 23:49:52.190552 waagent[2021]: 2026-01-15T23:49:52.189507Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 15 23:49:52.194167 waagent[2021]: 2026-01-15T23:49:52.194122Z INFO Daemon Daemon Running default provisioning handler Jan 15 23:49:52.207173 waagent[2021]: 2026-01-15T23:49:52.207106Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 15 23:49:52.218318 waagent[2021]: 2026-01-15T23:49:52.217866Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 15 23:49:52.225050 waagent[2021]: 2026-01-15T23:49:52.225002Z INFO Daemon Daemon cloud-init is enabled: False Jan 15 23:49:52.229039 waagent[2021]: 2026-01-15T23:49:52.229004Z INFO Daemon Daemon Copying ovf-env.xml Jan 15 23:49:52.308636 waagent[2021]: 2026-01-15T23:49:52.308553Z INFO Daemon Daemon Successfully mounted dvd Jan 15 23:49:52.336474 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 15 23:49:52.338419 waagent[2021]: 2026-01-15T23:49:52.338357Z INFO Daemon Daemon Detect protocol endpoint Jan 15 23:49:52.342059 waagent[2021]: 2026-01-15T23:49:52.342017Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 15 23:49:52.346317 waagent[2021]: 2026-01-15T23:49:52.346284Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 15 23:49:52.350972 waagent[2021]: 2026-01-15T23:49:52.350942Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 15 23:49:52.354952 waagent[2021]: 2026-01-15T23:49:52.354921Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 15 23:49:52.358640 waagent[2021]: 2026-01-15T23:49:52.358611Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 15 23:49:52.408937 waagent[2021]: 2026-01-15T23:49:52.408849Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 15 23:49:52.413753 waagent[2021]: 2026-01-15T23:49:52.413731Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 15 23:49:52.417742 waagent[2021]: 2026-01-15T23:49:52.417714Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 15 23:49:52.581529 waagent[2021]: 2026-01-15T23:49:52.580949Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 15 23:49:52.585960 waagent[2021]: 2026-01-15T23:49:52.585907Z INFO Daemon Daemon Forcing an update of the goal state. 
Jan 15 23:49:52.593788 waagent[2021]: 2026-01-15T23:49:52.593744Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 15 23:49:52.611561 waagent[2021]: 2026-01-15T23:49:52.611523Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 15 23:49:52.615863 waagent[2021]: 2026-01-15T23:49:52.615828Z INFO Daemon Jan 15 23:49:52.617904 waagent[2021]: 2026-01-15T23:49:52.617875Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 2e040519-ae86-45f4-8929-b5a0c3c8daf7 eTag: 9977404105834630483 source: Fabric] Jan 15 23:49:52.627286 waagent[2021]: 2026-01-15T23:49:52.627251Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 15 23:49:52.632323 waagent[2021]: 2026-01-15T23:49:52.632287Z INFO Daemon Jan 15 23:49:52.634478 waagent[2021]: 2026-01-15T23:49:52.634450Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 15 23:49:52.643512 waagent[2021]: 2026-01-15T23:49:52.643468Z INFO Daemon Daemon Downloading artifacts profile blob Jan 15 23:49:52.710577 waagent[2021]: 2026-01-15T23:49:52.710422Z INFO Daemon Downloaded certificate {'thumbprint': '0B451FDAB88FF30CA0D7F2804B4A9C83D3562B65', 'hasPrivateKey': True} Jan 15 23:49:52.717891 waagent[2021]: 2026-01-15T23:49:52.717849Z INFO Daemon Fetch goal state completed Jan 15 23:49:52.728028 waagent[2021]: 2026-01-15T23:49:52.727976Z INFO Daemon Daemon Starting provisioning Jan 15 23:49:52.731797 waagent[2021]: 2026-01-15T23:49:52.731763Z INFO Daemon Daemon Handle ovf-env.xml. Jan 15 23:49:52.736353 waagent[2021]: 2026-01-15T23:49:52.736324Z INFO Daemon Daemon Set hostname [ci-4459.2.2-n-5fd64d3fe1] Jan 15 23:49:52.742312 waagent[2021]: 2026-01-15T23:49:52.742267Z INFO Daemon Daemon Publish hostname [ci-4459.2.2-n-5fd64d3fe1] Jan 15 23:49:52.747637 waagent[2021]: 2026-01-15T23:49:52.747596Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 15 23:49:52.752549 waagent[2021]: 2026-01-15T23:49:52.752515Z INFO Daemon Daemon Primary interface is [eth0] Jan 15 23:49:52.762166 systemd-networkd[1477]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 23:49:52.762173 systemd-networkd[1477]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 23:49:52.762206 systemd-networkd[1477]: eth0: DHCP lease lost Jan 15 23:49:52.763113 waagent[2021]: 2026-01-15T23:49:52.763066Z INFO Daemon Daemon Create user account if not exists Jan 15 23:49:52.767405 waagent[2021]: 2026-01-15T23:49:52.767364Z INFO Daemon Daemon User core already exists, skip useradd Jan 15 23:49:52.774535 waagent[2021]: 2026-01-15T23:49:52.771594Z INFO Daemon Daemon Configure sudoer Jan 15 23:49:52.780139 waagent[2021]: 2026-01-15T23:49:52.780090Z INFO Daemon Daemon Configure sshd Jan 15 23:49:52.787471 waagent[2021]: 2026-01-15T23:49:52.787419Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 15 23:49:52.787597 systemd-networkd[1477]: eth0: DHCPv4 address 10.200.20.29/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 15 23:49:52.797776 waagent[2021]: 2026-01-15T23:49:52.797720Z INFO Daemon Daemon Deploy ssh public key. 
Jan 15 23:49:53.914844 waagent[2021]: 2026-01-15T23:49:53.914798Z INFO Daemon Daemon Provisioning complete Jan 15 23:49:53.929597 waagent[2021]: 2026-01-15T23:49:53.929557Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 15 23:49:53.935731 waagent[2021]: 2026-01-15T23:49:53.935695Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 15 23:49:53.943637 waagent[2021]: 2026-01-15T23:49:53.943604Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 15 23:49:54.045536 waagent[2110]: 2026-01-15T23:49:54.044933Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 15 23:49:54.045536 waagent[2110]: 2026-01-15T23:49:54.045074Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.2 Jan 15 23:49:54.045536 waagent[2110]: 2026-01-15T23:49:54.045111Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 15 23:49:54.045536 waagent[2110]: 2026-01-15T23:49:54.045146Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jan 15 23:49:54.096873 waagent[2110]: 2026-01-15T23:49:54.096806Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 15 23:49:54.097219 waagent[2110]: 2026-01-15T23:49:54.097188Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 15 23:49:54.097356 waagent[2110]: 2026-01-15T23:49:54.097331Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 15 23:49:54.104431 waagent[2110]: 2026-01-15T23:49:54.103686Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 15 23:49:54.109262 waagent[2110]: 2026-01-15T23:49:54.109229Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 15 23:49:54.109812 waagent[2110]: 2026-01-15T23:49:54.109779Z INFO ExtHandler Jan 15 23:49:54.109957 waagent[2110]: 2026-01-15T23:49:54.109932Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 0dc597a0-e02c-4df3-8647-89b6c4caae83 eTag: 9977404105834630483 source: Fabric] Jan 15 23:49:54.110264 waagent[2110]: 2026-01-15T23:49:54.110236Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 15 23:49:54.110805 waagent[2110]: 2026-01-15T23:49:54.110774Z INFO ExtHandler Jan 15 23:49:54.110929 waagent[2110]: 2026-01-15T23:49:54.110907Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 15 23:49:54.114718 waagent[2110]: 2026-01-15T23:49:54.114692Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 15 23:49:54.172001 waagent[2110]: 2026-01-15T23:49:54.171885Z INFO ExtHandler Downloaded certificate {'thumbprint': '0B451FDAB88FF30CA0D7F2804B4A9C83D3562B65', 'hasPrivateKey': True} Jan 15 23:49:54.172607 waagent[2110]: 2026-01-15T23:49:54.172563Z INFO ExtHandler Fetch goal state completed Jan 15 23:49:54.186525 waagent[2110]: 2026-01-15T23:49:54.186023Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Jan 15 23:49:54.190007 waagent[2110]: 2026-01-15T23:49:54.189962Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2110 Jan 15 23:49:54.190134 waagent[2110]: 2026-01-15T23:49:54.190107Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 15 23:49:54.190411 waagent[2110]: 2026-01-15T23:49:54.190382Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 15 23:49:54.191609 waagent[2110]: 2026-01-15T23:49:54.191574Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] Jan 15 23:49:54.191935 waagent[2110]: 2026-01-15T23:49:54.191904Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 15 23:49:54.192051 waagent[2110]: 2026-01-15T23:49:54.192027Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 15 23:49:54.192476 waagent[2110]: 2026-01-15T23:49:54.192445Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 15 23:49:54.266441 waagent[2110]: 2026-01-15T23:49:54.266397Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 15 23:49:54.266651 waagent[2110]: 2026-01-15T23:49:54.266621Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 15 23:49:54.271534 waagent[2110]: 2026-01-15T23:49:54.271189Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 15 23:49:54.276060 systemd[1]: Reload requested from client PID 2125 ('systemctl') (unit waagent.service)... Jan 15 23:49:54.276268 systemd[1]: Reloading... Jan 15 23:49:54.346580 zram_generator::config[2164]: No configuration found. Jan 15 23:49:54.498835 systemd[1]: Reloading finished in 222 ms. Jan 15 23:49:54.509869 waagent[2110]: 2026-01-15T23:49:54.509799Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 15 23:49:54.509976 waagent[2110]: 2026-01-15T23:49:54.509950Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 15 23:49:55.210507 waagent[2110]: 2026-01-15T23:49:55.209721Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 15 23:49:55.210507 waagent[2110]: 2026-01-15T23:49:55.210028Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 15 23:49:55.210821 waagent[2110]: 2026-01-15T23:49:55.210706Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 15 23:49:55.210821 waagent[2110]: 2026-01-15T23:49:55.210777Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 15 23:49:55.210962 waagent[2110]: 2026-01-15T23:49:55.210929Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 15 23:49:55.211057 waagent[2110]: 2026-01-15T23:49:55.211011Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 15 23:49:55.211228 waagent[2110]: 2026-01-15T23:49:55.211195Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 15 23:49:55.211228 waagent[2110]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 15 23:49:55.211228 waagent[2110]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 15 23:49:55.211228 waagent[2110]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 15 23:49:55.211228 waagent[2110]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 15 23:49:55.211228 waagent[2110]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 15 23:49:55.211228 waagent[2110]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 15 23:49:55.211621 waagent[2110]: 2026-01-15T23:49:55.211583Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 15 23:49:55.212112 waagent[2110]: 2026-01-15T23:49:55.212079Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 15 23:49:55.212151 waagent[2110]: 2026-01-15T23:49:55.212125Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 15 23:49:55.212219 waagent[2110]: 2026-01-15T23:49:55.212189Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 15 23:49:55.212244 waagent[2110]: 2026-01-15T23:49:55.212227Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 15 23:49:55.212389 waagent[2110]: 2026-01-15T23:49:55.212361Z INFO EnvHandler ExtHandler Configure routes Jan 15 23:49:55.212688 waagent[2110]: 2026-01-15T23:49:55.212653Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 15 23:49:55.212819 waagent[2110]: 2026-01-15T23:49:55.212789Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 15 23:49:55.213092 waagent[2110]: 2026-01-15T23:49:55.213064Z INFO EnvHandler ExtHandler Gateway:None Jan 15 23:49:55.213202 waagent[2110]: 2026-01-15T23:49:55.213182Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 15 23:49:55.213653 waagent[2110]: 2026-01-15T23:49:55.213631Z INFO EnvHandler ExtHandler Routes:None Jan 15 23:49:55.220734 waagent[2110]: 2026-01-15T23:49:55.219449Z INFO ExtHandler ExtHandler Jan 15 23:49:55.220734 waagent[2110]: 2026-01-15T23:49:55.219534Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 07bffe52-ad81-403d-bd92-30020615775d correlation e4677147-16fd-498c-b484-caeea74b6b19 created: 2026-01-15T23:48:56.935913Z] Jan 15 23:49:55.220734 waagent[2110]: 2026-01-15T23:49:55.219797Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
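
[Note] The MonitorHandler dump of /proc/net/route above prints addresses as little-endian hexadecimal, so the Gateway column value 0114C80A is the 10.200.20.1 gateway from the DHCP lease logged earlier. A small bash sketch of the decoding:

    hex=0114C80A   # Gateway column from /proc/net/route
    printf '%d.%d.%d.%d\n' \
      "0x${hex:6:2}" "0x${hex:4:2}" "0x${hex:2:2}" "0x${hex:0:2}"
    # -> 10.200.20.1 (bytes are stored least-significant first)
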
Jan 15 23:49:55.220734 waagent[2110]: 2026-01-15T23:49:55.220185Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jan 15 23:49:55.259349 waagent[2110]: 2026-01-15T23:49:55.259288Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 15 23:49:55.259349 waagent[2110]: Try `iptables -h' or 'iptables --help' for more information.) Jan 15 23:49:55.259732 waagent[2110]: 2026-01-15T23:49:55.259703Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: DEE057DB-3A65-400D-8F6F-6352A58FE07B;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 15 23:49:55.276268 waagent[2110]: 2026-01-15T23:49:55.276202Z INFO MonitorHandler ExtHandler Network interfaces: Jan 15 23:49:55.276268 waagent[2110]: Executing ['ip', '-a', '-o', 'link']: Jan 15 23:49:55.276268 waagent[2110]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 15 23:49:55.276268 waagent[2110]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:b9:66:8a brd ff:ff:ff:ff:ff:ff Jan 15 23:49:55.276268 waagent[2110]: 3: enP63068s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:b9:66:8a brd ff:ff:ff:ff:ff:ff\ altname enP63068p0s2 Jan 15 23:49:55.276268 waagent[2110]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 15 23:49:55.276268 waagent[2110]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 15 23:49:55.276268 waagent[2110]: 2: eth0 inet 10.200.20.29/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 15 23:49:55.276268 waagent[2110]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 15 23:49:55.276268 waagent[2110]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 15 23:49:55.276268 waagent[2110]: 2: eth0 inet6 fe80::7eed:8dff:feb9:668a/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 15 23:49:55.320021 waagent[2110]: 2026-01-15T23:49:55.319283Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 15 23:49:55.320021 waagent[2110]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 15 23:49:55.320021 waagent[2110]: pkts bytes target prot opt in out source destination Jan 15 23:49:55.320021 waagent[2110]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 15 23:49:55.320021 waagent[2110]: pkts bytes target prot opt in out source destination Jan 15 23:49:55.320021 waagent[2110]: Chain OUTPUT (policy ACCEPT 8 packets, 996 bytes) Jan 15 23:49:55.320021 waagent[2110]: pkts bytes target prot opt in out source destination Jan 15 23:49:55.320021 waagent[2110]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 15 23:49:55.320021 waagent[2110]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 15 23:49:55.320021 waagent[2110]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 15 23:49:55.321849 waagent[2110]: 2026-01-15T23:49:55.321813Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 15 23:49:55.321849 waagent[2110]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 15 23:49:55.321849 waagent[2110]: pkts bytes target prot opt in out source 
destination Jan 15 23:49:55.321849 waagent[2110]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 15 23:49:55.321849 waagent[2110]: pkts bytes target prot opt in out source destination Jan 15 23:49:55.321849 waagent[2110]: Chain OUTPUT (policy ACCEPT 8 packets, 996 bytes) Jan 15 23:49:55.321849 waagent[2110]: pkts bytes target prot opt in out source destination Jan 15 23:49:55.321849 waagent[2110]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 15 23:49:55.321849 waagent[2110]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 15 23:49:55.321849 waagent[2110]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 15 23:49:55.322252 waagent[2110]: 2026-01-15T23:49:55.322228Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 15 23:50:01.488669 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 15 23:50:01.490104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 23:50:01.601882 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 23:50:01.608932 (kubelet)[2260]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 23:50:01.705997 kubelet[2260]: E0115 23:50:01.705873 2260 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 23:50:01.708767 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 23:50:01.708885 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 23:50:01.709167 systemd[1]: kubelet.service: Consumed 116ms CPU time, 106.2M memory peak. Jan 15 23:50:11.959280 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 15 23:50:11.960953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 23:50:12.052577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 23:50:12.055573 (kubelet)[2276]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 23:50:12.196332 kubelet[2276]: E0115 23:50:12.196282 2276 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 23:50:12.198607 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 23:50:12.198719 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 23:50:12.199219 systemd[1]: kubelet.service: Consumed 105ms CPU time, 105.2M memory peak. Jan 15 23:50:13.139241 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 15 23:50:13.141200 systemd[1]: Started sshd@0-10.200.20.29:22-10.200.16.10:41848.service - OpenSSH per-connection server daemon (10.200.16.10:41848). 
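
[Note] The earlier waagent WARNING ("Illegal option `--numeric' with this command") comes from asking iptables to list and zero a chain in a single call: the nf_tables frontend treats --zero as the command and then rejects -n. The counters it wanted can be read and reset with two separate invocations (a workaround sketch, not what the agent itself runs):

    iptables -w -t security -L OUTPUT -nxv   # list rules with exact packet/byte counters
    iptables -w -t security -Z OUTPUT        # then zero the chain's counters
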
Jan 15 23:50:13.476643 chronyd[1846]: Selected source PHC0 Jan 15 23:50:13.804840 sshd[2283]: Accepted publickey for core from 10.200.16.10 port 41848 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:50:13.805598 sshd-session[2283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:50:13.809291 systemd-logind[1872]: New session 3 of user core. Jan 15 23:50:13.819783 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 15 23:50:14.218753 systemd[1]: Started sshd@1-10.200.20.29:22-10.200.16.10:41850.service - OpenSSH per-connection server daemon (10.200.16.10:41850). Jan 15 23:50:14.647332 sshd[2289]: Accepted publickey for core from 10.200.16.10 port 41850 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:50:14.648103 sshd-session[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:50:14.651474 systemd-logind[1872]: New session 4 of user core. Jan 15 23:50:14.658790 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 15 23:50:14.956452 sshd[2292]: Connection closed by 10.200.16.10 port 41850 Jan 15 23:50:14.956237 sshd-session[2289]: pam_unix(sshd:session): session closed for user core Jan 15 23:50:14.960270 systemd[1]: sshd@1-10.200.20.29:22-10.200.16.10:41850.service: Deactivated successfully. Jan 15 23:50:14.961950 systemd[1]: session-4.scope: Deactivated successfully. Jan 15 23:50:14.962793 systemd-logind[1872]: Session 4 logged out. Waiting for processes to exit. Jan 15 23:50:14.964668 systemd-logind[1872]: Removed session 4. Jan 15 23:50:15.043690 systemd[1]: Started sshd@2-10.200.20.29:22-10.200.16.10:41860.service - OpenSSH per-connection server daemon (10.200.16.10:41860). Jan 15 23:50:15.506673 sshd[2298]: Accepted publickey for core from 10.200.16.10 port 41860 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:50:15.507791 sshd-session[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:50:15.511263 systemd-logind[1872]: New session 5 of user core. Jan 15 23:50:15.521639 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 15 23:50:15.836607 sshd[2301]: Connection closed by 10.200.16.10 port 41860 Jan 15 23:50:15.837075 sshd-session[2298]: pam_unix(sshd:session): session closed for user core Jan 15 23:50:15.839884 systemd-logind[1872]: Session 5 logged out. Waiting for processes to exit. Jan 15 23:50:15.840003 systemd[1]: sshd@2-10.200.20.29:22-10.200.16.10:41860.service: Deactivated successfully. Jan 15 23:50:15.841391 systemd[1]: session-5.scope: Deactivated successfully. Jan 15 23:50:15.843181 systemd-logind[1872]: Removed session 5. Jan 15 23:50:15.914055 systemd[1]: Started sshd@3-10.200.20.29:22-10.200.16.10:41870.service - OpenSSH per-connection server daemon (10.200.16.10:41870). Jan 15 23:50:16.347822 sshd[2307]: Accepted publickey for core from 10.200.16.10 port 41870 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:50:16.348918 sshd-session[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:50:16.352502 systemd-logind[1872]: New session 6 of user core. Jan 15 23:50:16.359793 systemd[1]: Started session-6.scope - Session 6 of User core. 
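
[Note] chronyd's "Selected source PHC0" above means time is being taken from the PTP hardware clock Hyper-V exposes to the guest, rather than from a network NTP server. Standard chronyc commands confirm the selection:

    chronyc sources    # the '#* PHC0' line marks the selected reference clock
    chronyc tracking   # current offset and frequency relative to it
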
Jan 15 23:50:16.673773 sshd[2310]: Connection closed by 10.200.16.10 port 41870 Jan 15 23:50:16.674476 sshd-session[2307]: pam_unix(sshd:session): session closed for user core Jan 15 23:50:16.678253 systemd[1]: sshd@3-10.200.20.29:22-10.200.16.10:41870.service: Deactivated successfully. Jan 15 23:50:16.680006 systemd[1]: session-6.scope: Deactivated successfully. Jan 15 23:50:16.681319 systemd-logind[1872]: Session 6 logged out. Waiting for processes to exit. Jan 15 23:50:16.682268 systemd-logind[1872]: Removed session 6. Jan 15 23:50:16.752771 systemd[1]: Started sshd@4-10.200.20.29:22-10.200.16.10:41876.service - OpenSSH per-connection server daemon (10.200.16.10:41876). Jan 15 23:50:17.184395 sshd[2316]: Accepted publickey for core from 10.200.16.10 port 41876 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:50:17.185278 sshd-session[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:50:17.189219 systemd-logind[1872]: New session 7 of user core. Jan 15 23:50:17.196652 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 15 23:50:17.556926 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 15 23:50:17.557140 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 23:50:17.597180 sudo[2320]: pam_unix(sudo:session): session closed for user root Jan 15 23:50:17.680751 sshd[2319]: Connection closed by 10.200.16.10 port 41876 Jan 15 23:50:17.680628 sshd-session[2316]: pam_unix(sshd:session): session closed for user core Jan 15 23:50:17.685057 systemd[1]: sshd@4-10.200.20.29:22-10.200.16.10:41876.service: Deactivated successfully. Jan 15 23:50:17.686899 systemd[1]: session-7.scope: Deactivated successfully. Jan 15 23:50:17.688292 systemd-logind[1872]: Session 7 logged out. Waiting for processes to exit. Jan 15 23:50:17.689929 systemd-logind[1872]: Removed session 7. Jan 15 23:50:17.759736 systemd[1]: Started sshd@5-10.200.20.29:22-10.200.16.10:41892.service - OpenSSH per-connection server daemon (10.200.16.10:41892). Jan 15 23:50:18.203236 sshd[2326]: Accepted publickey for core from 10.200.16.10 port 41892 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:50:18.204002 sshd-session[2326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:50:18.207531 systemd-logind[1872]: New session 8 of user core. Jan 15 23:50:18.216611 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 15 23:50:18.446161 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 15 23:50:18.446705 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 23:50:18.461475 sudo[2331]: pam_unix(sudo:session): session closed for user root Jan 15 23:50:18.465772 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 15 23:50:18.465980 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 23:50:18.472854 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 15 23:50:18.504199 augenrules[2353]: No rules Jan 15 23:50:18.505397 systemd[1]: audit-rules.service: Deactivated successfully. Jan 15 23:50:18.505626 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 15 23:50:18.507115 sudo[2330]: pam_unix(sudo:session): session closed for user root Jan 15 23:50:18.591688 sshd[2329]: Connection closed by 10.200.16.10 port 41892 Jan 15 23:50:18.591604 sshd-session[2326]: pam_unix(sshd:session): session closed for user core Jan 15 23:50:18.596045 systemd-logind[1872]: Session 8 logged out. Waiting for processes to exit. Jan 15 23:50:18.596708 systemd[1]: sshd@5-10.200.20.29:22-10.200.16.10:41892.service: Deactivated successfully. Jan 15 23:50:18.598063 systemd[1]: session-8.scope: Deactivated successfully. Jan 15 23:50:18.599511 systemd-logind[1872]: Removed session 8. Jan 15 23:50:18.674121 systemd[1]: Started sshd@6-10.200.20.29:22-10.200.16.10:41894.service - OpenSSH per-connection server daemon (10.200.16.10:41894). Jan 15 23:50:19.140728 sshd[2362]: Accepted publickey for core from 10.200.16.10 port 41894 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:50:19.141788 sshd-session[2362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:50:19.145181 systemd-logind[1872]: New session 9 of user core. Jan 15 23:50:19.152623 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 15 23:50:19.402354 sudo[2366]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 15 23:50:19.402592 sudo[2366]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 23:50:20.640930 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 15 23:50:20.652771 (dockerd)[2383]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 15 23:50:21.542524 dockerd[2383]: time="2026-01-15T23:50:21.541634780Z" level=info msg="Starting up" Jan 15 23:50:21.543633 dockerd[2383]: time="2026-01-15T23:50:21.543550355Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 15 23:50:21.554271 dockerd[2383]: time="2026-01-15T23:50:21.554226789Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 15 23:50:21.665703 systemd[1]: var-lib-docker-metacopy\x2dcheck1489554780-merged.mount: Deactivated successfully. Jan 15 23:50:21.685438 dockerd[2383]: time="2026-01-15T23:50:21.685256286Z" level=info msg="Loading containers: start." Jan 15 23:50:21.723530 kernel: Initializing XFRM netlink socket Jan 15 23:50:22.000332 systemd-networkd[1477]: docker0: Link UP Jan 15 23:50:22.023115 dockerd[2383]: time="2026-01-15T23:50:22.023020039Z" level=info msg="Loading containers: done." 
Jan 15 23:50:22.047162 dockerd[2383]: time="2026-01-15T23:50:22.046849172Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 15 23:50:22.047162 dockerd[2383]: time="2026-01-15T23:50:22.046932595Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 15 23:50:22.047162 dockerd[2383]: time="2026-01-15T23:50:22.047016139Z" level=info msg="Initializing buildkit" Jan 15 23:50:22.141673 dockerd[2383]: time="2026-01-15T23:50:22.141631688Z" level=info msg="Completed buildkit initialization" Jan 15 23:50:22.147042 dockerd[2383]: time="2026-01-15T23:50:22.146992709Z" level=info msg="Daemon has completed initialization" Jan 15 23:50:22.147285 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 15 23:50:22.147775 dockerd[2383]: time="2026-01-15T23:50:22.147731819Z" level=info msg="API listen on /run/docker.sock" Jan 15 23:50:22.227684 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 15 23:50:22.229685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 23:50:22.362524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 23:50:22.365547 (kubelet)[2598]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 23:50:22.440257 kubelet[2598]: E0115 23:50:22.440200 2598 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 23:50:22.442446 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 23:50:22.442572 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 23:50:22.443124 systemd[1]: kubelet.service: Consumed 106ms CPU time, 107.1M memory peak. Jan 15 23:50:23.003572 containerd[1898]: time="2026-01-15T23:50:23.003532174Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 15 23:50:23.933409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3645875930.mount: Deactivated successfully. 
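
[Note] kubelet.service is now on its third scheduled restart, failing identically each time; the start times (23:49:51, 23:50:01, 23:50:12, 23:50:22) point to roughly a 10-second restart interval. The unit file itself is not shown in this log, so the following is only a guess at the policy producing that cadence:

    systemctl cat kubelet.service   # inspect the real unit; the cadence suggests:
    # [Service]
    # Restart=always
    # RestartSec=10
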
Jan 15 23:50:25.096326 containerd[1898]: time="2026-01-15T23:50:25.096267643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:50:25.100788 containerd[1898]: time="2026-01-15T23:50:25.100749493Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982" Jan 15 23:50:25.106236 containerd[1898]: time="2026-01-15T23:50:25.106207745Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:50:25.112120 containerd[1898]: time="2026-01-15T23:50:25.112084263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:50:25.112708 containerd[1898]: time="2026-01-15T23:50:25.112545683Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.108973882s" Jan 15 23:50:25.112708 containerd[1898]: time="2026-01-15T23:50:25.112581332Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 15 23:50:25.113252 containerd[1898]: time="2026-01-15T23:50:25.113231936Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 15 23:50:26.400048 containerd[1898]: time="2026-01-15T23:50:26.399993235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:50:26.404374 containerd[1898]: time="2026-01-15T23:50:26.404037778Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086" Jan 15 23:50:26.408320 containerd[1898]: time="2026-01-15T23:50:26.408280657Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:50:26.413053 containerd[1898]: time="2026-01-15T23:50:26.413023550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:50:26.413649 containerd[1898]: time="2026-01-15T23:50:26.413620520Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.300295931s" Jan 15 23:50:26.413649 containerd[1898]: time="2026-01-15T23:50:26.413649929Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 15 
23:50:26.414242 containerd[1898]: time="2026-01-15T23:50:26.414221930Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 15 23:50:27.618521 containerd[1898]: time="2026-01-15T23:50:27.618404109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:50:27.625587 containerd[1898]: time="2026-01-15T23:50:27.625531185Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747" Jan 15 23:50:27.634255 containerd[1898]: time="2026-01-15T23:50:27.634015552Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:50:27.638911 containerd[1898]: time="2026-01-15T23:50:27.638878690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:50:27.639406 containerd[1898]: time="2026-01-15T23:50:27.639377535Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.225068194s" Jan 15 23:50:27.639406 containerd[1898]: time="2026-01-15T23:50:27.639407449Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 15 23:50:27.639910 containerd[1898]: time="2026-01-15T23:50:27.639831843Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 15 23:50:29.013465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3814159948.mount: Deactivated successfully. 
Jan 15 23:50:29.278597 containerd[1898]: time="2026-01-15T23:50:29.278398387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:50:29.281635 containerd[1898]: time="2026-01-15T23:50:29.281598045Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724" Jan 15 23:50:29.284957 containerd[1898]: time="2026-01-15T23:50:29.284911108Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:50:29.289290 containerd[1898]: time="2026-01-15T23:50:29.289242439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:50:29.289643 containerd[1898]: time="2026-01-15T23:50:29.289481898Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.649626205s" Jan 15 23:50:29.289643 containerd[1898]: time="2026-01-15T23:50:29.289525932Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 15 23:50:29.290010 containerd[1898]: time="2026-01-15T23:50:29.289989752Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 15 23:50:30.027103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount547125664.mount: Deactivated successfully. 
Jan 15 23:50:30.988533 containerd[1898]: time="2026-01-15T23:50:30.988124584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:50:30.994999 containerd[1898]: time="2026-01-15T23:50:30.994963198Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jan 15 23:50:30.998482 containerd[1898]: time="2026-01-15T23:50:30.998453797Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:50:31.005949 containerd[1898]: time="2026-01-15T23:50:31.005916062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:50:31.006679 containerd[1898]: time="2026-01-15T23:50:31.006555162Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.716533248s" Jan 15 23:50:31.006679 containerd[1898]: time="2026-01-15T23:50:31.006590659Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 15 23:50:31.007048 containerd[1898]: time="2026-01-15T23:50:31.007019350Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 15 23:50:31.614765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3720458134.mount: Deactivated successfully. 
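The PullImage / ImageCreate events above are containerd's CRI plugin pulling control-plane images into its dedicated "k8s.io" namespace. A sketch of the same pull through the containerd Go client (github.com/containerd/containerd, v1.x API); the socket path and namespace are assumptions based on containerd defaults and on the namespace=k8s.io fields that appear later in this log:

```go
// Sketch: replicate one of the pulls above via the containerd client API.
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the dedicated "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "registry.k8s.io/coredns/coredns:v1.11.3",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}
```

The repo digest and size printed by containerd in the log come from the same resolve/unpack path this client call exercises.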
Jan 15 23:50:31.646534 containerd[1898]: time="2026-01-15T23:50:31.646203603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 23:50:31.649794 containerd[1898]: time="2026-01-15T23:50:31.649629791Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 15 23:50:31.653517 containerd[1898]: time="2026-01-15T23:50:31.653492453Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 23:50:31.662034 containerd[1898]: time="2026-01-15T23:50:31.661968939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 23:50:31.662540 containerd[1898]: time="2026-01-15T23:50:31.662390013Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 655.340869ms" Jan 15 23:50:31.662540 containerd[1898]: time="2026-01-15T23:50:31.662422206Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 15 23:50:31.663389 containerd[1898]: time="2026-01-15T23:50:31.663361903Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 15 23:50:32.353655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2330145405.mount: Deactivated successfully. Jan 15 23:50:32.477720 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 15 23:50:32.479878 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 23:50:32.575607 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 23:50:32.579125 (kubelet)[2750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 23:50:32.680576 kubelet[2750]: E0115 23:50:32.680419 2750 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 23:50:32.682641 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 23:50:32.682756 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 23:50:32.683023 systemd[1]: kubelet.service: Consumed 108ms CPU time, 107M memory peak. Jan 15 23:50:33.437195 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 15 23:50:35.672526 update_engine[1874]: I20260115 23:50:35.672070 1874 update_attempter.cc:509] Updating boot flags... 
Jan 15 23:50:35.697321 containerd[1898]: time="2026-01-15T23:50:35.696616860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:50:35.698944 containerd[1898]: time="2026-01-15T23:50:35.698909943Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Jan 15 23:50:35.704229 containerd[1898]: time="2026-01-15T23:50:35.704030131Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:50:35.710084 containerd[1898]: time="2026-01-15T23:50:35.710050047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:50:35.711446 containerd[1898]: time="2026-01-15T23:50:35.710954238Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.047561669s" Jan 15 23:50:35.711577 containerd[1898]: time="2026-01-15T23:50:35.711554424Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 15 23:50:37.969116 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 23:50:37.969700 systemd[1]: kubelet.service: Consumed 108ms CPU time, 107M memory peak. Jan 15 23:50:37.971474 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 23:50:37.992534 systemd[1]: Reload requested from client PID 2893 ('systemctl') (unit session-9.scope)... Jan 15 23:50:37.992547 systemd[1]: Reloading... Jan 15 23:50:38.092516 zram_generator::config[2955]: No configuration found. Jan 15 23:50:38.230773 systemd[1]: Reloading finished in 237 ms. Jan 15 23:50:38.267265 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 15 23:50:38.267339 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 15 23:50:38.267839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 23:50:38.267906 systemd[1]: kubelet.service: Consumed 78ms CPU time, 95.1M memory peak. Jan 15 23:50:38.269189 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 23:50:38.505047 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 23:50:38.512750 (kubelet)[3008]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 23:50:38.540891 kubelet[3008]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 23:50:38.542482 kubelet[3008]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 15 23:50:38.542482 kubelet[3008]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 23:50:38.542482 kubelet[3008]: I0115 23:50:38.541333 3008 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 23:50:38.726866 kubelet[3008]: I0115 23:50:38.726830 3008 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 15 23:50:38.727009 kubelet[3008]: I0115 23:50:38.727000 3008 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 23:50:38.727289 kubelet[3008]: I0115 23:50:38.727273 3008 server.go:954] "Client rotation is on, will bootstrap in background" Jan 15 23:50:38.748863 kubelet[3008]: E0115 23:50:38.748815 3008 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.29:6443: connect: connection refused" logger="UnhandledError" Jan 15 23:50:38.750403 kubelet[3008]: I0115 23:50:38.750372 3008 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 23:50:38.755627 kubelet[3008]: I0115 23:50:38.755196 3008 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 15 23:50:38.760568 kubelet[3008]: I0115 23:50:38.760304 3008 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 15 23:50:38.761034 kubelet[3008]: I0115 23:50:38.760997 3008 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 23:50:38.761219 kubelet[3008]: I0115 23:50:38.761091 3008 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-n-5fd64d3fe1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 15 23:50:38.761347 kubelet[3008]: I0115 
23:50:38.761337 3008 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 23:50:38.761402 kubelet[3008]: I0115 23:50:38.761395 3008 container_manager_linux.go:304] "Creating device plugin manager" Jan 15 23:50:38.761591 kubelet[3008]: I0115 23:50:38.761578 3008 state_mem.go:36] "Initialized new in-memory state store" Jan 15 23:50:38.764050 kubelet[3008]: I0115 23:50:38.764033 3008 kubelet.go:446] "Attempting to sync node with API server" Jan 15 23:50:38.764144 kubelet[3008]: I0115 23:50:38.764132 3008 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 23:50:38.764200 kubelet[3008]: I0115 23:50:38.764193 3008 kubelet.go:352] "Adding apiserver pod source" Jan 15 23:50:38.764252 kubelet[3008]: I0115 23:50:38.764244 3008 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 23:50:38.767044 kubelet[3008]: W0115 23:50:38.766910 3008 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-n-5fd64d3fe1&limit=500&resourceVersion=0": dial tcp 10.200.20.29:6443: connect: connection refused Jan 15 23:50:38.767044 kubelet[3008]: E0115 23:50:38.766955 3008 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-n-5fd64d3fe1&limit=500&resourceVersion=0\": dial tcp 10.200.20.29:6443: connect: connection refused" logger="UnhandledError" Jan 15 23:50:38.767472 kubelet[3008]: W0115 23:50:38.767248 3008 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.29:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.29:6443: connect: connection refused Jan 15 23:50:38.767472 kubelet[3008]: E0115 23:50:38.767284 3008 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.29:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.29:6443: connect: connection refused" logger="UnhandledError" Jan 15 23:50:38.767576 kubelet[3008]: I0115 23:50:38.767556 3008 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 15 23:50:38.767892 kubelet[3008]: I0115 23:50:38.767870 3008 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 23:50:38.767932 kubelet[3008]: W0115 23:50:38.767924 3008 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
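Every reflector and CSR failure above is the same symptom: nothing is listening on 10.200.20.29:6443 yet, because the kubelet itself must first start the static kube-apiserver pod it is about to create. A trivial diagnostic sketch; the endpoint is taken directly from the log:

```go
// Sketch: the "connect: connection refused" errors above all reduce to this
// dial failing. Once the static kube-apiserver pod is up, the dial succeeds
// and the reflectors recover on their next retry.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.200.20.29:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```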
Jan 15 23:50:38.769238 kubelet[3008]: I0115 23:50:38.769213 3008 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 15 23:50:38.769301 kubelet[3008]: I0115 23:50:38.769250 3008 server.go:1287] "Started kubelet" Jan 15 23:50:38.774517 kubelet[3008]: E0115 23:50:38.772577 3008 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.29:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.29:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-n-5fd64d3fe1.188b0c83c5a51ebf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-n-5fd64d3fe1,UID:ci-4459.2.2-n-5fd64d3fe1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-n-5fd64d3fe1,},FirstTimestamp:2026-01-15 23:50:38.769233599 +0000 UTC m=+0.253564517,LastTimestamp:2026-01-15 23:50:38.769233599 +0000 UTC m=+0.253564517,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-n-5fd64d3fe1,}" Jan 15 23:50:38.774517 kubelet[3008]: I0115 23:50:38.772742 3008 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 23:50:38.774517 kubelet[3008]: I0115 23:50:38.773017 3008 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 23:50:38.774517 kubelet[3008]: I0115 23:50:38.773074 3008 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 23:50:38.774517 kubelet[3008]: I0115 23:50:38.773185 3008 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 23:50:38.774517 kubelet[3008]: I0115 23:50:38.773655 3008 server.go:479] "Adding debug handlers to kubelet server" Jan 15 23:50:38.776120 kubelet[3008]: I0115 23:50:38.776088 3008 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 15 23:50:38.777970 kubelet[3008]: I0115 23:50:38.777935 3008 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 15 23:50:38.778147 kubelet[3008]: E0115 23:50:38.778131 3008 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 23:50:38.778225 kubelet[3008]: E0115 23:50:38.778198 3008 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-5fd64d3fe1\" not found" Jan 15 23:50:38.779357 kubelet[3008]: I0115 23:50:38.779330 3008 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 15 23:50:38.779415 kubelet[3008]: I0115 23:50:38.779385 3008 reconciler.go:26] "Reconciler: start to sync state" Jan 15 23:50:38.779796 kubelet[3008]: W0115 23:50:38.779760 3008 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.29:6443: connect: connection refused Jan 15 23:50:38.779877 kubelet[3008]: E0115 23:50:38.779799 3008 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.29:6443: connect: connection refused" logger="UnhandledError" Jan 15 23:50:38.779877 kubelet[3008]: E0115 23:50:38.779841 3008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-5fd64d3fe1?timeout=10s\": dial tcp 10.200.20.29:6443: connect: connection refused" interval="200ms" Jan 15 23:50:38.779999 kubelet[3008]: I0115 23:50:38.779980 3008 factory.go:221] Registration of the systemd container factory successfully Jan 15 23:50:38.780064 kubelet[3008]: I0115 23:50:38.780049 3008 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 23:50:38.780887 kubelet[3008]: I0115 23:50:38.780852 3008 factory.go:221] Registration of the containerd container factory successfully Jan 15 23:50:38.800472 kubelet[3008]: I0115 23:50:38.800212 3008 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 15 23:50:38.800472 kubelet[3008]: I0115 23:50:38.800230 3008 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 15 23:50:38.800472 kubelet[3008]: I0115 23:50:38.800249 3008 state_mem.go:36] "Initialized new in-memory state store" Jan 15 23:50:38.879053 kubelet[3008]: E0115 23:50:38.879016 3008 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-5fd64d3fe1\" not found" Jan 15 23:50:38.980221 kubelet[3008]: E0115 23:50:38.980177 3008 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-5fd64d3fe1\" not found" Jan 15 23:50:38.980574 kubelet[3008]: E0115 23:50:38.980544 3008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-5fd64d3fe1?timeout=10s\": dial tcp 10.200.20.29:6443: connect: connection refused" interval="400ms" Jan 15 23:50:39.036581 kubelet[3008]: I0115 23:50:39.036305 3008 policy_none.go:49] "None policy: Start" Jan 15 23:50:39.036581 kubelet[3008]: I0115 23:50:39.036339 3008 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 15 23:50:39.036581 kubelet[3008]: I0115 23:50:39.036352 3008 state_mem.go:35] "Initializing new in-memory state store" 
Jan 15 23:50:39.043699 kubelet[3008]: I0115 23:50:39.043660 3008 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 23:50:39.045200 kubelet[3008]: I0115 23:50:39.044949 3008 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 15 23:50:39.045200 kubelet[3008]: I0115 23:50:39.044972 3008 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 15 23:50:39.045200 kubelet[3008]: I0115 23:50:39.044992 3008 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 15 23:50:39.045200 kubelet[3008]: I0115 23:50:39.044997 3008 kubelet.go:2382] "Starting kubelet main sync loop" Jan 15 23:50:39.045200 kubelet[3008]: E0115 23:50:39.045040 3008 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 23:50:39.047685 kubelet[3008]: W0115 23:50:39.047659 3008 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.29:6443: connect: connection refused Jan 15 23:50:39.047759 kubelet[3008]: E0115 23:50:39.047694 3008 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.29:6443: connect: connection refused" logger="UnhandledError" Jan 15 23:50:39.051655 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 15 23:50:39.060873 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 15 23:50:39.063671 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 15 23:50:39.074133 kubelet[3008]: I0115 23:50:39.074108 3008 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 23:50:39.074449 kubelet[3008]: I0115 23:50:39.074435 3008 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 15 23:50:39.074570 kubelet[3008]: I0115 23:50:39.074538 3008 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 23:50:39.075091 kubelet[3008]: I0115 23:50:39.074857 3008 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 23:50:39.076381 kubelet[3008]: E0115 23:50:39.076365 3008 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 15 23:50:39.076569 kubelet[3008]: E0115 23:50:39.076466 3008 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.2-n-5fd64d3fe1\" not found" Jan 15 23:50:39.155260 systemd[1]: Created slice kubepods-burstable-pod0663975cfeb22367298dd1a11cc0c044.slice - libcontainer container kubepods-burstable-pod0663975cfeb22367298dd1a11cc0c044.slice. 
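The kubepods.slice, kubepods-burstable.slice, and kubepods-besteffort.slice units created above mirror the kubelet's QoS tiers, and each static pod gets a per-pod slice beneath them. With the systemd cgroup driver on cgroup v2 (CgroupDriver "systemd" and CgroupVersion 2 in the container manager config earlier in the log) these appear under /sys/fs/cgroup; a sketch that enumerates them, with the mount point assumed from that default:

```go
// Sketch: walk the QoS and per-pod cgroups behind the slice units above.
package main

import (
	"fmt"
	"io/fs"
	"log"
	"path/filepath"
)

func main() {
	root := "/sys/fs/cgroup/kubepods.slice"
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() {
			// e.g. .../kubepods-burstable.slice/kubepods-burstable-pod<uid>.slice
			fmt.Println(path)
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```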
Jan 15 23:50:39.167271 kubelet[3008]: E0115 23:50:39.167217 3008 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-5fd64d3fe1\" not found" node="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:39.169002 systemd[1]: Created slice kubepods-burstable-pode4712a96fcc68bf297c19c31e5dff3df.slice - libcontainer container kubepods-burstable-pode4712a96fcc68bf297c19c31e5dff3df.slice. Jan 15 23:50:39.170501 kubelet[3008]: E0115 23:50:39.170442 3008 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-5fd64d3fe1\" not found" node="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:39.172693 systemd[1]: Created slice kubepods-burstable-podbbbd2c6aaf08ebf667129cdf35a350dc.slice - libcontainer container kubepods-burstable-podbbbd2c6aaf08ebf667129cdf35a350dc.slice. Jan 15 23:50:39.174024 kubelet[3008]: E0115 23:50:39.173888 3008 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-5fd64d3fe1\" not found" node="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:39.176727 kubelet[3008]: I0115 23:50:39.176712 3008 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:39.177209 kubelet[3008]: E0115 23:50:39.177171 3008 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.29:6443/api/v1/nodes\": dial tcp 10.200.20.29:6443: connect: connection refused" node="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:39.181499 kubelet[3008]: I0115 23:50:39.181452 3008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4712a96fcc68bf297c19c31e5dff3df-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1\" (UID: \"e4712a96fcc68bf297c19c31e5dff3df\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:39.181602 kubelet[3008]: I0115 23:50:39.181479 3008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e4712a96fcc68bf297c19c31e5dff3df-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1\" (UID: \"e4712a96fcc68bf297c19c31e5dff3df\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:39.181807 kubelet[3008]: I0115 23:50:39.181658 3008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bbbd2c6aaf08ebf667129cdf35a350dc-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-n-5fd64d3fe1\" (UID: \"bbbd2c6aaf08ebf667129cdf35a350dc\") " pod="kube-system/kube-scheduler-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:39.181807 kubelet[3008]: I0115 23:50:39.181673 3008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0663975cfeb22367298dd1a11cc0c044-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-n-5fd64d3fe1\" (UID: \"0663975cfeb22367298dd1a11cc0c044\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:39.181807 kubelet[3008]: I0115 23:50:39.181684 3008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0663975cfeb22367298dd1a11cc0c044-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-n-5fd64d3fe1\" 
(UID: \"0663975cfeb22367298dd1a11cc0c044\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:39.181807 kubelet[3008]: I0115 23:50:39.181695 3008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0663975cfeb22367298dd1a11cc0c044-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-n-5fd64d3fe1\" (UID: \"0663975cfeb22367298dd1a11cc0c044\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:39.181807 kubelet[3008]: I0115 23:50:39.181706 3008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4712a96fcc68bf297c19c31e5dff3df-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1\" (UID: \"e4712a96fcc68bf297c19c31e5dff3df\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:39.181900 kubelet[3008]: I0115 23:50:39.181717 3008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e4712a96fcc68bf297c19c31e5dff3df-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1\" (UID: \"e4712a96fcc68bf297c19c31e5dff3df\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:39.181900 kubelet[3008]: I0115 23:50:39.181729 3008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4712a96fcc68bf297c19c31e5dff3df-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1\" (UID: \"e4712a96fcc68bf297c19c31e5dff3df\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:39.380178 kubelet[3008]: I0115 23:50:39.379620 3008 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:39.380178 kubelet[3008]: E0115 23:50:39.379915 3008 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.29:6443/api/v1/nodes\": dial tcp 10.200.20.29:6443: connect: connection refused" node="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:39.381718 kubelet[3008]: E0115 23:50:39.381685 3008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-5fd64d3fe1?timeout=10s\": dial tcp 10.200.20.29:6443: connect: connection refused" interval="800ms" Jan 15 23:50:39.469332 containerd[1898]: time="2026-01-15T23:50:39.468760776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-n-5fd64d3fe1,Uid:0663975cfeb22367298dd1a11cc0c044,Namespace:kube-system,Attempt:0,}" Jan 15 23:50:39.471607 containerd[1898]: time="2026-01-15T23:50:39.471398835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1,Uid:e4712a96fcc68bf297c19c31e5dff3df,Namespace:kube-system,Attempt:0,}" Jan 15 23:50:39.475477 containerd[1898]: time="2026-01-15T23:50:39.475324238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-n-5fd64d3fe1,Uid:bbbd2c6aaf08ebf667129cdf35a350dc,Namespace:kube-system,Attempt:0,}" Jan 15 23:50:39.598720 containerd[1898]: time="2026-01-15T23:50:39.598659330Z" level=info msg="connecting to shim 
69f5fb2a98d268cea7e4b35c6ea41ef4afae96d88dd80119c5c5bda6c90a1f45" address="unix:///run/containerd/s/4b6d39a7194072898403e0a92d6d3b0f8211e0e29ecedcc5d65802a81c410be1" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:50:39.607608 containerd[1898]: time="2026-01-15T23:50:39.607533789Z" level=info msg="connecting to shim 4277c54643ec5135888718e63b7e793d662c0c4e538fd04bc5a549cc88720401" address="unix:///run/containerd/s/7b1840f693f24de406b3c1447fc441359a2afcf15d6502a162208e4ce6dd105e" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:50:39.612039 containerd[1898]: time="2026-01-15T23:50:39.611677210Z" level=info msg="connecting to shim 50bcaf2852650e0d895e1e6843df59961aa0b92eb81c7849275eefccd08a02bb" address="unix:///run/containerd/s/f2491322858ecef6bdf1a76926e97b92e74f11bfd526ba204a17cdd8a0314388" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:50:39.633670 systemd[1]: Started cri-containerd-69f5fb2a98d268cea7e4b35c6ea41ef4afae96d88dd80119c5c5bda6c90a1f45.scope - libcontainer container 69f5fb2a98d268cea7e4b35c6ea41ef4afae96d88dd80119c5c5bda6c90a1f45. Jan 15 23:50:39.637779 systemd[1]: Started cri-containerd-4277c54643ec5135888718e63b7e793d662c0c4e538fd04bc5a549cc88720401.scope - libcontainer container 4277c54643ec5135888718e63b7e793d662c0c4e538fd04bc5a549cc88720401. Jan 15 23:50:39.642387 systemd[1]: Started cri-containerd-50bcaf2852650e0d895e1e6843df59961aa0b92eb81c7849275eefccd08a02bb.scope - libcontainer container 50bcaf2852650e0d895e1e6843df59961aa0b92eb81c7849275eefccd08a02bb. Jan 15 23:50:39.681506 kubelet[3008]: W0115 23:50:39.681172 3008 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.29:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.29:6443: connect: connection refused Jan 15 23:50:39.682499 kubelet[3008]: E0115 23:50:39.682446 3008 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.29:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.29:6443: connect: connection refused" logger="UnhandledError" Jan 15 23:50:39.691689 containerd[1898]: time="2026-01-15T23:50:39.691550286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-n-5fd64d3fe1,Uid:0663975cfeb22367298dd1a11cc0c044,Namespace:kube-system,Attempt:0,} returns sandbox id \"69f5fb2a98d268cea7e4b35c6ea41ef4afae96d88dd80119c5c5bda6c90a1f45\"" Jan 15 23:50:39.695024 containerd[1898]: time="2026-01-15T23:50:39.694987076Z" level=info msg="CreateContainer within sandbox \"69f5fb2a98d268cea7e4b35c6ea41ef4afae96d88dd80119c5c5bda6c90a1f45\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 15 23:50:39.697112 containerd[1898]: time="2026-01-15T23:50:39.697038765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1,Uid:e4712a96fcc68bf297c19c31e5dff3df,Namespace:kube-system,Attempt:0,} returns sandbox id \"4277c54643ec5135888718e63b7e793d662c0c4e538fd04bc5a549cc88720401\"" Jan 15 23:50:39.700342 containerd[1898]: time="2026-01-15T23:50:39.700301452Z" level=info msg="CreateContainer within sandbox \"4277c54643ec5135888718e63b7e793d662c0c4e538fd04bc5a549cc88720401\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 15 23:50:39.705508 containerd[1898]: time="2026-01-15T23:50:39.705361528Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-n-5fd64d3fe1,Uid:bbbd2c6aaf08ebf667129cdf35a350dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"50bcaf2852650e0d895e1e6843df59961aa0b92eb81c7849275eefccd08a02bb\"" Jan 15 23:50:39.708343 containerd[1898]: time="2026-01-15T23:50:39.708083583Z" level=info msg="CreateContainer within sandbox \"50bcaf2852650e0d895e1e6843df59961aa0b92eb81c7849275eefccd08a02bb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 15 23:50:39.735814 containerd[1898]: time="2026-01-15T23:50:39.735765583Z" level=info msg="Container 1d45979d29356d0349e7622c1f769a26c1317783c1b7b7c93bf7e8875b0f6d89: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:50:39.741310 containerd[1898]: time="2026-01-15T23:50:39.741268127Z" level=info msg="Container ec41900f3bff1aba8452e3b16ac98282404879fe8f2da8ccda1b2cc8ef81daf7: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:50:39.748770 containerd[1898]: time="2026-01-15T23:50:39.748728908Z" level=info msg="Container 886e402b5262d79ecdb5b4dc10e90a58dae3a727766148e2b8aab21577e680a0: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:50:39.780538 containerd[1898]: time="2026-01-15T23:50:39.780200961Z" level=info msg="CreateContainer within sandbox \"4277c54643ec5135888718e63b7e793d662c0c4e538fd04bc5a549cc88720401\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1d45979d29356d0349e7622c1f769a26c1317783c1b7b7c93bf7e8875b0f6d89\"" Jan 15 23:50:39.782049 containerd[1898]: time="2026-01-15T23:50:39.782017720Z" level=info msg="StartContainer for \"1d45979d29356d0349e7622c1f769a26c1317783c1b7b7c93bf7e8875b0f6d89\"" Jan 15 23:50:39.782991 kubelet[3008]: I0115 23:50:39.782707 3008 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:39.783179 containerd[1898]: time="2026-01-15T23:50:39.783143673Z" level=info msg="connecting to shim 1d45979d29356d0349e7622c1f769a26c1317783c1b7b7c93bf7e8875b0f6d89" address="unix:///run/containerd/s/7b1840f693f24de406b3c1447fc441359a2afcf15d6502a162208e4ce6dd105e" protocol=ttrpc version=3 Jan 15 23:50:39.783390 kubelet[3008]: E0115 23:50:39.783147 3008 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.29:6443/api/v1/nodes\": dial tcp 10.200.20.29:6443: connect: connection refused" node="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:39.788520 containerd[1898]: time="2026-01-15T23:50:39.788474474Z" level=info msg="CreateContainer within sandbox \"69f5fb2a98d268cea7e4b35c6ea41ef4afae96d88dd80119c5c5bda6c90a1f45\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ec41900f3bff1aba8452e3b16ac98282404879fe8f2da8ccda1b2cc8ef81daf7\"" Jan 15 23:50:39.789343 containerd[1898]: time="2026-01-15T23:50:39.789275173Z" level=info msg="StartContainer for \"ec41900f3bff1aba8452e3b16ac98282404879fe8f2da8ccda1b2cc8ef81daf7\"" Jan 15 23:50:39.790814 containerd[1898]: time="2026-01-15T23:50:39.790768022Z" level=info msg="connecting to shim ec41900f3bff1aba8452e3b16ac98282404879fe8f2da8ccda1b2cc8ef81daf7" address="unix:///run/containerd/s/4b6d39a7194072898403e0a92d6d3b0f8211e0e29ecedcc5d65802a81c410be1" protocol=ttrpc version=3 Jan 15 23:50:39.792473 containerd[1898]: time="2026-01-15T23:50:39.792444159Z" level=info msg="CreateContainer within sandbox \"50bcaf2852650e0d895e1e6843df59961aa0b92eb81c7849275eefccd08a02bb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"886e402b5262d79ecdb5b4dc10e90a58dae3a727766148e2b8aab21577e680a0\"" Jan 15 23:50:39.793688 containerd[1898]: time="2026-01-15T23:50:39.793661660Z" level=info msg="StartContainer for \"886e402b5262d79ecdb5b4dc10e90a58dae3a727766148e2b8aab21577e680a0\"" Jan 15 23:50:39.794363 containerd[1898]: time="2026-01-15T23:50:39.794336921Z" level=info msg="connecting to shim 886e402b5262d79ecdb5b4dc10e90a58dae3a727766148e2b8aab21577e680a0" address="unix:///run/containerd/s/f2491322858ecef6bdf1a76926e97b92e74f11bfd526ba204a17cdd8a0314388" protocol=ttrpc version=3 Jan 15 23:50:39.808641 systemd[1]: Started cri-containerd-1d45979d29356d0349e7622c1f769a26c1317783c1b7b7c93bf7e8875b0f6d89.scope - libcontainer container 1d45979d29356d0349e7622c1f769a26c1317783c1b7b7c93bf7e8875b0f6d89. Jan 15 23:50:39.812342 systemd[1]: Started cri-containerd-886e402b5262d79ecdb5b4dc10e90a58dae3a727766148e2b8aab21577e680a0.scope - libcontainer container 886e402b5262d79ecdb5b4dc10e90a58dae3a727766148e2b8aab21577e680a0. Jan 15 23:50:39.820752 systemd[1]: Started cri-containerd-ec41900f3bff1aba8452e3b16ac98282404879fe8f2da8ccda1b2cc8ef81daf7.scope - libcontainer container ec41900f3bff1aba8452e3b16ac98282404879fe8f2da8ccda1b2cc8ef81daf7. Jan 15 23:50:39.882719 containerd[1898]: time="2026-01-15T23:50:39.882610564Z" level=info msg="StartContainer for \"ec41900f3bff1aba8452e3b16ac98282404879fe8f2da8ccda1b2cc8ef81daf7\" returns successfully" Jan 15 23:50:39.883631 containerd[1898]: time="2026-01-15T23:50:39.883594447Z" level=info msg="StartContainer for \"1d45979d29356d0349e7622c1f769a26c1317783c1b7b7c93bf7e8875b0f6d89\" returns successfully" Jan 15 23:50:39.892978 containerd[1898]: time="2026-01-15T23:50:39.892467458Z" level=info msg="StartContainer for \"886e402b5262d79ecdb5b4dc10e90a58dae3a727766148e2b8aab21577e680a0\" returns successfully" Jan 15 23:50:40.059297 kubelet[3008]: E0115 23:50:40.058836 3008 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-5fd64d3fe1\" not found" node="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:40.066205 kubelet[3008]: E0115 23:50:40.066175 3008 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-5fd64d3fe1\" not found" node="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:40.066514 kubelet[3008]: E0115 23:50:40.066477 3008 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-5fd64d3fe1\" not found" node="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:40.586468 kubelet[3008]: I0115 23:50:40.586437 3008 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:41.066447 kubelet[3008]: E0115 23:50:41.066294 3008 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-5fd64d3fe1\" not found" node="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:41.067287 kubelet[3008]: E0115 23:50:41.067268 3008 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-5fd64d3fe1\" not found" node="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:41.320328 kubelet[3008]: E0115 23:50:41.319981 3008 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.2-n-5fd64d3fe1\" not found" node="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:41.530147 kubelet[3008]: I0115 23:50:41.530104 3008 kubelet_node_status.go:78] "Successfully 
registered node" node="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:41.530147 kubelet[3008]: E0115 23:50:41.530150 3008 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.2.2-n-5fd64d3fe1\": node \"ci-4459.2.2-n-5fd64d3fe1\" not found" Jan 15 23:50:41.582048 kubelet[3008]: I0115 23:50:41.581517 3008 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:41.589073 kubelet[3008]: E0115 23:50:41.588852 3008 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:41.589073 kubelet[3008]: I0115 23:50:41.588879 3008 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:41.591707 kubelet[3008]: E0115 23:50:41.591684 3008 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-n-5fd64d3fe1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:41.592015 kubelet[3008]: I0115 23:50:41.591820 3008 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:41.593235 kubelet[3008]: E0115 23:50:41.593214 3008 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-n-5fd64d3fe1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:41.767954 kubelet[3008]: I0115 23:50:41.767916 3008 apiserver.go:52] "Watching apiserver" Jan 15 23:50:41.779446 kubelet[3008]: I0115 23:50:41.779413 3008 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 15 23:50:42.066270 kubelet[3008]: I0115 23:50:42.066240 3008 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:42.068036 kubelet[3008]: E0115 23:50:42.068001 3008 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-n-5fd64d3fe1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:43.366530 waagent[2110]: 2026-01-15T23:50:43.366375Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 15 23:50:43.373100 waagent[2110]: 2026-01-15T23:50:43.373059Z INFO ExtHandler Jan 15 23:50:43.373177 waagent[2110]: 2026-01-15T23:50:43.373156Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: dbf24fd9-c6ba-42a9-bb09-29fcd0eb014d eTag: 16297162205115905977 source: Fabric] Jan 15 23:50:43.373474 waagent[2110]: 2026-01-15T23:50:43.373447Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 15 23:50:43.373990 waagent[2110]: 2026-01-15T23:50:43.373958Z INFO ExtHandler Jan 15 23:50:43.374032 waagent[2110]: 2026-01-15T23:50:43.374015Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 15 23:50:43.422128 waagent[2110]: 2026-01-15T23:50:43.422078Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 15 23:50:43.479384 waagent[2110]: 2026-01-15T23:50:43.479308Z INFO ExtHandler Downloaded certificate {'thumbprint': '0B451FDAB88FF30CA0D7F2804B4A9C83D3562B65', 'hasPrivateKey': True} Jan 15 23:50:43.479810 waagent[2110]: 2026-01-15T23:50:43.479774Z INFO ExtHandler Fetch goal state completed Jan 15 23:50:43.480102 waagent[2110]: 2026-01-15T23:50:43.480073Z INFO ExtHandler ExtHandler Jan 15 23:50:43.480147 waagent[2110]: 2026-01-15T23:50:43.480129Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: d8b195a5-f319-4d52-b6d6-9de49cf5e2d1 correlation e4677147-16fd-498c-b484-caeea74b6b19 created: 2026-01-15T23:50:33.389366Z] Jan 15 23:50:43.480359 waagent[2110]: 2026-01-15T23:50:43.480335Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 15 23:50:43.480753 waagent[2110]: 2026-01-15T23:50:43.480725Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 15 23:50:43.563512 systemd[1]: Reload requested from client PID 3285 ('systemctl') (unit session-9.scope)... Jan 15 23:50:43.563815 systemd[1]: Reloading... Jan 15 23:50:43.660522 zram_generator::config[3335]: No configuration found. Jan 15 23:50:43.809172 systemd[1]: Reloading finished in 245 ms. Jan 15 23:50:43.831614 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 23:50:43.847508 systemd[1]: kubelet.service: Deactivated successfully. Jan 15 23:50:43.847768 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 23:50:43.847850 systemd[1]: kubelet.service: Consumed 511ms CPU time, 124.9M memory peak. Jan 15 23:50:43.850696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 23:50:43.973830 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 23:50:43.982848 (kubelet)[3396]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 23:50:44.093167 kubelet[3396]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 23:50:44.093167 kubelet[3396]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 15 23:50:44.093167 kubelet[3396]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
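The "Failed creating a mirror pod ... no PriorityClass with name system-node-critical" errors above are transient: system-node-critical is one of the built-in priority classes the API server creates during its own bootstrap, so the kubelet's retries succeed once the apiserver finishes initializing. A sketch that checks for it with client-go; the kubeconfig path is an assumption (the usual kubeadm admin credential):

```go
// Sketch: verify the built-in priority class whose absence caused the
// mirror-pod creation errors earlier in the log.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pc, err := cs.SchedulingV1().PriorityClasses().Get(context.Background(),
		"system-node-critical", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err) // still absent: the apiserver has not finished bootstrapping
	}
	fmt.Println(pc.Name, "=", pc.Value)
}
```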
Jan 15 23:50:44.093167 kubelet[3396]: I0115 23:50:44.092922 3396 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 23:50:44.098700 kubelet[3396]: I0115 23:50:44.098661 3396 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 15 23:50:44.098700 kubelet[3396]: I0115 23:50:44.098692 3396 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 23:50:44.100313 kubelet[3396]: I0115 23:50:44.099292 3396 server.go:954] "Client rotation is on, will bootstrap in background" Jan 15 23:50:44.101769 kubelet[3396]: I0115 23:50:44.101749 3396 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 15 23:50:44.105319 kubelet[3396]: I0115 23:50:44.105294 3396 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 23:50:44.108906 kubelet[3396]: I0115 23:50:44.108890 3396 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 15 23:50:44.112894 kubelet[3396]: I0115 23:50:44.112858 3396 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 15 23:50:44.113510 kubelet[3396]: I0115 23:50:44.113185 3396 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 23:50:44.113510 kubelet[3396]: I0115 23:50:44.113216 3396 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-n-5fd64d3fe1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 15 23:50:44.113510 kubelet[3396]: I0115 23:50:44.113352 3396 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 23:50:44.113510 kubelet[3396]: I0115 23:50:44.113360 3396 container_manager_linux.go:304] "Creating device plugin manager" Jan 15 23:50:44.113686 kubelet[3396]: I0115 23:50:44.113399 3396 state_mem.go:36] "Initialized new in-memory state store" Jan 15 23:50:44.113686 
kubelet[3396]: I0115 23:50:44.113549 3396 kubelet.go:446] "Attempting to sync node with API server" Jan 15 23:50:44.113686 kubelet[3396]: I0115 23:50:44.113563 3396 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 23:50:44.113686 kubelet[3396]: I0115 23:50:44.113583 3396 kubelet.go:352] "Adding apiserver pod source" Jan 15 23:50:44.114075 kubelet[3396]: I0115 23:50:44.114053 3396 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 23:50:44.119252 kubelet[3396]: I0115 23:50:44.119198 3396 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 15 23:50:44.119723 kubelet[3396]: I0115 23:50:44.119701 3396 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 23:50:44.120742 kubelet[3396]: I0115 23:50:44.120713 3396 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 15 23:50:44.120884 kubelet[3396]: I0115 23:50:44.120874 3396 server.go:1287] "Started kubelet" Jan 15 23:50:44.123915 kubelet[3396]: I0115 23:50:44.123893 3396 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 23:50:44.125710 kubelet[3396]: I0115 23:50:44.125252 3396 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 15 23:50:44.125710 kubelet[3396]: I0115 23:50:44.125353 3396 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 23:50:44.127769 kubelet[3396]: I0115 23:50:44.127750 3396 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 15 23:50:44.128089 kubelet[3396]: E0115 23:50:44.128072 3396 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-5fd64d3fe1\" not found" Jan 15 23:50:44.128332 kubelet[3396]: I0115 23:50:44.128320 3396 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 15 23:50:44.128396 kubelet[3396]: I0115 23:50:44.127779 3396 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 23:50:44.128628 kubelet[3396]: I0115 23:50:44.128614 3396 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 23:50:44.128812 kubelet[3396]: I0115 23:50:44.128801 3396 reconciler.go:26] "Reconciler: start to sync state" Jan 15 23:50:44.129952 kubelet[3396]: I0115 23:50:44.129925 3396 server.go:479] "Adding debug handlers to kubelet server" Jan 15 23:50:44.134165 kubelet[3396]: I0115 23:50:44.134102 3396 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 23:50:44.137870 kubelet[3396]: I0115 23:50:44.137573 3396 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 15 23:50:44.137870 kubelet[3396]: I0115 23:50:44.137600 3396 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 15 23:50:44.137870 kubelet[3396]: I0115 23:50:44.137618 3396 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
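[editor's note] The NodeConfig dump above is the state the container manager actually runs with. Rendered as the corresponding config-file fields for readability (values transcribed from the dump and the startup lines; this is a sketch, not the file that was on disk):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                     # "Using cgroup driver ... systemd", CgroupVersion:2
    staticPodPath: /etc/kubernetes/manifests  # "Adding static pod path"
    address: 0.0.0.0                          # "Starting to listen" address/port
    port: 10250
    evictionHard:                             # HardEvictionThresholds from the dump
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"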
Jan 15 23:50:44.137870 kubelet[3396]: I0115 23:50:44.137623 3396 kubelet.go:2382] "Starting kubelet main sync loop" Jan 15 23:50:44.137870 kubelet[3396]: E0115 23:50:44.137665 3396 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 23:50:44.144959 kubelet[3396]: I0115 23:50:44.144927 3396 factory.go:221] Registration of the containerd container factory successfully Jan 15 23:50:44.144959 kubelet[3396]: I0115 23:50:44.144948 3396 factory.go:221] Registration of the systemd container factory successfully Jan 15 23:50:44.145087 kubelet[3396]: I0115 23:50:44.145029 3396 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 23:50:44.182973 kubelet[3396]: I0115 23:50:44.182905 3396 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 15 23:50:44.182973 kubelet[3396]: I0115 23:50:44.182936 3396 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 15 23:50:44.182973 kubelet[3396]: I0115 23:50:44.182964 3396 state_mem.go:36] "Initialized new in-memory state store" Jan 15 23:50:44.183147 kubelet[3396]: I0115 23:50:44.183120 3396 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 15 23:50:44.183147 kubelet[3396]: I0115 23:50:44.183128 3396 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 15 23:50:44.183147 kubelet[3396]: I0115 23:50:44.183142 3396 policy_none.go:49] "None policy: Start" Jan 15 23:50:44.183194 kubelet[3396]: I0115 23:50:44.183149 3396 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 15 23:50:44.183194 kubelet[3396]: I0115 23:50:44.183157 3396 state_mem.go:35] "Initializing new in-memory state store" Jan 15 23:50:44.183239 kubelet[3396]: I0115 23:50:44.183224 3396 state_mem.go:75] "Updated machine memory state" Jan 15 23:50:44.186822 kubelet[3396]: I0115 23:50:44.186798 3396 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 23:50:44.186967 kubelet[3396]: I0115 23:50:44.186949 3396 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 15 23:50:44.187013 kubelet[3396]: I0115 23:50:44.186964 3396 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 23:50:44.187521 kubelet[3396]: I0115 23:50:44.187455 3396 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 23:50:44.188379 kubelet[3396]: E0115 23:50:44.188350 3396 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 15 23:50:44.239528 kubelet[3396]: I0115 23:50:44.238269 3396 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:44.239528 kubelet[3396]: I0115 23:50:44.238343 3396 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:44.239528 kubelet[3396]: I0115 23:50:44.238540 3396 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:44.250184 kubelet[3396]: W0115 23:50:44.250149 3396 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 23:50:44.265810 kubelet[3396]: W0115 23:50:44.265773 3396 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 23:50:44.266360 kubelet[3396]: W0115 23:50:44.266074 3396 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 23:50:44.290045 kubelet[3396]: I0115 23:50:44.289900 3396 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:44.304379 kubelet[3396]: I0115 23:50:44.304328 3396 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:44.304699 kubelet[3396]: I0115 23:50:44.304597 3396 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:44.330361 kubelet[3396]: I0115 23:50:44.330175 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bbbd2c6aaf08ebf667129cdf35a350dc-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-n-5fd64d3fe1\" (UID: \"bbbd2c6aaf08ebf667129cdf35a350dc\") " pod="kube-system/kube-scheduler-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:44.330361 kubelet[3396]: I0115 23:50:44.330212 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4712a96fcc68bf297c19c31e5dff3df-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1\" (UID: \"e4712a96fcc68bf297c19c31e5dff3df\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:44.330361 kubelet[3396]: I0115 23:50:44.330228 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e4712a96fcc68bf297c19c31e5dff3df-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1\" (UID: \"e4712a96fcc68bf297c19c31e5dff3df\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:44.330361 kubelet[3396]: I0115 23:50:44.330238 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4712a96fcc68bf297c19c31e5dff3df-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1\" (UID: \"e4712a96fcc68bf297c19c31e5dff3df\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:44.330361 kubelet[3396]: I0115 23:50:44.330255 3396 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4712a96fcc68bf297c19c31e5dff3df-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1\" (UID: \"e4712a96fcc68bf297c19c31e5dff3df\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:44.330619 kubelet[3396]: I0115 23:50:44.330265 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0663975cfeb22367298dd1a11cc0c044-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-n-5fd64d3fe1\" (UID: \"0663975cfeb22367298dd1a11cc0c044\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:44.330619 kubelet[3396]: I0115 23:50:44.330280 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0663975cfeb22367298dd1a11cc0c044-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-n-5fd64d3fe1\" (UID: \"0663975cfeb22367298dd1a11cc0c044\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:44.330619 kubelet[3396]: I0115 23:50:44.330289 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0663975cfeb22367298dd1a11cc0c044-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-n-5fd64d3fe1\" (UID: \"0663975cfeb22367298dd1a11cc0c044\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:44.330619 kubelet[3396]: I0115 23:50:44.330300 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e4712a96fcc68bf297c19c31e5dff3df-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1\" (UID: \"e4712a96fcc68bf297c19c31e5dff3df\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:45.119188 kubelet[3396]: I0115 23:50:45.119106 3396 apiserver.go:52] "Watching apiserver" Jan 15 23:50:45.129208 kubelet[3396]: I0115 23:50:45.129157 3396 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 15 23:50:45.173142 kubelet[3396]: I0115 23:50:45.173100 3396 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:45.181771 kubelet[3396]: W0115 23:50:45.181515 3396 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 23:50:45.181771 kubelet[3396]: E0115 23:50:45.181579 3396 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-n-5fd64d3fe1\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:50:45.210241 kubelet[3396]: I0115 23:50:45.210101 3396 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.2-n-5fd64d3fe1" podStartSLOduration=1.2100851 podStartE2EDuration="1.2100851s" podCreationTimestamp="2026-01-15 23:50:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:50:45.199452788 +0000 UTC m=+1.210881474" watchObservedRunningTime="2026-01-15 23:50:45.2100851 +0000 UTC m=+1.221513786" Jan 15 
23:50:45.220674 kubelet[3396]: I0115 23:50:45.220071 3396 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.2-n-5fd64d3fe1" podStartSLOduration=1.220056239 podStartE2EDuration="1.220056239s" podCreationTimestamp="2026-01-15 23:50:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:50:45.210011112 +0000 UTC m=+1.221439798" watchObservedRunningTime="2026-01-15 23:50:45.220056239 +0000 UTC m=+1.231484933" Jan 15 23:50:45.236513 kubelet[3396]: I0115 23:50:45.236331 3396 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-5fd64d3fe1" podStartSLOduration=1.236312428 podStartE2EDuration="1.236312428s" podCreationTimestamp="2026-01-15 23:50:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:50:45.22100352 +0000 UTC m=+1.232432214" watchObservedRunningTime="2026-01-15 23:50:45.236312428 +0000 UTC m=+1.247741146" Jan 15 23:50:49.552991 kubelet[3396]: I0115 23:50:49.552954 3396 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 15 23:50:49.555376 containerd[1898]: time="2026-01-15T23:50:49.555184485Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 15 23:50:49.556232 kubelet[3396]: I0115 23:50:49.555877 3396 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 15 23:50:50.536671 kubelet[3396]: I0115 23:50:50.536622 3396 status_manager.go:890] "Failed to get status for pod" podUID="c9032264-db30-4ddb-a454-3a08b6f1855d" pod="kube-system/kube-proxy-wxtdm" err="pods \"kube-proxy-wxtdm\" is forbidden: User \"system:node:ci-4459.2.2-n-5fd64d3fe1\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.2.2-n-5fd64d3fe1' and this object" Jan 15 23:50:50.536808 kubelet[3396]: W0115 23:50:50.536708 3396 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4459.2.2-n-5fd64d3fe1" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459.2.2-n-5fd64d3fe1' and this object Jan 15 23:50:50.536808 kubelet[3396]: E0115 23:50:50.536733 3396 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4459.2.2-n-5fd64d3fe1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.2.2-n-5fd64d3fe1' and this object" logger="UnhandledError" Jan 15 23:50:50.543077 systemd[1]: Created slice kubepods-besteffort-podc9032264_db30_4ddb_a454_3a08b6f1855d.slice - libcontainer container kubepods-besteffort-podc9032264_db30_4ddb_a454_3a08b6f1855d.slice. 
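[editor's note] The "no relationship found between node ... and this object" failures above come from the node authorizer: a kubelet may only read ConfigMaps and Secrets referenced by pods already bound to it, and kube-proxy-wxtdm has only just been scheduled, so the pod-to-node edge is presumably not yet in the authorizer's graph (the errors clear on their own once the binding is observed). The object in question is the kube-proxy ConfigMap the pod mounts, visible in the volume list below; its shape, reconstructed from those entries:

    # the pod volume whose reference eventually grants the kubelet access
    volumes:
      - name: kube-proxy
        configMap:
          name: kube-proxy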
Jan 15 23:50:50.566524 kubelet[3396]: I0115 23:50:50.566469 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9032264-db30-4ddb-a454-3a08b6f1855d-lib-modules\") pod \"kube-proxy-wxtdm\" (UID: \"c9032264-db30-4ddb-a454-3a08b6f1855d\") " pod="kube-system/kube-proxy-wxtdm" Jan 15 23:50:50.567025 kubelet[3396]: I0115 23:50:50.566940 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9032264-db30-4ddb-a454-3a08b6f1855d-xtables-lock\") pod \"kube-proxy-wxtdm\" (UID: \"c9032264-db30-4ddb-a454-3a08b6f1855d\") " pod="kube-system/kube-proxy-wxtdm" Jan 15 23:50:50.567025 kubelet[3396]: I0115 23:50:50.566970 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c9032264-db30-4ddb-a454-3a08b6f1855d-kube-proxy\") pod \"kube-proxy-wxtdm\" (UID: \"c9032264-db30-4ddb-a454-3a08b6f1855d\") " pod="kube-system/kube-proxy-wxtdm" Jan 15 23:50:50.567025 kubelet[3396]: I0115 23:50:50.566982 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mlnx\" (UniqueName: \"kubernetes.io/projected/c9032264-db30-4ddb-a454-3a08b6f1855d-kube-api-access-4mlnx\") pod \"kube-proxy-wxtdm\" (UID: \"c9032264-db30-4ddb-a454-3a08b6f1855d\") " pod="kube-system/kube-proxy-wxtdm" Jan 15 23:50:50.703648 systemd[1]: Created slice kubepods-besteffort-pod560c1849_c448_4f78_819e_be29891e1036.slice - libcontainer container kubepods-besteffort-pod560c1849_c448_4f78_819e_be29891e1036.slice. Jan 15 23:50:50.769367 kubelet[3396]: I0115 23:50:50.769283 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kz58\" (UniqueName: \"kubernetes.io/projected/560c1849-c448-4f78-819e-be29891e1036-kube-api-access-9kz58\") pod \"tigera-operator-7dcd859c48-jb54q\" (UID: \"560c1849-c448-4f78-819e-be29891e1036\") " pod="tigera-operator/tigera-operator-7dcd859c48-jb54q" Jan 15 23:50:50.769623 kubelet[3396]: I0115 23:50:50.769426 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/560c1849-c448-4f78-819e-be29891e1036-var-lib-calico\") pod \"tigera-operator-7dcd859c48-jb54q\" (UID: \"560c1849-c448-4f78-819e-be29891e1036\") " pod="tigera-operator/tigera-operator-7dcd859c48-jb54q" Jan 15 23:50:51.007041 containerd[1898]: time="2026-01-15T23:50:51.007002944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jb54q,Uid:560c1849-c448-4f78-819e-be29891e1036,Namespace:tigera-operator,Attempt:0,}" Jan 15 23:50:51.053719 containerd[1898]: time="2026-01-15T23:50:51.053598301Z" level=info msg="connecting to shim 0fdfe3aa9242b20fc05ee8f5119ff88025c59de6e964fe8097294a5f1585f444" address="unix:///run/containerd/s/300a9ddb1c458f26f4ffd41278650f92776412e06660297bfc9f49ba9654be83" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:50:51.081649 systemd[1]: Started cri-containerd-0fdfe3aa9242b20fc05ee8f5119ff88025c59de6e964fe8097294a5f1585f444.scope - libcontainer container 0fdfe3aa9242b20fc05ee8f5119ff88025c59de6e964fe8097294a5f1585f444. 
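[editor's note] kube-api-access-4mlnx above is the auto-generated service-account volume. For reference, these projected volumes conventionally carry a bound token, the cluster CA bundle, and the namespace; the snippet below shows the standard generated shape with its usual values, not something read from this node:

    volumes:
      - name: kube-api-access-4mlnx
        projected:
          defaultMode: 420
          sources:
            - serviceAccountToken:
                expirationSeconds: 3607
                path: token
            - configMap:
                name: kube-root-ca.crt
                items:
                  - key: ca.crt
                    path: ca.crt
            - downwardAPI:
                items:
                  - path: namespace
                    fieldRef:
                      fieldPath: metadata.namespace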
Jan 15 23:50:51.116612 containerd[1898]: time="2026-01-15T23:50:51.116556786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jb54q,Uid:560c1849-c448-4f78-819e-be29891e1036,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0fdfe3aa9242b20fc05ee8f5119ff88025c59de6e964fe8097294a5f1585f444\"" Jan 15 23:50:51.118590 containerd[1898]: time="2026-01-15T23:50:51.118550136Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 15 23:50:51.457089 containerd[1898]: time="2026-01-15T23:50:51.456961352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wxtdm,Uid:c9032264-db30-4ddb-a454-3a08b6f1855d,Namespace:kube-system,Attempt:0,}" Jan 15 23:50:51.499481 containerd[1898]: time="2026-01-15T23:50:51.499325503Z" level=info msg="connecting to shim d5b561176373f904f29cb3efcb47749769472934ffdfe2befeb23c0916cb85d6" address="unix:///run/containerd/s/8e9f579624f4932286f8c0b6c1a7a05190ba327eade7fc255ebe04bca1ab3df3" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:50:51.520686 systemd[1]: Started cri-containerd-d5b561176373f904f29cb3efcb47749769472934ffdfe2befeb23c0916cb85d6.scope - libcontainer container d5b561176373f904f29cb3efcb47749769472934ffdfe2befeb23c0916cb85d6. Jan 15 23:50:51.544123 containerd[1898]: time="2026-01-15T23:50:51.543994673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wxtdm,Uid:c9032264-db30-4ddb-a454-3a08b6f1855d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5b561176373f904f29cb3efcb47749769472934ffdfe2befeb23c0916cb85d6\"" Jan 15 23:50:51.547471 containerd[1898]: time="2026-01-15T23:50:51.547430469Z" level=info msg="CreateContainer within sandbox \"d5b561176373f904f29cb3efcb47749769472934ffdfe2befeb23c0916cb85d6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 15 23:50:51.571519 containerd[1898]: time="2026-01-15T23:50:51.570601850Z" level=info msg="Container 3a00175afad9282151bf05c33a9cd5cc8d3fe65c001a52cd2c197a14bd97f958: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:50:51.592475 containerd[1898]: time="2026-01-15T23:50:51.592426589Z" level=info msg="CreateContainer within sandbox \"d5b561176373f904f29cb3efcb47749769472934ffdfe2befeb23c0916cb85d6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3a00175afad9282151bf05c33a9cd5cc8d3fe65c001a52cd2c197a14bd97f958\"" Jan 15 23:50:51.594936 containerd[1898]: time="2026-01-15T23:50:51.594901568Z" level=info msg="StartContainer for \"3a00175afad9282151bf05c33a9cd5cc8d3fe65c001a52cd2c197a14bd97f958\"" Jan 15 23:50:51.596140 containerd[1898]: time="2026-01-15T23:50:51.596115260Z" level=info msg="connecting to shim 3a00175afad9282151bf05c33a9cd5cc8d3fe65c001a52cd2c197a14bd97f958" address="unix:///run/containerd/s/8e9f579624f4932286f8c0b6c1a7a05190ba327eade7fc255ebe04bca1ab3df3" protocol=ttrpc version=3 Jan 15 23:50:51.612648 systemd[1]: Started cri-containerd-3a00175afad9282151bf05c33a9cd5cc8d3fe65c001a52cd2c197a14bd97f958.scope - libcontainer container 3a00175afad9282151bf05c33a9cd5cc8d3fe65c001a52cd2c197a14bd97f958. 
Jan 15 23:50:51.691121 containerd[1898]: time="2026-01-15T23:50:51.690882561Z" level=info msg="StartContainer for \"3a00175afad9282151bf05c33a9cd5cc8d3fe65c001a52cd2c197a14bd97f958\" returns successfully" Jan 15 23:50:52.339403 kubelet[3396]: I0115 23:50:52.339342 3396 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wxtdm" podStartSLOduration=2.33932087 podStartE2EDuration="2.33932087s" podCreationTimestamp="2026-01-15 23:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:50:52.197511424 +0000 UTC m=+8.208940110" watchObservedRunningTime="2026-01-15 23:50:52.33932087 +0000 UTC m=+8.350749564" Jan 15 23:50:52.776196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount869925208.mount: Deactivated successfully. Jan 15 23:50:53.824237 containerd[1898]: time="2026-01-15T23:50:53.823671497Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:50:53.827456 containerd[1898]: time="2026-01-15T23:50:53.827422097Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 15 23:50:53.831589 containerd[1898]: time="2026-01-15T23:50:53.831562353Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:50:53.836055 containerd[1898]: time="2026-01-15T23:50:53.836025463Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:50:53.836423 containerd[1898]: time="2026-01-15T23:50:53.836395495Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.71780403s" Jan 15 23:50:53.836423 containerd[1898]: time="2026-01-15T23:50:53.836424736Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 15 23:50:53.838667 containerd[1898]: time="2026-01-15T23:50:53.838633734Z" level=info msg="CreateContainer within sandbox \"0fdfe3aa9242b20fc05ee8f5119ff88025c59de6e964fe8097294a5f1585f444\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 15 23:50:53.865514 containerd[1898]: time="2026-01-15T23:50:53.864421560Z" level=info msg="Container 3d8fb352e29b65a971b6839fcc4ac786041ba1ead29cf41249aa26ed8ce40bff: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:50:53.864698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2456799960.mount: Deactivated successfully. 
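[editor's note] The tigera/operator pull above reports 22,147,999 bytes fetched in 2.71780403 s, an effective rate of roughly

    22147999 bytes / 2.71780403 s ≈ 8.15 MB/s

The slightly larger "bytes read=22152004" counter a few lines earlier presumably also includes the manifest and config fetches, not just the layers; that is an inference from the two counters, not something the log states.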
Jan 15 23:50:53.883152 containerd[1898]: time="2026-01-15T23:50:53.883058298Z" level=info msg="CreateContainer within sandbox \"0fdfe3aa9242b20fc05ee8f5119ff88025c59de6e964fe8097294a5f1585f444\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3d8fb352e29b65a971b6839fcc4ac786041ba1ead29cf41249aa26ed8ce40bff\"" Jan 15 23:50:53.884476 containerd[1898]: time="2026-01-15T23:50:53.884442117Z" level=info msg="StartContainer for \"3d8fb352e29b65a971b6839fcc4ac786041ba1ead29cf41249aa26ed8ce40bff\"" Jan 15 23:50:53.886986 containerd[1898]: time="2026-01-15T23:50:53.886932391Z" level=info msg="connecting to shim 3d8fb352e29b65a971b6839fcc4ac786041ba1ead29cf41249aa26ed8ce40bff" address="unix:///run/containerd/s/300a9ddb1c458f26f4ffd41278650f92776412e06660297bfc9f49ba9654be83" protocol=ttrpc version=3 Jan 15 23:50:53.904629 systemd[1]: Started cri-containerd-3d8fb352e29b65a971b6839fcc4ac786041ba1ead29cf41249aa26ed8ce40bff.scope - libcontainer container 3d8fb352e29b65a971b6839fcc4ac786041ba1ead29cf41249aa26ed8ce40bff. Jan 15 23:50:53.931592 containerd[1898]: time="2026-01-15T23:50:53.931554226Z" level=info msg="StartContainer for \"3d8fb352e29b65a971b6839fcc4ac786041ba1ead29cf41249aa26ed8ce40bff\" returns successfully" Jan 15 23:50:54.839098 kubelet[3396]: I0115 23:50:54.839024 3396 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-jb54q" podStartSLOduration=2.119758153 podStartE2EDuration="4.839007236s" podCreationTimestamp="2026-01-15 23:50:50 +0000 UTC" firstStartedPulling="2026-01-15 23:50:51.118061003 +0000 UTC m=+7.129489689" lastFinishedPulling="2026-01-15 23:50:53.837310086 +0000 UTC m=+9.848738772" observedRunningTime="2026-01-15 23:50:54.20922676 +0000 UTC m=+10.220655454" watchObservedRunningTime="2026-01-15 23:50:54.839007236 +0000 UTC m=+10.850435930" Jan 15 23:50:59.113614 sudo[2366]: pam_unix(sudo:session): session closed for user root Jan 15 23:50:59.188596 sshd[2365]: Connection closed by 10.200.16.10 port 41894 Jan 15 23:50:59.189711 sshd-session[2362]: pam_unix(sshd:session): session closed for user core Jan 15 23:50:59.194533 systemd[1]: sshd@6-10.200.20.29:22-10.200.16.10:41894.service: Deactivated successfully. Jan 15 23:50:59.200801 systemd[1]: session-9.scope: Deactivated successfully. Jan 15 23:50:59.202104 systemd[1]: session-9.scope: Consumed 3.059s CPU time, 224.8M memory peak. Jan 15 23:50:59.205213 systemd-logind[1872]: Session 9 logged out. Waiting for processes to exit. Jan 15 23:50:59.208476 systemd-logind[1872]: Removed session 9. Jan 15 23:51:05.893014 systemd[1]: Created slice kubepods-besteffort-pode76a9956_0e2f_4ee8_a92c_b82ec39f3e06.slice - libcontainer container kubepods-besteffort-pode76a9956_0e2f_4ee8_a92c_b82ec39f3e06.slice. 
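[editor's note] kubepods-besteffort-pode76a9956_0e2f_4ee8_a92c_b82ec39f3e06.slice above is the systemd-escaped slice name for the typha pod (dashes in the pod UID become underscores). With the systemd cgroup driver and cgroup v2, both confirmed in the NodeConfig dump earlier, the slice should materialize under the standard nesting, assumed here:

    /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode76a9956_0e2f_4ee8_a92c_b82ec39f3e06.slice/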
Jan 15 23:51:05.955964 kubelet[3396]: I0115 23:51:05.955920 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mddnq\" (UniqueName: \"kubernetes.io/projected/e76a9956-0e2f-4ee8-a92c-b82ec39f3e06-kube-api-access-mddnq\") pod \"calico-typha-66bb67fd95-bw28q\" (UID: \"e76a9956-0e2f-4ee8-a92c-b82ec39f3e06\") " pod="calico-system/calico-typha-66bb67fd95-bw28q" Jan 15 23:51:05.956532 kubelet[3396]: I0115 23:51:05.956425 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e76a9956-0e2f-4ee8-a92c-b82ec39f3e06-tigera-ca-bundle\") pod \"calico-typha-66bb67fd95-bw28q\" (UID: \"e76a9956-0e2f-4ee8-a92c-b82ec39f3e06\") " pod="calico-system/calico-typha-66bb67fd95-bw28q" Jan 15 23:51:05.956532 kubelet[3396]: I0115 23:51:05.956467 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e76a9956-0e2f-4ee8-a92c-b82ec39f3e06-typha-certs\") pod \"calico-typha-66bb67fd95-bw28q\" (UID: \"e76a9956-0e2f-4ee8-a92c-b82ec39f3e06\") " pod="calico-system/calico-typha-66bb67fd95-bw28q" Jan 15 23:51:06.078168 systemd[1]: Created slice kubepods-besteffort-pod69694cda_d8ff_4381_ad20_e85637f6e94b.slice - libcontainer container kubepods-besteffort-pod69694cda_d8ff_4381_ad20_e85637f6e94b.slice. Jan 15 23:51:06.158131 kubelet[3396]: I0115 23:51:06.158014 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/69694cda-d8ff-4381-ad20-e85637f6e94b-node-certs\") pod \"calico-node-dhdvm\" (UID: \"69694cda-d8ff-4381-ad20-e85637f6e94b\") " pod="calico-system/calico-node-dhdvm" Jan 15 23:51:06.158517 kubelet[3396]: I0115 23:51:06.158473 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/69694cda-d8ff-4381-ad20-e85637f6e94b-cni-log-dir\") pod \"calico-node-dhdvm\" (UID: \"69694cda-d8ff-4381-ad20-e85637f6e94b\") " pod="calico-system/calico-node-dhdvm" Jan 15 23:51:06.158638 kubelet[3396]: I0115 23:51:06.158615 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/69694cda-d8ff-4381-ad20-e85637f6e94b-var-lib-calico\") pod \"calico-node-dhdvm\" (UID: \"69694cda-d8ff-4381-ad20-e85637f6e94b\") " pod="calico-system/calico-node-dhdvm" Jan 15 23:51:06.158721 kubelet[3396]: I0115 23:51:06.158651 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/69694cda-d8ff-4381-ad20-e85637f6e94b-flexvol-driver-host\") pod \"calico-node-dhdvm\" (UID: \"69694cda-d8ff-4381-ad20-e85637f6e94b\") " pod="calico-system/calico-node-dhdvm" Jan 15 23:51:06.158721 kubelet[3396]: I0115 23:51:06.158678 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69694cda-d8ff-4381-ad20-e85637f6e94b-lib-modules\") pod \"calico-node-dhdvm\" (UID: \"69694cda-d8ff-4381-ad20-e85637f6e94b\") " pod="calico-system/calico-node-dhdvm" Jan 15 23:51:06.158721 kubelet[3396]: I0115 23:51:06.158705 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" 
(UniqueName: \"kubernetes.io/host-path/69694cda-d8ff-4381-ad20-e85637f6e94b-policysync\") pod \"calico-node-dhdvm\" (UID: \"69694cda-d8ff-4381-ad20-e85637f6e94b\") " pod="calico-system/calico-node-dhdvm" Jan 15 23:51:06.158721 kubelet[3396]: I0115 23:51:06.158717 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69694cda-d8ff-4381-ad20-e85637f6e94b-tigera-ca-bundle\") pod \"calico-node-dhdvm\" (UID: \"69694cda-d8ff-4381-ad20-e85637f6e94b\") " pod="calico-system/calico-node-dhdvm" Jan 15 23:51:06.158721 kubelet[3396]: I0115 23:51:06.158729 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69694cda-d8ff-4381-ad20-e85637f6e94b-xtables-lock\") pod \"calico-node-dhdvm\" (UID: \"69694cda-d8ff-4381-ad20-e85637f6e94b\") " pod="calico-system/calico-node-dhdvm" Jan 15 23:51:06.158913 kubelet[3396]: I0115 23:51:06.158741 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/69694cda-d8ff-4381-ad20-e85637f6e94b-cni-bin-dir\") pod \"calico-node-dhdvm\" (UID: \"69694cda-d8ff-4381-ad20-e85637f6e94b\") " pod="calico-system/calico-node-dhdvm" Jan 15 23:51:06.158913 kubelet[3396]: I0115 23:51:06.158766 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/69694cda-d8ff-4381-ad20-e85637f6e94b-cni-net-dir\") pod \"calico-node-dhdvm\" (UID: \"69694cda-d8ff-4381-ad20-e85637f6e94b\") " pod="calico-system/calico-node-dhdvm" Jan 15 23:51:06.158913 kubelet[3396]: I0115 23:51:06.158821 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v94fn\" (UniqueName: \"kubernetes.io/projected/69694cda-d8ff-4381-ad20-e85637f6e94b-kube-api-access-v94fn\") pod \"calico-node-dhdvm\" (UID: \"69694cda-d8ff-4381-ad20-e85637f6e94b\") " pod="calico-system/calico-node-dhdvm" Jan 15 23:51:06.158913 kubelet[3396]: I0115 23:51:06.158845 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/69694cda-d8ff-4381-ad20-e85637f6e94b-var-run-calico\") pod \"calico-node-dhdvm\" (UID: \"69694cda-d8ff-4381-ad20-e85637f6e94b\") " pod="calico-system/calico-node-dhdvm" Jan 15 23:51:06.196944 containerd[1898]: time="2026-01-15T23:51:06.196673719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66bb67fd95-bw28q,Uid:e76a9956-0e2f-4ee8-a92c-b82ec39f3e06,Namespace:calico-system,Attempt:0,}" Jan 15 23:51:06.260759 containerd[1898]: time="2026-01-15T23:51:06.260321482Z" level=info msg="connecting to shim 1cf47e4b8f13330a14ae1f0914b8a0bef8f04947bd87f1a9fc034bfabbd07e4b" address="unix:///run/containerd/s/ee9bff095e31f058b010698453fff4dbe1c33bac20b165a7f4acc8dd0449e345" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:51:06.264713 kubelet[3396]: E0115 23:51:06.264678 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.264713 kubelet[3396]: W0115 23:51:06.264700 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 
23:51:06.264846 kubelet[3396]: E0115 23:51:06.264730 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 15 23:51:06.273910 kubelet[3396]: E0115 23:51:06.271543 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.273910 kubelet[3396]: W0115 23:51:06.271557 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:51:06.273910 kubelet[3396]: E0115 23:51:06.271569 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:51:06.277559 kubelet[3396]: E0115 23:51:06.277537 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.278384 kubelet[3396]: W0115 23:51:06.278206 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:51:06.278384 kubelet[3396]: E0115 23:51:06.278231 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:51:06.286318 kubelet[3396]: E0115 23:51:06.286275 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.286318 kubelet[3396]: W0115 23:51:06.286312 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:51:06.286424 kubelet[3396]: E0115 23:51:06.286330 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:51:06.301091 kubelet[3396]: E0115 23:51:06.300720 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjz87" podUID="51d6de9e-9409-4d26-91e2-95ebd2fa7a0f" Jan 15 23:51:06.302647 systemd[1]: Started cri-containerd-1cf47e4b8f13330a14ae1f0914b8a0bef8f04947bd87f1a9fc034bfabbd07e4b.scope - libcontainer container 1cf47e4b8f13330a14ae1f0914b8a0bef8f04947bd87f1a9fc034bfabbd07e4b. Jan 15 23:51:06.353473 kubelet[3396]: E0115 23:51:06.353351 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.353473 kubelet[3396]: W0115 23:51:06.353376 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:51:06.353473 kubelet[3396]: E0115 23:51:06.353399 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:51:06.357693 kubelet[3396]: E0115 23:51:06.357217 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.357751 containerd[1898]: time="2026-01-15T23:51:06.357243168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66bb67fd95-bw28q,Uid:e76a9956-0e2f-4ee8-a92c-b82ec39f3e06,Namespace:calico-system,Attempt:0,} returns sandbox id \"1cf47e4b8f13330a14ae1f0914b8a0bef8f04947bd87f1a9fc034bfabbd07e4b\"" Jan 15 23:51:06.357909 kubelet[3396]: W0115 23:51:06.357810 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:51:06.357909 kubelet[3396]: E0115 23:51:06.357832 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:51:06.358093 kubelet[3396]: E0115 23:51:06.358083 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.358241 kubelet[3396]: W0115 23:51:06.358146 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:51:06.358241 kubelet[3396]: E0115 23:51:06.358161 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:51:06.358953 kubelet[3396]: E0115 23:51:06.358848 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.358953 kubelet[3396]: W0115 23:51:06.358861 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:51:06.358953 kubelet[3396]: E0115 23:51:06.358879 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:51:06.359224 containerd[1898]: time="2026-01-15T23:51:06.359201916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 15 23:51:06.359662 kubelet[3396]: E0115 23:51:06.359628 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.359829 kubelet[3396]: W0115 23:51:06.359729 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:51:06.359829 kubelet[3396]: E0115 23:51:06.359746 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:51:06.361572 kubelet[3396]: E0115 23:51:06.361561 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.361819 kubelet[3396]: W0115 23:51:06.361627 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:51:06.361819 kubelet[3396]: E0115 23:51:06.361641 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:51:06.362006 kubelet[3396]: E0115 23:51:06.361994 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.362065 kubelet[3396]: W0115 23:51:06.362056 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:51:06.362113 kubelet[3396]: E0115 23:51:06.362102 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:51:06.362181 kubelet[3396]: I0115 23:51:06.362169 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68t72\" (UniqueName: \"kubernetes.io/projected/51d6de9e-9409-4d26-91e2-95ebd2fa7a0f-kube-api-access-68t72\") pod \"csi-node-driver-jjz87\" (UID: \"51d6de9e-9409-4d26-91e2-95ebd2fa7a0f\") " pod="calico-system/csi-node-driver-jjz87" Jan 15 23:51:06.362803 kubelet[3396]: E0115 23:51:06.362775 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.362856 kubelet[3396]: W0115 23:51:06.362811 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:51:06.362856 kubelet[3396]: E0115 23:51:06.362828 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:51:06.363260 kubelet[3396]: E0115 23:51:06.363145 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.363308 kubelet[3396]: W0115 23:51:06.363260 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:51:06.363351 kubelet[3396]: E0115 23:51:06.363333 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:51:06.363827 kubelet[3396]: E0115 23:51:06.363806 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.363827 kubelet[3396]: W0115 23:51:06.363821 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:51:06.363892 kubelet[3396]: E0115 23:51:06.363831 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:51:06.364288 kubelet[3396]: I0115 23:51:06.364184 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/51d6de9e-9409-4d26-91e2-95ebd2fa7a0f-varrun\") pod \"csi-node-driver-jjz87\" (UID: \"51d6de9e-9409-4d26-91e2-95ebd2fa7a0f\") " pod="calico-system/csi-node-driver-jjz87" Jan 15 23:51:06.364769 kubelet[3396]: E0115 23:51:06.364747 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.364769 kubelet[3396]: W0115 23:51:06.364767 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:51:06.364839 kubelet[3396]: E0115 23:51:06.364781 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:51:06.364958 kubelet[3396]: I0115 23:51:06.364851 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/51d6de9e-9409-4d26-91e2-95ebd2fa7a0f-registration-dir\") pod \"csi-node-driver-jjz87\" (UID: \"51d6de9e-9409-4d26-91e2-95ebd2fa7a0f\") " pod="calico-system/csi-node-driver-jjz87" Jan 15 23:51:06.365081 kubelet[3396]: E0115 23:51:06.365067 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.365221 kubelet[3396]: W0115 23:51:06.365128 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:51:06.365221 kubelet[3396]: E0115 23:51:06.365151 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:51:06.365371 kubelet[3396]: E0115 23:51:06.365359 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.365422 kubelet[3396]: W0115 23:51:06.365412 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:51:06.365474 kubelet[3396]: E0115 23:51:06.365464 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:51:06.365675 kubelet[3396]: E0115 23:51:06.365655 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.365675 kubelet[3396]: W0115 23:51:06.365671 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:51:06.365763 kubelet[3396]: E0115 23:51:06.365686 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:51:06.365763 kubelet[3396]: I0115 23:51:06.365706 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/51d6de9e-9409-4d26-91e2-95ebd2fa7a0f-kubelet-dir\") pod \"csi-node-driver-jjz87\" (UID: \"51d6de9e-9409-4d26-91e2-95ebd2fa7a0f\") " pod="calico-system/csi-node-driver-jjz87" Jan 15 23:51:06.365957 kubelet[3396]: E0115 23:51:06.365854 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.365957 kubelet[3396]: W0115 23:51:06.365866 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:51:06.365957 kubelet[3396]: E0115 23:51:06.365876 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:51:06.365957 kubelet[3396]: I0115 23:51:06.365887 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/51d6de9e-9409-4d26-91e2-95ebd2fa7a0f-socket-dir\") pod \"csi-node-driver-jjz87\" (UID: \"51d6de9e-9409-4d26-91e2-95ebd2fa7a0f\") " pod="calico-system/csi-node-driver-jjz87" Jan 15 23:51:06.366109 kubelet[3396]: E0115 23:51:06.366096 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.366890 kubelet[3396]: W0115 23:51:06.366844 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:51:06.366890 kubelet[3396]: E0115 23:51:06.366876 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:51:06.367043 kubelet[3396]: E0115 23:51:06.367032 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.367043 kubelet[3396]: W0115 23:51:06.367041 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:51:06.367195 kubelet[3396]: E0115 23:51:06.367091 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:51:06.367270 kubelet[3396]: E0115 23:51:06.367258 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.367270 kubelet[3396]: W0115 23:51:06.367267 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:51:06.367371 kubelet[3396]: E0115 23:51:06.367316 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:51:06.367434 kubelet[3396]: E0115 23:51:06.367374 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.367434 kubelet[3396]: W0115 23:51:06.367380 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:51:06.367434 kubelet[3396]: E0115 23:51:06.367391 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:51:06.367640 kubelet[3396]: E0115 23:51:06.367622 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.367640 kubelet[3396]: W0115 23:51:06.367633 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:51:06.367640 kubelet[3396]: E0115 23:51:06.367639 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:51:06.367763 kubelet[3396]: E0115 23:51:06.367753 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:51:06.367763 kubelet[3396]: W0115 23:51:06.367760 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:51:06.367810 kubelet[3396]: E0115 23:51:06.367767 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
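The driver-call.go/plugins.go burst above is the kubelet's FlexVolume prober: on every probe it execs each driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the argument init and expects a JSON status object on stdout. Here the nodeagent~uds/uds binary is absent, so stdout is empty and the JSON decode fails before the exec error is surfaced. A minimal sketch of why empty output yields exactly this message — illustrative only, not the kubelet's actual source; the struct shape is assumed from the FlexVolume driver convention:

    // flexvol_probe.go - sketch of the kubelet-side decode step (assumed shape).
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // DriverStatus mirrors what a FlexVolume driver conventionally prints for
    // "init", e.g. {"status":"Success","capabilities":{"attach":false}}.
    type DriverStatus struct {
    	Status       string          `json:"status"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
    	var st DriverStatus
    	// A missing executable produces no stdout at all, so the decoder
    	// sees an empty byte slice:
    	err := json.Unmarshal([]byte(""), &st)
    	fmt.Println(err) // prints: unexpected end of JSON input
    }

Running this prints "unexpected end of JSON input", matching the driver-call.go:262 records.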
Jan 15 23:51:06.382601 containerd[1898]: time="2026-01-15T23:51:06.382560712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dhdvm,Uid:69694cda-d8ff-4381-ad20-e85637f6e94b,Namespace:calico-system,Attempt:0,}"
Jan 15 23:51:06.466578 containerd[1898]: time="2026-01-15T23:51:06.465767334Z" level=info msg="connecting to shim 7dd73c0f489d00839131cb129be02b525246c83b7e12bbdc0eedc880af66ec1b" address="unix:///run/containerd/s/da979846a716fdba120cbe2a84e6412c063a3269e8302513db1c27237b2ba264" namespace=k8s.io protocol=ttrpc version=3
Jan 15 23:51:06.467588 kubelet[3396]: E0115 23:51:06.467296 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:51:06.467588 kubelet[3396]: W0115 23:51:06.467430 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:51:06.467588 kubelet[3396]: E0115 23:51:06.467453 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:51:06.492639 systemd[1]: Started cri-containerd-7dd73c0f489d00839131cb129be02b525246c83b7e12bbdc0eedc880af66ec1b.scope - libcontainer container 7dd73c0f489d00839131cb129be02b525246c83b7e12bbdc0eedc880af66ec1b.
Jan 15 23:51:06.518285 containerd[1898]: time="2026-01-15T23:51:06.518242589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dhdvm,Uid:69694cda-d8ff-4381-ad20-e85637f6e94b,Namespace:calico-system,Attempt:0,} returns sandbox id \"7dd73c0f489d00839131cb129be02b525246c83b7e12bbdc0eedc880af66ec1b\""
Jan 15 23:51:07.907715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2036604897.mount: Deactivated successfully.
Jan 15 23:51:08.138184 kubelet[3396]: E0115 23:51:08.138142 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjz87" podUID="51d6de9e-9409-4d26-91e2-95ebd2fa7a0f"
Jan 15 23:51:08.393997 containerd[1898]: time="2026-01-15T23:51:08.393933772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:51:08.397045 containerd[1898]: time="2026-01-15T23:51:08.396894155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Jan 15 23:51:08.402337 containerd[1898]: time="2026-01-15T23:51:08.402304746Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:51:08.406855 containerd[1898]: time="2026-01-15T23:51:08.406815090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:51:08.407402 containerd[1898]: time="2026-01-15T23:51:08.407364065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.04808181s"
Jan 15 23:51:08.407541 containerd[1898]: time="2026-01-15T23:51:08.407393723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Jan 15 23:51:08.408713 containerd[1898]: time="2026-01-15T23:51:08.408682026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 15 23:51:08.422317 containerd[1898]: time="2026-01-15T23:51:08.422283518Z" level=info msg="CreateContainer within sandbox \"1cf47e4b8f13330a14ae1f0914b8a0bef8f04947bd87f1a9fc034bfabbd07e4b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 15 23:51:08.450511 containerd[1898]: time="2026-01-15T23:51:08.449178825Z" level=info msg="Container ddb7e5ce557786e225d6edb9068ce57f00e49895dbffe2ed3c0c42026cebcad2: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:51:08.453413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3016217510.mount: Deactivated successfully.
Jan 15 23:51:08.470912 containerd[1898]: time="2026-01-15T23:51:08.470810716Z" level=info msg="CreateContainer within sandbox \"1cf47e4b8f13330a14ae1f0914b8a0bef8f04947bd87f1a9fc034bfabbd07e4b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ddb7e5ce557786e225d6edb9068ce57f00e49895dbffe2ed3c0c42026cebcad2\""
Jan 15 23:51:08.471914 containerd[1898]: time="2026-01-15T23:51:08.471848704Z" level=info msg="StartContainer for \"ddb7e5ce557786e225d6edb9068ce57f00e49895dbffe2ed3c0c42026cebcad2\""
Jan 15 23:51:08.473023 containerd[1898]: time="2026-01-15T23:51:08.472982616Z" level=info msg="connecting to shim ddb7e5ce557786e225d6edb9068ce57f00e49895dbffe2ed3c0c42026cebcad2" address="unix:///run/containerd/s/ee9bff095e31f058b010698453fff4dbe1c33bac20b165a7f4acc8dd0449e345" protocol=ttrpc version=3
Jan 15 23:51:08.489637 systemd[1]: Started cri-containerd-ddb7e5ce557786e225d6edb9068ce57f00e49895dbffe2ed3c0c42026cebcad2.scope - libcontainer container ddb7e5ce557786e225d6edb9068ce57f00e49895dbffe2ed3c0c42026cebcad2.
Jan 15 23:51:08.525506 containerd[1898]: time="2026-01-15T23:51:08.525411637Z" level=info msg="StartContainer for \"ddb7e5ce557786e225d6edb9068ce57f00e49895dbffe2ed3c0c42026cebcad2\" returns successfully"
Jan 15 23:51:09.251070 kubelet[3396]: I0115 23:51:09.251009 3396 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66bb67fd95-bw28q" podStartSLOduration=2.201439942 podStartE2EDuration="4.250989534s" podCreationTimestamp="2026-01-15 23:51:05 +0000 UTC" firstStartedPulling="2026-01-15 23:51:06.358870478 +0000 UTC m=+22.370299172" lastFinishedPulling="2026-01-15 23:51:08.408420078 +0000 UTC m=+24.419848764" observedRunningTime="2026-01-15 23:51:09.238020733 +0000 UTC m=+25.249449427" watchObservedRunningTime="2026-01-15 23:51:09.250989534 +0000 UTC m=+25.262418220"
Jan 15 23:51:09.283592 kubelet[3396]: E0115 23:51:09.283568 3396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:51:09.283952 kubelet[3396]: W0115 23:51:09.283869 3396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:51:09.283952 kubelet[3396]: E0115 23:51:09.283896 3396 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
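The pod_startup_latency_tracker entry above is internally consistent with the Kubernetes pod-startup SLO definition (startup time excluding image pulls): the e2e figure is the watch-observed running time minus the pod creation timestamp, and the SLO figure subtracts the image-pull window. Values as printed; the final digits differ only by nanosecond display rounding:

\[
\begin{aligned}
\text{podStartE2EDuration} &= 23{:}51{:}09.250989534 - 23{:}51{:}05 = 4.250989534\,\mathrm{s},\\
\text{pull window} &= 23{:}51{:}08.408420078 - 23{:}51{:}06.358870478 = 2.049549600\,\mathrm{s},\\
\text{podStartSLOduration} &= 4.250989534 - 2.049549600 = 2.201439934\,\mathrm{s} \approx 2.201439942\,\mathrm{s}.
\end{aligned}
\]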
Jan 15 23:51:09.637178 containerd[1898]: time="2026-01-15T23:51:09.636853606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:51:09.644377 containerd[1898]: time="2026-01-15T23:51:09.644336073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741"
Jan 15 23:51:09.649831 containerd[1898]: time="2026-01-15T23:51:09.649784633Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:51:09.660720 containerd[1898]: time="2026-01-15T23:51:09.660649505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:51:09.661346 containerd[1898]: time="2026-01-15T23:51:09.661050528Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.252332853s"
Jan 15 23:51:09.661424 containerd[1898]: time="2026-01-15T23:51:09.661083545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\""
Jan 15 23:51:09.665547 containerd[1898]: time="2026-01-15T23:51:09.665514244Z" level=info msg="CreateContainer within sandbox \"7dd73c0f489d00839131cb129be02b525246c83b7e12bbdc0eedc880af66ec1b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 15 23:51:09.701252 containerd[1898]: time="2026-01-15T23:51:09.701214565Z" level=info msg="Container 9e0b36e393ecd3515601cb2e6fdd29704fb894cf45d64f389310f322268de197: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:51:09.723086 containerd[1898]: time="2026-01-15T23:51:09.723030008Z" level=info msg="CreateContainer within sandbox \"7dd73c0f489d00839131cb129be02b525246c83b7e12bbdc0eedc880af66ec1b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9e0b36e393ecd3515601cb2e6fdd29704fb894cf45d64f389310f322268de197\""
Jan 15 23:51:09.724112 containerd[1898]: time="2026-01-15T23:51:09.724009148Z" level=info msg="StartContainer for \"9e0b36e393ecd3515601cb2e6fdd29704fb894cf45d64f389310f322268de197\""
Jan 15 23:51:09.726404 containerd[1898]: time="2026-01-15T23:51:09.726378579Z" level=info msg="connecting to shim 9e0b36e393ecd3515601cb2e6fdd29704fb894cf45d64f389310f322268de197" address="unix:///run/containerd/s/da979846a716fdba120cbe2a84e6412c063a3269e8302513db1c27237b2ba264" protocol=ttrpc version=3
shim 9e0b36e393ecd3515601cb2e6fdd29704fb894cf45d64f389310f322268de197" address="unix:///run/containerd/s/da979846a716fdba120cbe2a84e6412c063a3269e8302513db1c27237b2ba264" protocol=ttrpc version=3 Jan 15 23:51:09.747672 systemd[1]: Started cri-containerd-9e0b36e393ecd3515601cb2e6fdd29704fb894cf45d64f389310f322268de197.scope - libcontainer container 9e0b36e393ecd3515601cb2e6fdd29704fb894cf45d64f389310f322268de197. Jan 15 23:51:09.801275 containerd[1898]: time="2026-01-15T23:51:09.801064710Z" level=info msg="StartContainer for \"9e0b36e393ecd3515601cb2e6fdd29704fb894cf45d64f389310f322268de197\" returns successfully" Jan 15 23:51:09.807392 systemd[1]: cri-containerd-9e0b36e393ecd3515601cb2e6fdd29704fb894cf45d64f389310f322268de197.scope: Deactivated successfully. Jan 15 23:51:09.814116 containerd[1898]: time="2026-01-15T23:51:09.814068621Z" level=info msg="received container exit event container_id:\"9e0b36e393ecd3515601cb2e6fdd29704fb894cf45d64f389310f322268de197\" id:\"9e0b36e393ecd3515601cb2e6fdd29704fb894cf45d64f389310f322268de197\" pid:4072 exited_at:{seconds:1768521069 nanos:813627588}" Jan 15 23:51:09.837686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e0b36e393ecd3515601cb2e6fdd29704fb894cf45d64f389310f322268de197-rootfs.mount: Deactivated successfully. Jan 15 23:51:10.139012 kubelet[3396]: E0115 23:51:10.138655 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjz87" podUID="51d6de9e-9409-4d26-91e2-95ebd2fa7a0f" Jan 15 23:51:11.230903 containerd[1898]: time="2026-01-15T23:51:11.230809721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 15 23:51:12.139726 kubelet[3396]: E0115 23:51:12.139684 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjz87" podUID="51d6de9e-9409-4d26-91e2-95ebd2fa7a0f" Jan 15 23:51:13.301505 containerd[1898]: time="2026-01-15T23:51:13.301449859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:51:13.307508 containerd[1898]: time="2026-01-15T23:51:13.307381653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 15 23:51:13.317026 containerd[1898]: time="2026-01-15T23:51:13.316782751Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:51:13.321731 containerd[1898]: time="2026-01-15T23:51:13.321689420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:51:13.322227 containerd[1898]: time="2026-01-15T23:51:13.322204511Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", 
size \"67295507\" in 2.091303362s" Jan 15 23:51:13.322227 containerd[1898]: time="2026-01-15T23:51:13.322228584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 15 23:51:13.324668 containerd[1898]: time="2026-01-15T23:51:13.324396151Z" level=info msg="CreateContainer within sandbox \"7dd73c0f489d00839131cb129be02b525246c83b7e12bbdc0eedc880af66ec1b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 15 23:51:13.348523 containerd[1898]: time="2026-01-15T23:51:13.347831309Z" level=info msg="Container 7af99e91c6dc5ae4b0c7cbf1009194e959fdca0011895ad042ed4f4371bd88f9: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:51:13.368901 containerd[1898]: time="2026-01-15T23:51:13.368855851Z" level=info msg="CreateContainer within sandbox \"7dd73c0f489d00839131cb129be02b525246c83b7e12bbdc0eedc880af66ec1b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7af99e91c6dc5ae4b0c7cbf1009194e959fdca0011895ad042ed4f4371bd88f9\"" Jan 15 23:51:13.369549 containerd[1898]: time="2026-01-15T23:51:13.369505739Z" level=info msg="StartContainer for \"7af99e91c6dc5ae4b0c7cbf1009194e959fdca0011895ad042ed4f4371bd88f9\"" Jan 15 23:51:13.370657 containerd[1898]: time="2026-01-15T23:51:13.370631324Z" level=info msg="connecting to shim 7af99e91c6dc5ae4b0c7cbf1009194e959fdca0011895ad042ed4f4371bd88f9" address="unix:///run/containerd/s/da979846a716fdba120cbe2a84e6412c063a3269e8302513db1c27237b2ba264" protocol=ttrpc version=3 Jan 15 23:51:13.389635 systemd[1]: Started cri-containerd-7af99e91c6dc5ae4b0c7cbf1009194e959fdca0011895ad042ed4f4371bd88f9.scope - libcontainer container 7af99e91c6dc5ae4b0c7cbf1009194e959fdca0011895ad042ed4f4371bd88f9. Jan 15 23:51:13.448846 containerd[1898]: time="2026-01-15T23:51:13.448791447Z" level=info msg="StartContainer for \"7af99e91c6dc5ae4b0c7cbf1009194e959fdca0011895ad042ed4f4371bd88f9\" returns successfully" Jan 15 23:51:14.139130 kubelet[3396]: E0115 23:51:14.138682 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjz87" podUID="51d6de9e-9409-4d26-91e2-95ebd2fa7a0f" Jan 15 23:51:14.552400 containerd[1898]: time="2026-01-15T23:51:14.552348827Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 15 23:51:14.554616 systemd[1]: cri-containerd-7af99e91c6dc5ae4b0c7cbf1009194e959fdca0011895ad042ed4f4371bd88f9.scope: Deactivated successfully. Jan 15 23:51:14.555243 systemd[1]: cri-containerd-7af99e91c6dc5ae4b0c7cbf1009194e959fdca0011895ad042ed4f4371bd88f9.scope: Consumed 342ms CPU time, 187.9M memory peak, 165.9M written to disk. 
Jan 15 23:51:14.556076 containerd[1898]: time="2026-01-15T23:51:14.555744280Z" level=info msg="received container exit event container_id:\"7af99e91c6dc5ae4b0c7cbf1009194e959fdca0011895ad042ed4f4371bd88f9\" id:\"7af99e91c6dc5ae4b0c7cbf1009194e959fdca0011895ad042ed4f4371bd88f9\" pid:4132 exited_at:{seconds:1768521074 nanos:555285751}" Jan 15 23:51:14.574780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7af99e91c6dc5ae4b0c7cbf1009194e959fdca0011895ad042ed4f4371bd88f9-rootfs.mount: Deactivated successfully. Jan 15 23:51:14.650526 kubelet[3396]: I0115 23:51:14.649983 3396 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 15 23:51:14.953282 kubelet[3396]: W0115 23:51:14.696567 3396 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ci-4459.2.2-n-5fd64d3fe1" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4459.2.2-n-5fd64d3fe1' and this object Jan 15 23:51:14.953282 kubelet[3396]: E0115 23:51:14.696604 3396 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ci-4459.2.2-n-5fd64d3fe1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4459.2.2-n-5fd64d3fe1' and this object" logger="UnhandledError" Jan 15 23:51:14.953282 kubelet[3396]: W0115 23:51:14.696658 3396 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ci-4459.2.2-n-5fd64d3fe1" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4459.2.2-n-5fd64d3fe1' and this object Jan 15 23:51:14.953282 kubelet[3396]: E0115 23:51:14.696667 3396 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ci-4459.2.2-n-5fd64d3fe1\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4459.2.2-n-5fd64d3fe1' and this object" logger="UnhandledError" Jan 15 23:51:14.953282 kubelet[3396]: W0115 23:51:14.697607 3396 reflector.go:569] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:ci-4459.2.2-n-5fd64d3fe1" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4459.2.2-n-5fd64d3fe1' and this object Jan 15 23:51:14.693215 systemd[1]: Created slice kubepods-burstable-pod087bf8a4_3520_43d2_81c1_cb6f45055422.slice - libcontainer container kubepods-burstable-pod087bf8a4_3520_43d2_81c1_cb6f45055422.slice. 
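
The recurring "NetworkReady=false ... cni plugin not initialized" and the reload error above ("no network config found in /etc/cni/net.d") mean the CRI plugin is watching the conf dir but has not yet found a loadable network config; the fs event it reacted to was install-cni writing calico-kubeconfig, which is not a CNI config file. A rough sketch of that discovery step follows, under the assumption that only filename extension and JSON validity are checked (containerd's real loader does more):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// findCNIConfig scans a CNI conf dir roughly the way a CRI runtime does:
// only *.conf, *.conflist and *.json files count; anything else (such as
// the calico-kubeconfig written by install-cni) is ignored.
func findCNIConfig(dir string) (string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return "", err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
		default:
			continue
		}
		raw, err := os.ReadFile(filepath.Join(dir, e.Name()))
		if err != nil {
			continue
		}
		var probe map[string]any
		if json.Unmarshal(raw, &probe) == nil {
			return e.Name(), nil
		}
	}
	return "", fmt.Errorf("no network config found in %s: cni plugin not initialized", dir)
}

func main() {
	name, err := findCNIConfig("/etc/cni/net.d")
	if err != nil {
		fmt.Println(err) // the state of this node until Calico writes its conflist (typically 10-calico.conflist)
		return
	}
	fmt.Println("loaded", name)
}
```
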
Jan 15 23:51:14.953564 kubelet[3396]: E0115 23:51:14.697629 3396 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:ci-4459.2.2-n-5fd64d3fe1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4459.2.2-n-5fd64d3fe1' and this object" logger="UnhandledError" Jan 15 23:51:14.953564 kubelet[3396]: W0115 23:51:14.699314 3396 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4459.2.2-n-5fd64d3fe1" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4459.2.2-n-5fd64d3fe1' and this object Jan 15 23:51:14.953564 kubelet[3396]: E0115 23:51:14.699336 3396 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4459.2.2-n-5fd64d3fe1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4459.2.2-n-5fd64d3fe1' and this object" logger="UnhandledError" Jan 15 23:51:14.953564 kubelet[3396]: I0115 23:51:14.728545 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e8e2e68-e4b7-4282-8047-eae44af7b067-goldmane-ca-bundle\") pod \"goldmane-666569f655-frkkk\" (UID: \"1e8e2e68-e4b7-4282-8047-eae44af7b067\") " pod="calico-system/goldmane-666569f655-frkkk" Jan 15 23:51:14.953564 kubelet[3396]: I0115 23:51:14.728971 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvng6\" (UniqueName: \"kubernetes.io/projected/087bf8a4-3520-43d2-81c1-cb6f45055422-kube-api-access-dvng6\") pod \"coredns-668d6bf9bc-g4xs2\" (UID: \"087bf8a4-3520-43d2-81c1-cb6f45055422\") " pod="kube-system/coredns-668d6bf9bc-g4xs2" Jan 15 23:51:14.704662 systemd[1]: Created slice kubepods-burstable-pod5ed7058e_8f78_4fb6_9780_d5925ccb54c0.slice - libcontainer container kubepods-burstable-pod5ed7058e_8f78_4fb6_9780_d5925ccb54c0.slice. 
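
Each kubelet message in this journal carries a klog header, e.g. "E0115 23:51:14.696604 3396 reflector.go:166]": a severity letter (I/W/E/F), MMDD date, wall-clock time with microseconds, PID, then source file:line. The klog timestamps can trail the journald receive time at the start of the line because klog output is buffered, which is why entries here look out of order. A small parser sketch for that fixed layout, assuming the standard klog text format:

```go
package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches the standard klog prefix:
//   Lmmdd hh:mm:ss.uuuuuu PID file:line] message
var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

func main() {
	line := `E0115 23:51:14.696604 3396 reflector.go:166] "Unhandled Error"`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
```
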
Jan 15 23:51:14.953692 kubelet[3396]: I0115 23:51:14.729000 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1e8e2e68-e4b7-4282-8047-eae44af7b067-goldmane-key-pair\") pod \"goldmane-666569f655-frkkk\" (UID: \"1e8e2e68-e4b7-4282-8047-eae44af7b067\") " pod="calico-system/goldmane-666569f655-frkkk" Jan 15 23:51:14.953692 kubelet[3396]: I0115 23:51:14.729666 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e8e2e68-e4b7-4282-8047-eae44af7b067-config\") pod \"goldmane-666569f655-frkkk\" (UID: \"1e8e2e68-e4b7-4282-8047-eae44af7b067\") " pod="calico-system/goldmane-666569f655-frkkk" Jan 15 23:51:14.953692 kubelet[3396]: I0115 23:51:14.729710 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/18982541-a4f1-43eb-9f61-620f19f486a0-calico-apiserver-certs\") pod \"calico-apiserver-6f777f44fd-jl6lj\" (UID: \"18982541-a4f1-43eb-9f61-620f19f486a0\") " pod="calico-apiserver/calico-apiserver-6f777f44fd-jl6lj" Jan 15 23:51:14.953692 kubelet[3396]: I0115 23:51:14.729725 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmh86\" (UniqueName: \"kubernetes.io/projected/1e8e2e68-e4b7-4282-8047-eae44af7b067-kube-api-access-lmh86\") pod \"goldmane-666569f655-frkkk\" (UID: \"1e8e2e68-e4b7-4282-8047-eae44af7b067\") " pod="calico-system/goldmane-666569f655-frkkk" Jan 15 23:51:14.953692 kubelet[3396]: I0115 23:51:14.729742 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/087bf8a4-3520-43d2-81c1-cb6f45055422-config-volume\") pod \"coredns-668d6bf9bc-g4xs2\" (UID: \"087bf8a4-3520-43d2-81c1-cb6f45055422\") " pod="kube-system/coredns-668d6bf9bc-g4xs2" Jan 15 23:51:14.714894 systemd[1]: Created slice kubepods-besteffort-pod1e8e2e68_e4b7_4282_8047_eae44af7b067.slice - libcontainer container kubepods-besteffort-pod1e8e2e68_e4b7_4282_8047_eae44af7b067.slice. 
Jan 15 23:51:14.953802 kubelet[3396]: I0115 23:51:14.729757 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a055550a-6ee1-4e86-b552-0ffc275a311d-whisker-backend-key-pair\") pod \"whisker-5cd968bf64-ntvvk\" (UID: \"a055550a-6ee1-4e86-b552-0ffc275a311d\") " pod="calico-system/whisker-5cd968bf64-ntvvk" Jan 15 23:51:14.953802 kubelet[3396]: I0115 23:51:14.729770 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a055550a-6ee1-4e86-b552-0ffc275a311d-whisker-ca-bundle\") pod \"whisker-5cd968bf64-ntvvk\" (UID: \"a055550a-6ee1-4e86-b552-0ffc275a311d\") " pod="calico-system/whisker-5cd968bf64-ntvvk" Jan 15 23:51:14.953802 kubelet[3396]: I0115 23:51:14.729785 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ed7058e-8f78-4fb6-9780-d5925ccb54c0-config-volume\") pod \"coredns-668d6bf9bc-7kj8l\" (UID: \"5ed7058e-8f78-4fb6-9780-d5925ccb54c0\") " pod="kube-system/coredns-668d6bf9bc-7kj8l" Jan 15 23:51:14.953802 kubelet[3396]: I0115 23:51:14.729797 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p282p\" (UniqueName: \"kubernetes.io/projected/9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e-kube-api-access-p282p\") pod \"calico-apiserver-6f777f44fd-2tvx8\" (UID: \"9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e\") " pod="calico-apiserver/calico-apiserver-6f777f44fd-2tvx8" Jan 15 23:51:14.953802 kubelet[3396]: I0115 23:51:14.729808 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34651363-b157-4818-abe1-8475b0e41983-tigera-ca-bundle\") pod \"calico-kube-controllers-f78775f8f-kvtpl\" (UID: \"34651363-b157-4818-abe1-8475b0e41983\") " pod="calico-system/calico-kube-controllers-f78775f8f-kvtpl" Jan 15 23:51:14.722790 systemd[1]: Created slice kubepods-besteffort-pod9074b3a7_d0a0_4425_abc1_2be2e7ba0c1e.slice - libcontainer container kubepods-besteffort-pod9074b3a7_d0a0_4425_abc1_2be2e7ba0c1e.slice. 
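
The "Created slice kubepods-besteffort-pod9074b3a7_d0a0_4425_abc1_2be2e7ba0c1e.slice" entries are the kubelet's systemd cgroup driver at work: each pod gets a slice under kubepods.slice, named from its QoS class plus the pod UID with dashes mapped to underscores, since systemd unit names use "-" as a hierarchy separator. The sketch below approximates that naming rule; it is a reconstruction from the names visible in this log, not the kubelet's escaping code:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName approximates how the kubelet's systemd cgroup driver names a
// pod slice: "kubepods" + QoS class + "pod" + UID, with "-" escaped to "_".
func podSliceName(qos, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_")
	if qos == "" { // guaranteed pods sit directly under kubepods.slice
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	fmt.Println(podSliceName("besteffort", "9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e"))
	// kubepods-besteffort-pod9074b3a7_d0a0_4425_abc1_2be2e7ba0c1e.slice
	fmt.Println(podSliceName("burstable", "087bf8a4-3520-43d2-81c1-cb6f45055422"))
	// kubepods-burstable-pod087bf8a4_3520_43d2_81c1_cb6f45055422.slice
}
```
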
Jan 15 23:51:14.953946 kubelet[3396]: I0115 23:51:14.729839 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r7kk\" (UniqueName: \"kubernetes.io/projected/a055550a-6ee1-4e86-b552-0ffc275a311d-kube-api-access-2r7kk\") pod \"whisker-5cd968bf64-ntvvk\" (UID: \"a055550a-6ee1-4e86-b552-0ffc275a311d\") " pod="calico-system/whisker-5cd968bf64-ntvvk" Jan 15 23:51:14.953946 kubelet[3396]: I0115 23:51:14.729849 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwd99\" (UniqueName: \"kubernetes.io/projected/5ed7058e-8f78-4fb6-9780-d5925ccb54c0-kube-api-access-bwd99\") pod \"coredns-668d6bf9bc-7kj8l\" (UID: \"5ed7058e-8f78-4fb6-9780-d5925ccb54c0\") " pod="kube-system/coredns-668d6bf9bc-7kj8l" Jan 15 23:51:14.953946 kubelet[3396]: I0115 23:51:14.729861 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e-calico-apiserver-certs\") pod \"calico-apiserver-6f777f44fd-2tvx8\" (UID: \"9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e\") " pod="calico-apiserver/calico-apiserver-6f777f44fd-2tvx8" Jan 15 23:51:14.953946 kubelet[3396]: I0115 23:51:14.729872 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6fnr\" (UniqueName: \"kubernetes.io/projected/34651363-b157-4818-abe1-8475b0e41983-kube-api-access-g6fnr\") pod \"calico-kube-controllers-f78775f8f-kvtpl\" (UID: \"34651363-b157-4818-abe1-8475b0e41983\") " pod="calico-system/calico-kube-controllers-f78775f8f-kvtpl" Jan 15 23:51:14.953946 kubelet[3396]: I0115 23:51:14.729882 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrt2j\" (UniqueName: \"kubernetes.io/projected/18982541-a4f1-43eb-9f61-620f19f486a0-kube-api-access-zrt2j\") pod \"calico-apiserver-6f777f44fd-jl6lj\" (UID: \"18982541-a4f1-43eb-9f61-620f19f486a0\") " pod="calico-apiserver/calico-apiserver-6f777f44fd-jl6lj" Jan 15 23:51:14.728585 systemd[1]: Created slice kubepods-besteffort-poda055550a_6ee1_4e86_b552_0ffc275a311d.slice - libcontainer container kubepods-besteffort-poda055550a_6ee1_4e86_b552_0ffc275a311d.slice. Jan 15 23:51:14.738102 systemd[1]: Created slice kubepods-besteffort-pod34651363_b157_4818_abe1_8475b0e41983.slice - libcontainer container kubepods-besteffort-pod34651363_b157_4818_abe1_8475b0e41983.slice. Jan 15 23:51:14.744352 systemd[1]: Created slice kubepods-besteffort-pod18982541_a4f1_43eb_9f61_620f19f486a0.slice - libcontainer container kubepods-besteffort-pod18982541_a4f1_43eb_9f61_620f19f486a0.slice. 
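
The VerifyControllerAttachedVolume entries above register each volume under a UniqueName of the form "<plugin>/<podUID>-<volumeName>", e.g. "kubernetes.io/projected/1e8e2e68-e4b7-4282-8047-eae44af7b067-kube-api-access-lmh86"; that key is what the mount state machine deduplicates on. A minimal sketch of the convention, offered as an assumption-level reconstruction of the key format seen in these lines:

```go
package main

import "fmt"

// uniqueVolumeName reconstructs the key format visible in the kubelet logs.
// For pod-scoped plugins like configmap/secret/projected the pod UID is part
// of the key, which is why the same volume name "config-volume" can appear
// for two different coredns pods without colliding.
func uniqueVolumeName(plugin, podUID, volume string) string {
	return fmt.Sprintf("%s/%s-%s", plugin, podUID, volume)
}

func main() {
	fmt.Println(uniqueVolumeName("kubernetes.io/configmap",
		"087bf8a4-3520-43d2-81c1-cb6f45055422", "config-volume"))
	// kubernetes.io/configmap/087bf8a4-3520-43d2-81c1-cb6f45055422-config-volume
}
```
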
Jan 15 23:51:15.256303 containerd[1898]: time="2026-01-15T23:51:15.256101596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g4xs2,Uid:087bf8a4-3520-43d2-81c1-cb6f45055422,Namespace:kube-system,Attempt:0,}" Jan 15 23:51:15.264217 containerd[1898]: time="2026-01-15T23:51:15.264164660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f78775f8f-kvtpl,Uid:34651363-b157-4818-abe1-8475b0e41983,Namespace:calico-system,Attempt:0,}" Jan 15 23:51:15.283666 containerd[1898]: time="2026-01-15T23:51:15.283629594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cd968bf64-ntvvk,Uid:a055550a-6ee1-4e86-b552-0ffc275a311d,Namespace:calico-system,Attempt:0,}" Jan 15 23:51:15.283888 containerd[1898]: time="2026-01-15T23:51:15.283859980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7kj8l,Uid:5ed7058e-8f78-4fb6-9780-d5925ccb54c0,Namespace:kube-system,Attempt:0,}" Jan 15 23:51:15.511612 containerd[1898]: time="2026-01-15T23:51:15.511057833Z" level=error msg="Failed to destroy network for sandbox \"b95818fcb57dc6a2c799098da17691466ebba2bd70d76b8691e44e090d64a436\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:15.517441 containerd[1898]: time="2026-01-15T23:51:15.517395967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g4xs2,Uid:087bf8a4-3520-43d2-81c1-cb6f45055422,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b95818fcb57dc6a2c799098da17691466ebba2bd70d76b8691e44e090d64a436\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:15.518140 kubelet[3396]: E0115 23:51:15.518066 3396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b95818fcb57dc6a2c799098da17691466ebba2bd70d76b8691e44e090d64a436\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:15.518421 kubelet[3396]: E0115 23:51:15.518150 3396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b95818fcb57dc6a2c799098da17691466ebba2bd70d76b8691e44e090d64a436\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g4xs2" Jan 15 23:51:15.518421 kubelet[3396]: E0115 23:51:15.518167 3396 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b95818fcb57dc6a2c799098da17691466ebba2bd70d76b8691e44e090d64a436\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g4xs2" Jan 15 23:51:15.518684 kubelet[3396]: E0115 23:51:15.518552 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-g4xs2_kube-system(087bf8a4-3520-43d2-81c1-cb6f45055422)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-g4xs2_kube-system(087bf8a4-3520-43d2-81c1-cb6f45055422)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b95818fcb57dc6a2c799098da17691466ebba2bd70d76b8691e44e090d64a436\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-g4xs2" podUID="087bf8a4-3520-43d2-81c1-cb6f45055422" Jan 15 23:51:15.535400 containerd[1898]: time="2026-01-15T23:51:15.535330940Z" level=error msg="Failed to destroy network for sandbox \"1744d3d52d2c6e60dd5d009c37613429ca260d12d4dc731fdae9018af9274d22\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:15.536392 containerd[1898]: time="2026-01-15T23:51:15.536350000Z" level=error msg="Failed to destroy network for sandbox \"84f159722fd803a69f9b47d61b6c8ce9d0d1260db01efd0522e610e66e3a80e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:15.537442 containerd[1898]: time="2026-01-15T23:51:15.537366891Z" level=error msg="Failed to destroy network for sandbox \"70109753a2efca8d1763def51752e94614544c604cb9ef1c2f309dd147d7dd5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:15.539477 containerd[1898]: time="2026-01-15T23:51:15.539444932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f78775f8f-kvtpl,Uid:34651363-b157-4818-abe1-8475b0e41983,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1744d3d52d2c6e60dd5d009c37613429ca260d12d4dc731fdae9018af9274d22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:15.539778 kubelet[3396]: E0115 23:51:15.539723 3396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1744d3d52d2c6e60dd5d009c37613429ca260d12d4dc731fdae9018af9274d22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:15.539828 kubelet[3396]: E0115 23:51:15.539788 3396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1744d3d52d2c6e60dd5d009c37613429ca260d12d4dc731fdae9018af9274d22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f78775f8f-kvtpl" Jan 15 23:51:15.539828 kubelet[3396]: E0115 23:51:15.539805 3396 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"1744d3d52d2c6e60dd5d009c37613429ca260d12d4dc731fdae9018af9274d22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f78775f8f-kvtpl" Jan 15 23:51:15.539860 kubelet[3396]: E0115 23:51:15.539842 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f78775f8f-kvtpl_calico-system(34651363-b157-4818-abe1-8475b0e41983)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f78775f8f-kvtpl_calico-system(34651363-b157-4818-abe1-8475b0e41983)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1744d3d52d2c6e60dd5d009c37613429ca260d12d4dc731fdae9018af9274d22\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f78775f8f-kvtpl" podUID="34651363-b157-4818-abe1-8475b0e41983" Jan 15 23:51:15.543014 containerd[1898]: time="2026-01-15T23:51:15.542978274Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cd968bf64-ntvvk,Uid:a055550a-6ee1-4e86-b552-0ffc275a311d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"84f159722fd803a69f9b47d61b6c8ce9d0d1260db01efd0522e610e66e3a80e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:15.543323 kubelet[3396]: E0115 23:51:15.543296 3396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84f159722fd803a69f9b47d61b6c8ce9d0d1260db01efd0522e610e66e3a80e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:15.543447 kubelet[3396]: E0115 23:51:15.543415 3396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84f159722fd803a69f9b47d61b6c8ce9d0d1260db01efd0522e610e66e3a80e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cd968bf64-ntvvk" Jan 15 23:51:15.543636 kubelet[3396]: E0115 23:51:15.543479 3396 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84f159722fd803a69f9b47d61b6c8ce9d0d1260db01efd0522e610e66e3a80e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cd968bf64-ntvvk" Jan 15 23:51:15.543698 kubelet[3396]: E0115 23:51:15.543596 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5cd968bf64-ntvvk_calico-system(a055550a-6ee1-4e86-b552-0ffc275a311d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-5cd968bf64-ntvvk_calico-system(a055550a-6ee1-4e86-b552-0ffc275a311d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84f159722fd803a69f9b47d61b6c8ce9d0d1260db01efd0522e610e66e3a80e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5cd968bf64-ntvvk" podUID="a055550a-6ee1-4e86-b552-0ffc275a311d" Jan 15 23:51:15.548001 containerd[1898]: time="2026-01-15T23:51:15.547953287Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7kj8l,Uid:5ed7058e-8f78-4fb6-9780-d5925ccb54c0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"70109753a2efca8d1763def51752e94614544c604cb9ef1c2f309dd147d7dd5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:15.548190 kubelet[3396]: E0115 23:51:15.548154 3396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70109753a2efca8d1763def51752e94614544c604cb9ef1c2f309dd147d7dd5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:15.548243 kubelet[3396]: E0115 23:51:15.548201 3396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70109753a2efca8d1763def51752e94614544c604cb9ef1c2f309dd147d7dd5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7kj8l" Jan 15 23:51:15.548243 kubelet[3396]: E0115 23:51:15.548215 3396 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70109753a2efca8d1763def51752e94614544c604cb9ef1c2f309dd147d7dd5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7kj8l" Jan 15 23:51:15.548290 kubelet[3396]: E0115 23:51:15.548247 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7kj8l_kube-system(5ed7058e-8f78-4fb6-9780-d5925ccb54c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7kj8l_kube-system(5ed7058e-8f78-4fb6-9780-d5925ccb54c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70109753a2efca8d1763def51752e94614544c604cb9ef1c2f309dd147d7dd5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7kj8l" podUID="5ed7058e-8f78-4fb6-9780-d5925ccb54c0" Jan 15 23:51:15.831939 kubelet[3396]: E0115 23:51:15.831810 3396 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: failed to sync secret cache: timed out waiting for the condition Jan 15 23:51:15.831939 kubelet[3396]: E0115 
23:51:15.831908 3396 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e8e2e68-e4b7-4282-8047-eae44af7b067-goldmane-key-pair podName:1e8e2e68-e4b7-4282-8047-eae44af7b067 nodeName:}" failed. No retries permitted until 2026-01-15 23:51:16.331887231 +0000 UTC m=+32.343315917 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/1e8e2e68-e4b7-4282-8047-eae44af7b067-goldmane-key-pair") pod "goldmane-666569f655-frkkk" (UID: "1e8e2e68-e4b7-4282-8047-eae44af7b067") : failed to sync secret cache: timed out waiting for the condition Jan 15 23:51:15.843524 kubelet[3396]: E0115 23:51:15.843303 3396 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 15 23:51:15.843524 kubelet[3396]: E0115 23:51:15.843341 3396 projected.go:194] Error preparing data for projected volume kube-api-access-zrt2j for pod calico-apiserver/calico-apiserver-6f777f44fd-jl6lj: failed to sync configmap cache: timed out waiting for the condition Jan 15 23:51:15.843524 kubelet[3396]: E0115 23:51:15.843402 3396 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18982541-a4f1-43eb-9f61-620f19f486a0-kube-api-access-zrt2j podName:18982541-a4f1-43eb-9f61-620f19f486a0 nodeName:}" failed. No retries permitted until 2026-01-15 23:51:16.34338485 +0000 UTC m=+32.354813536 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zrt2j" (UniqueName: "kubernetes.io/projected/18982541-a4f1-43eb-9f61-620f19f486a0-kube-api-access-zrt2j") pod "calico-apiserver-6f777f44fd-jl6lj" (UID: "18982541-a4f1-43eb-9f61-620f19f486a0") : failed to sync configmap cache: timed out waiting for the condition Jan 15 23:51:15.844511 kubelet[3396]: E0115 23:51:15.844394 3396 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 15 23:51:15.844511 kubelet[3396]: E0115 23:51:15.844427 3396 projected.go:194] Error preparing data for projected volume kube-api-access-p282p for pod calico-apiserver/calico-apiserver-6f777f44fd-2tvx8: failed to sync configmap cache: timed out waiting for the condition Jan 15 23:51:15.844511 kubelet[3396]: E0115 23:51:15.844472 3396 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e-kube-api-access-p282p podName:9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e nodeName:}" failed. No retries permitted until 2026-01-15 23:51:16.34445796 +0000 UTC m=+32.355886646 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p282p" (UniqueName: "kubernetes.io/projected/9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e-kube-api-access-p282p") pod "calico-apiserver-6f777f44fd-2tvx8" (UID: "9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e") : failed to sync configmap cache: timed out waiting for the condition Jan 15 23:51:16.143646 systemd[1]: Created slice kubepods-besteffort-pod51d6de9e_9409_4d26_91e2_95ebd2fa7a0f.slice - libcontainer container kubepods-besteffort-pod51d6de9e_9409_4d26_91e2_95ebd2fa7a0f.slice. 
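
The MountVolume.SetUp failures above are not fatal: nestedpendingoperations records each failure and gates the next attempt behind an exponential backoff, which is where "No retries permitted until ... (durationBeforeRetry 500ms)" comes from; the wait doubles on repeated failures up to a cap. A toy model of that schedule follows; the 500ms initial duration comes from the log, while the doubling and the cap are assumptions about the kubelet's defaults:

```go
package main

import (
	"fmt"
	"time"
)

// backoff mimics the volume manager's per-operation retry gate: after each
// failure the wait doubles, capped at maxDuration, and the operation is
// refused until lastError + duration has passed.
type backoff struct {
	duration    time.Duration
	maxDuration time.Duration
	lastError   time.Time
}

func (b *backoff) fail(now time.Time) {
	if b.duration == 0 {
		b.duration = 500 * time.Millisecond // first retry, as in the log
	} else {
		b.duration *= 2
		if b.duration > b.maxDuration {
			b.duration = b.maxDuration
		}
	}
	b.lastError = now
}

func (b *backoff) retryAllowedAt() time.Time { return b.lastError.Add(b.duration) }

func main() {
	b := &backoff{maxDuration: 2 * time.Minute}
	now := time.Now()
	for i := 0; i < 4; i++ {
		b.fail(now)
		fmt.Printf("failure %d: no retries permitted until %s (durationBeforeRetry %s)\n",
			i+1, b.retryAllowedAt().Format(time.RFC3339), b.duration)
		now = b.retryAllowedAt()
	}
}
```
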
Jan 15 23:51:16.146124 containerd[1898]: time="2026-01-15T23:51:16.146097099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jjz87,Uid:51d6de9e-9409-4d26-91e2-95ebd2fa7a0f,Namespace:calico-system,Attempt:0,}" Jan 15 23:51:16.194172 containerd[1898]: time="2026-01-15T23:51:16.194034480Z" level=error msg="Failed to destroy network for sandbox \"ab1bf8f55aa7c2694346a7960f0d89df1ab2f8ec0a5b1ffd7cf8e823428c0e63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:16.195667 systemd[1]: run-netns-cni\x2deae3a6bc\x2d9b70\x2dd66f\x2d57b2\x2d342f841c9c3a.mount: Deactivated successfully. Jan 15 23:51:16.198430 containerd[1898]: time="2026-01-15T23:51:16.197915966Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jjz87,Uid:51d6de9e-9409-4d26-91e2-95ebd2fa7a0f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab1bf8f55aa7c2694346a7960f0d89df1ab2f8ec0a5b1ffd7cf8e823428c0e63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:16.198609 kubelet[3396]: E0115 23:51:16.198554 3396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab1bf8f55aa7c2694346a7960f0d89df1ab2f8ec0a5b1ffd7cf8e823428c0e63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:16.198692 kubelet[3396]: E0115 23:51:16.198643 3396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab1bf8f55aa7c2694346a7960f0d89df1ab2f8ec0a5b1ffd7cf8e823428c0e63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jjz87" Jan 15 23:51:16.198692 kubelet[3396]: E0115 23:51:16.198660 3396 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab1bf8f55aa7c2694346a7960f0d89df1ab2f8ec0a5b1ffd7cf8e823428c0e63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jjz87" Jan 15 23:51:16.198784 kubelet[3396]: E0115 23:51:16.198700 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jjz87_calico-system(51d6de9e-9409-4d26-91e2-95ebd2fa7a0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jjz87_calico-system(51d6de9e-9409-4d26-91e2-95ebd2fa7a0f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab1bf8f55aa7c2694346a7960f0d89df1ab2f8ec0a5b1ffd7cf8e823428c0e63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jjz87" podUID="51d6de9e-9409-4d26-91e2-95ebd2fa7a0f" 
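
Every sandbox failure in this stretch bottoms out in the same stat: the Calico CNI plugin refuses to add or delete pod networks until calico-node has written /var/lib/calico/nodename, so every RunPodSandbox attempt fails with "no such file or directory" until that DaemonSet container is running. Below is a sketch of the readiness gate the plugin's error message describes; the file path is from the log, the check itself is an assumption:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"strings"
)

// nodeNameFromFile is the gate the Calico CNI plugin's error describes:
// without /var/lib/calico/nodename (written by the calico/node container),
// no pod network can be set up or torn down on this host.
func nodeNameFromFile(path string) (string, error) {
	data, err := os.ReadFile(path)
	if errors.Is(err, fs.ErrNotExist) {
		return "", fmt.Errorf("stat %s: no such file or directory: "+
			"check that the calico/node container is running and has mounted /var/lib/calico/", path)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodeNameFromFile("/var/lib/calico/nodename")
	if err != nil {
		fmt.Println(err) // the state of this node throughout 23:51:15-23:51:29
		return
	}
	fmt.Println("calico node name:", name)
}
```
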
Jan 15 23:51:16.249385 containerd[1898]: time="2026-01-15T23:51:16.249322143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 15 23:51:16.464170 containerd[1898]: time="2026-01-15T23:51:16.463765323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f777f44fd-jl6lj,Uid:18982541-a4f1-43eb-9f61-620f19f486a0,Namespace:calico-apiserver,Attempt:0,}" Jan 15 23:51:16.484125 containerd[1898]: time="2026-01-15T23:51:16.484086102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f777f44fd-2tvx8,Uid:9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e,Namespace:calico-apiserver,Attempt:0,}" Jan 15 23:51:16.484422 containerd[1898]: time="2026-01-15T23:51:16.484218460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-frkkk,Uid:1e8e2e68-e4b7-4282-8047-eae44af7b067,Namespace:calico-system,Attempt:0,}" Jan 15 23:51:16.525834 containerd[1898]: time="2026-01-15T23:51:16.525712118Z" level=error msg="Failed to destroy network for sandbox \"16722aacf9fd2ba467ccd1c679c6c13385263ac45090c46022f5a61c1f6f3fb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:16.545436 containerd[1898]: time="2026-01-15T23:51:16.545387461Z" level=error msg="Failed to destroy network for sandbox \"438aa8f5c987f4126458ff63a0a925099ab474c3d64532ab844dfcf58e6ec5c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:16.553287 containerd[1898]: time="2026-01-15T23:51:16.553240908Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f777f44fd-jl6lj,Uid:18982541-a4f1-43eb-9f61-620f19f486a0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"16722aacf9fd2ba467ccd1c679c6c13385263ac45090c46022f5a61c1f6f3fb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:16.553865 kubelet[3396]: E0115 23:51:16.553828 3396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16722aacf9fd2ba467ccd1c679c6c13385263ac45090c46022f5a61c1f6f3fb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:16.554533 kubelet[3396]: E0115 23:51:16.554212 3396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16722aacf9fd2ba467ccd1c679c6c13385263ac45090c46022f5a61c1f6f3fb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f777f44fd-jl6lj" Jan 15 23:51:16.554533 kubelet[3396]: E0115 23:51:16.554240 3396 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16722aacf9fd2ba467ccd1c679c6c13385263ac45090c46022f5a61c1f6f3fb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f777f44fd-jl6lj" Jan 15 23:51:16.554533 kubelet[3396]: E0115 23:51:16.554288 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f777f44fd-jl6lj_calico-apiserver(18982541-a4f1-43eb-9f61-620f19f486a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f777f44fd-jl6lj_calico-apiserver(18982541-a4f1-43eb-9f61-620f19f486a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16722aacf9fd2ba467ccd1c679c6c13385263ac45090c46022f5a61c1f6f3fb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f777f44fd-jl6lj" podUID="18982541-a4f1-43eb-9f61-620f19f486a0" Jan 15 23:51:16.558657 containerd[1898]: time="2026-01-15T23:51:16.558617673Z" level=error msg="Failed to destroy network for sandbox \"ddd7bc61e221c3ed00a98dbb40dc1c9ecd3ddd1c87f715a818d015381f712e54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:16.560449 containerd[1898]: time="2026-01-15T23:51:16.560342315Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-frkkk,Uid:1e8e2e68-e4b7-4282-8047-eae44af7b067,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"438aa8f5c987f4126458ff63a0a925099ab474c3d64532ab844dfcf58e6ec5c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:16.560584 kubelet[3396]: E0115 23:51:16.560562 3396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"438aa8f5c987f4126458ff63a0a925099ab474c3d64532ab844dfcf58e6ec5c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:16.560651 kubelet[3396]: E0115 23:51:16.560613 3396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"438aa8f5c987f4126458ff63a0a925099ab474c3d64532ab844dfcf58e6ec5c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-frkkk" Jan 15 23:51:16.560651 kubelet[3396]: E0115 23:51:16.560632 3396 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"438aa8f5c987f4126458ff63a0a925099ab474c3d64532ab844dfcf58e6ec5c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-frkkk" Jan 15 23:51:16.560703 kubelet[3396]: E0115 23:51:16.560678 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"goldmane-666569f655-frkkk_calico-system(1e8e2e68-e4b7-4282-8047-eae44af7b067)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-frkkk_calico-system(1e8e2e68-e4b7-4282-8047-eae44af7b067)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"438aa8f5c987f4126458ff63a0a925099ab474c3d64532ab844dfcf58e6ec5c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-frkkk" podUID="1e8e2e68-e4b7-4282-8047-eae44af7b067" Jan 15 23:51:16.565090 containerd[1898]: time="2026-01-15T23:51:16.564894861Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f777f44fd-2tvx8,Uid:9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddd7bc61e221c3ed00a98dbb40dc1c9ecd3ddd1c87f715a818d015381f712e54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:16.565333 kubelet[3396]: E0115 23:51:16.565182 3396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddd7bc61e221c3ed00a98dbb40dc1c9ecd3ddd1c87f715a818d015381f712e54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:16.565333 kubelet[3396]: E0115 23:51:16.565240 3396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddd7bc61e221c3ed00a98dbb40dc1c9ecd3ddd1c87f715a818d015381f712e54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f777f44fd-2tvx8" Jan 15 23:51:16.565333 kubelet[3396]: E0115 23:51:16.565255 3396 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddd7bc61e221c3ed00a98dbb40dc1c9ecd3ddd1c87f715a818d015381f712e54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f777f44fd-2tvx8" Jan 15 23:51:16.565585 kubelet[3396]: E0115 23:51:16.565362 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f777f44fd-2tvx8_calico-apiserver(9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f777f44fd-2tvx8_calico-apiserver(9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ddd7bc61e221c3ed00a98dbb40dc1c9ecd3ddd1c87f715a818d015381f712e54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f777f44fd-2tvx8" 
podUID="9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e" Jan 15 23:51:24.790053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3321424734.mount: Deactivated successfully. Jan 15 23:51:27.139280 containerd[1898]: time="2026-01-15T23:51:27.139225128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-frkkk,Uid:1e8e2e68-e4b7-4282-8047-eae44af7b067,Namespace:calico-system,Attempt:0,}" Jan 15 23:51:27.139750 containerd[1898]: time="2026-01-15T23:51:27.139229313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cd968bf64-ntvvk,Uid:a055550a-6ee1-4e86-b552-0ffc275a311d,Namespace:calico-system,Attempt:0,}" Jan 15 23:51:29.139296 containerd[1898]: time="2026-01-15T23:51:29.139233035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g4xs2,Uid:087bf8a4-3520-43d2-81c1-cb6f45055422,Namespace:kube-system,Attempt:0,}" Jan 15 23:51:29.852041 containerd[1898]: time="2026-01-15T23:51:29.851973237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:51:29.859617 containerd[1898]: time="2026-01-15T23:51:29.859481823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 15 23:51:29.862229 containerd[1898]: time="2026-01-15T23:51:29.862117618Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:51:29.867372 containerd[1898]: time="2026-01-15T23:51:29.866732313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:51:29.867372 containerd[1898]: time="2026-01-15T23:51:29.867148304Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 13.616794373s" Jan 15 23:51:29.867372 containerd[1898]: time="2026-01-15T23:51:29.867170312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 15 23:51:29.885272 containerd[1898]: time="2026-01-15T23:51:29.885218982Z" level=info msg="CreateContainer within sandbox \"7dd73c0f489d00839131cb129be02b525246c83b7e12bbdc0eedc880af66ec1b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 15 23:51:29.912893 containerd[1898]: time="2026-01-15T23:51:29.912832726Z" level=error msg="Failed to destroy network for sandbox \"9a38fdd0e4f2d805e9c2e6c07bf170029fa21303e571063102c29c6b9cd521fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:29.920929 containerd[1898]: time="2026-01-15T23:51:29.920884291Z" level=info msg="Container 8568649dac56271174e0347da5f64feccbc04e36a03c4b940fe5bb2dd584c223: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:51:29.924619 containerd[1898]: time="2026-01-15T23:51:29.924573715Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-frkkk,Uid:1e8e2e68-e4b7-4282-8047-eae44af7b067,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a38fdd0e4f2d805e9c2e6c07bf170029fa21303e571063102c29c6b9cd521fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:29.926239 kubelet[3396]: E0115 23:51:29.925978 3396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a38fdd0e4f2d805e9c2e6c07bf170029fa21303e571063102c29c6b9cd521fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:29.926239 kubelet[3396]: E0115 23:51:29.926165 3396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a38fdd0e4f2d805e9c2e6c07bf170029fa21303e571063102c29c6b9cd521fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-frkkk" Jan 15 23:51:29.926239 kubelet[3396]: E0115 23:51:29.926184 3396 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a38fdd0e4f2d805e9c2e6c07bf170029fa21303e571063102c29c6b9cd521fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-frkkk" Jan 15 23:51:29.927066 kubelet[3396]: E0115 23:51:29.926559 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-frkkk_calico-system(1e8e2e68-e4b7-4282-8047-eae44af7b067)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-frkkk_calico-system(1e8e2e68-e4b7-4282-8047-eae44af7b067)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a38fdd0e4f2d805e9c2e6c07bf170029fa21303e571063102c29c6b9cd521fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-frkkk" podUID="1e8e2e68-e4b7-4282-8047-eae44af7b067" Jan 15 23:51:29.928665 containerd[1898]: time="2026-01-15T23:51:29.928606854Z" level=error msg="Failed to destroy network for sandbox \"55aab478b82e0326b57f82b3808953075d39fc2fe745d5a987fbf528b3cca34d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:29.932178 containerd[1898]: time="2026-01-15T23:51:29.932143839Z" level=error msg="Failed to destroy network for sandbox \"6476f1695897d0cea945da6ea1ae82b2ed875b33b96f24353f8e8759011e3641\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:29.938470 containerd[1898]: 
time="2026-01-15T23:51:29.938343805Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cd968bf64-ntvvk,Uid:a055550a-6ee1-4e86-b552-0ffc275a311d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"55aab478b82e0326b57f82b3808953075d39fc2fe745d5a987fbf528b3cca34d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:29.938638 kubelet[3396]: E0115 23:51:29.938602 3396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55aab478b82e0326b57f82b3808953075d39fc2fe745d5a987fbf528b3cca34d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:29.938676 kubelet[3396]: E0115 23:51:29.938660 3396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55aab478b82e0326b57f82b3808953075d39fc2fe745d5a987fbf528b3cca34d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cd968bf64-ntvvk" Jan 15 23:51:29.938695 kubelet[3396]: E0115 23:51:29.938676 3396 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55aab478b82e0326b57f82b3808953075d39fc2fe745d5a987fbf528b3cca34d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cd968bf64-ntvvk" Jan 15 23:51:29.938725 kubelet[3396]: E0115 23:51:29.938709 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5cd968bf64-ntvvk_calico-system(a055550a-6ee1-4e86-b552-0ffc275a311d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5cd968bf64-ntvvk_calico-system(a055550a-6ee1-4e86-b552-0ffc275a311d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55aab478b82e0326b57f82b3808953075d39fc2fe745d5a987fbf528b3cca34d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5cd968bf64-ntvvk" podUID="a055550a-6ee1-4e86-b552-0ffc275a311d" Jan 15 23:51:29.960498 containerd[1898]: time="2026-01-15T23:51:29.960352884Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g4xs2,Uid:087bf8a4-3520-43d2-81c1-cb6f45055422,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6476f1695897d0cea945da6ea1ae82b2ed875b33b96f24353f8e8759011e3641\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:29.960676 kubelet[3396]: E0115 23:51:29.960635 3396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6476f1695897d0cea945da6ea1ae82b2ed875b33b96f24353f8e8759011e3641\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:29.960711 kubelet[3396]: E0115 23:51:29.960701 3396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6476f1695897d0cea945da6ea1ae82b2ed875b33b96f24353f8e8759011e3641\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g4xs2" Jan 15 23:51:29.960742 kubelet[3396]: E0115 23:51:29.960717 3396 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6476f1695897d0cea945da6ea1ae82b2ed875b33b96f24353f8e8759011e3641\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g4xs2" Jan 15 23:51:29.960798 kubelet[3396]: E0115 23:51:29.960755 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-g4xs2_kube-system(087bf8a4-3520-43d2-81c1-cb6f45055422)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-g4xs2_kube-system(087bf8a4-3520-43d2-81c1-cb6f45055422)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6476f1695897d0cea945da6ea1ae82b2ed875b33b96f24353f8e8759011e3641\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-g4xs2" podUID="087bf8a4-3520-43d2-81c1-cb6f45055422" Jan 15 23:51:30.009995 containerd[1898]: time="2026-01-15T23:51:30.009855749Z" level=info msg="CreateContainer within sandbox \"7dd73c0f489d00839131cb129be02b525246c83b7e12bbdc0eedc880af66ec1b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8568649dac56271174e0347da5f64feccbc04e36a03c4b940fe5bb2dd584c223\"" Jan 15 23:51:30.010985 containerd[1898]: time="2026-01-15T23:51:30.010873352Z" level=info msg="StartContainer for \"8568649dac56271174e0347da5f64feccbc04e36a03c4b940fe5bb2dd584c223\"" Jan 15 23:51:30.012651 containerd[1898]: time="2026-01-15T23:51:30.012600124Z" level=info msg="connecting to shim 8568649dac56271174e0347da5f64feccbc04e36a03c4b940fe5bb2dd584c223" address="unix:///run/containerd/s/da979846a716fdba120cbe2a84e6412c063a3269e8302513db1c27237b2ba264" protocol=ttrpc version=3 Jan 15 23:51:30.030665 systemd[1]: Started cri-containerd-8568649dac56271174e0347da5f64feccbc04e36a03c4b940fe5bb2dd584c223.scope - libcontainer container 8568649dac56271174e0347da5f64feccbc04e36a03c4b940fe5bb2dd584c223. 
Jan 15 23:51:30.098702 containerd[1898]: time="2026-01-15T23:51:30.098604384Z" level=info msg="StartContainer for \"8568649dac56271174e0347da5f64feccbc04e36a03c4b940fe5bb2dd584c223\" returns successfully" Jan 15 23:51:30.140347 containerd[1898]: time="2026-01-15T23:51:30.140187329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f78775f8f-kvtpl,Uid:34651363-b157-4818-abe1-8475b0e41983,Namespace:calico-system,Attempt:0,}" Jan 15 23:51:30.201973 containerd[1898]: time="2026-01-15T23:51:30.201917801Z" level=error msg="Failed to destroy network for sandbox \"38f063f7291bcfea6745be82246c23ed6d3e13f4acc64f684ff20a2f8b5c9d09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:30.209589 containerd[1898]: time="2026-01-15T23:51:30.209429145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f78775f8f-kvtpl,Uid:34651363-b157-4818-abe1-8475b0e41983,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"38f063f7291bcfea6745be82246c23ed6d3e13f4acc64f684ff20a2f8b5c9d09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:30.210939 kubelet[3396]: E0115 23:51:30.210756 3396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38f063f7291bcfea6745be82246c23ed6d3e13f4acc64f684ff20a2f8b5c9d09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:51:30.210939 kubelet[3396]: E0115 23:51:30.210813 3396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38f063f7291bcfea6745be82246c23ed6d3e13f4acc64f684ff20a2f8b5c9d09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f78775f8f-kvtpl" Jan 15 23:51:30.210939 kubelet[3396]: E0115 23:51:30.210828 3396 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38f063f7291bcfea6745be82246c23ed6d3e13f4acc64f684ff20a2f8b5c9d09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f78775f8f-kvtpl" Jan 15 23:51:30.211068 kubelet[3396]: E0115 23:51:30.210862 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f78775f8f-kvtpl_calico-system(34651363-b157-4818-abe1-8475b0e41983)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f78775f8f-kvtpl_calico-system(34651363-b157-4818-abe1-8475b0e41983)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38f063f7291bcfea6745be82246c23ed6d3e13f4acc64f684ff20a2f8b5c9d09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f78775f8f-kvtpl" podUID="34651363-b157-4818-abe1-8475b0e41983" Jan 15 23:51:30.298027 kubelet[3396]: I0115 23:51:30.297926 3396 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dhdvm" podStartSLOduration=0.949957541 podStartE2EDuration="24.297909339s" podCreationTimestamp="2026-01-15 23:51:06 +0000 UTC" firstStartedPulling="2026-01-15 23:51:06.520795769 +0000 UTC m=+22.532224455" lastFinishedPulling="2026-01-15 23:51:29.868747559 +0000 UTC m=+45.880176253" observedRunningTime="2026-01-15 23:51:30.297671417 +0000 UTC m=+46.309100111" watchObservedRunningTime="2026-01-15 23:51:30.297909339 +0000 UTC m=+46.309338033" Jan 15 23:51:30.312184 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 15 23:51:30.312305 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 15 23:51:30.518107 kubelet[3396]: I0115 23:51:30.517734 3396 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a055550a-6ee1-4e86-b552-0ffc275a311d-whisker-backend-key-pair\") pod \"a055550a-6ee1-4e86-b552-0ffc275a311d\" (UID: \"a055550a-6ee1-4e86-b552-0ffc275a311d\") " Jan 15 23:51:30.518107 kubelet[3396]: I0115 23:51:30.517769 3396 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a055550a-6ee1-4e86-b552-0ffc275a311d-whisker-ca-bundle\") pod \"a055550a-6ee1-4e86-b552-0ffc275a311d\" (UID: \"a055550a-6ee1-4e86-b552-0ffc275a311d\") " Jan 15 23:51:30.518107 kubelet[3396]: I0115 23:51:30.517798 3396 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r7kk\" (UniqueName: \"kubernetes.io/projected/a055550a-6ee1-4e86-b552-0ffc275a311d-kube-api-access-2r7kk\") pod \"a055550a-6ee1-4e86-b552-0ffc275a311d\" (UID: \"a055550a-6ee1-4e86-b552-0ffc275a311d\") " Jan 15 23:51:30.519173 kubelet[3396]: I0115 23:51:30.519046 3396 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a055550a-6ee1-4e86-b552-0ffc275a311d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a055550a-6ee1-4e86-b552-0ffc275a311d" (UID: "a055550a-6ee1-4e86-b552-0ffc275a311d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 15 23:51:30.522641 kubelet[3396]: I0115 23:51:30.522603 3396 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a055550a-6ee1-4e86-b552-0ffc275a311d-kube-api-access-2r7kk" (OuterVolumeSpecName: "kube-api-access-2r7kk") pod "a055550a-6ee1-4e86-b552-0ffc275a311d" (UID: "a055550a-6ee1-4e86-b552-0ffc275a311d"). InnerVolumeSpecName "kube-api-access-2r7kk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 15 23:51:30.523282 kubelet[3396]: I0115 23:51:30.523229 3396 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a055550a-6ee1-4e86-b552-0ffc275a311d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a055550a-6ee1-4e86-b552-0ffc275a311d" (UID: "a055550a-6ee1-4e86-b552-0ffc275a311d"). InnerVolumeSpecName "whisker-backend-key-pair".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 15 23:51:30.618809 kubelet[3396]: I0115 23:51:30.618763 3396 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a055550a-6ee1-4e86-b552-0ffc275a311d-whisker-backend-key-pair\") on node \"ci-4459.2.2-n-5fd64d3fe1\" DevicePath \"\"" Jan 15 23:51:30.618809 kubelet[3396]: I0115 23:51:30.618802 3396 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a055550a-6ee1-4e86-b552-0ffc275a311d-whisker-ca-bundle\") on node \"ci-4459.2.2-n-5fd64d3fe1\" DevicePath \"\"" Jan 15 23:51:30.618809 kubelet[3396]: I0115 23:51:30.618809 3396 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2r7kk\" (UniqueName: \"kubernetes.io/projected/a055550a-6ee1-4e86-b552-0ffc275a311d-kube-api-access-2r7kk\") on node \"ci-4459.2.2-n-5fd64d3fe1\" DevicePath \"\"" Jan 15 23:51:30.817032 systemd[1]: run-netns-cni\x2deb486173\x2d59e1\x2d1f3a\x2d9311\x2d55613971f08b.mount: Deactivated successfully. Jan 15 23:51:30.817107 systemd[1]: run-netns-cni\x2d50738dbc\x2de8a8\x2d7023\x2da77d\x2d7500592b03b0.mount: Deactivated successfully. Jan 15 23:51:30.817145 systemd[1]: run-netns-cni\x2d2a49530d\x2d2e82\x2de074\x2dd1e2\x2d3fed08c5bbb5.mount: Deactivated successfully. Jan 15 23:51:30.817185 systemd[1]: var-lib-kubelet-pods-a055550a\x2d6ee1\x2d4e86\x2db552\x2d0ffc275a311d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2r7kk.mount: Deactivated successfully. Jan 15 23:51:30.817229 systemd[1]: var-lib-kubelet-pods-a055550a\x2d6ee1\x2d4e86\x2db552\x2d0ffc275a311d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 15 23:51:31.139882 containerd[1898]: time="2026-01-15T23:51:31.139587977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f777f44fd-2tvx8,Uid:9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e,Namespace:calico-apiserver,Attempt:0,}" Jan 15 23:51:31.139882 containerd[1898]: time="2026-01-15T23:51:31.139695462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7kj8l,Uid:5ed7058e-8f78-4fb6-9780-d5925ccb54c0,Namespace:kube-system,Attempt:0,}" Jan 15 23:51:31.140462 containerd[1898]: time="2026-01-15T23:51:31.140430420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jjz87,Uid:51d6de9e-9409-4d26-91e2-95ebd2fa7a0f,Namespace:calico-system,Attempt:0,}" Jan 15 23:51:31.285552 systemd[1]: Removed slice kubepods-besteffort-poda055550a_6ee1_4e86_b552_0ffc275a311d.slice - libcontainer container kubepods-besteffort-poda055550a_6ee1_4e86_b552_0ffc275a311d.slice. 
Jan 15 23:51:31.333095 systemd-networkd[1477]: cali96cbf732c34: Link UP Jan 15 23:51:31.334258 systemd-networkd[1477]: cali96cbf732c34: Gained carrier Jan 15 23:51:31.371111 containerd[1898]: 2026-01-15 23:51:31.183 [INFO][4618] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 15 23:51:31.371111 containerd[1898]: 2026-01-15 23:51:31.210 [INFO][4618] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--n--5fd64d3fe1-k8s-csi--node--driver--jjz87-eth0 csi-node-driver- calico-system 51d6de9e-9409-4d26-91e2-95ebd2fa7a0f 689 0 2026-01-15 23:51:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.2-n-5fd64d3fe1 csi-node-driver-jjz87 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali96cbf732c34 [] [] }} ContainerID="31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6" Namespace="calico-system" Pod="csi-node-driver-jjz87" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-csi--node--driver--jjz87-" Jan 15 23:51:31.371111 containerd[1898]: 2026-01-15 23:51:31.210 [INFO][4618] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6" Namespace="calico-system" Pod="csi-node-driver-jjz87" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-csi--node--driver--jjz87-eth0" Jan 15 23:51:31.371111 containerd[1898]: 2026-01-15 23:51:31.245 [INFO][4640] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6" HandleID="k8s-pod-network.31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6" Workload="ci--4459.2.2--n--5fd64d3fe1-k8s-csi--node--driver--jjz87-eth0" Jan 15 23:51:31.371361 containerd[1898]: 2026-01-15 23:51:31.245 [INFO][4640] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6" HandleID="k8s-pod-network.31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6" Workload="ci--4459.2.2--n--5fd64d3fe1-k8s-csi--node--driver--jjz87-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-n-5fd64d3fe1", "pod":"csi-node-driver-jjz87", "timestamp":"2026-01-15 23:51:31.245176739 +0000 UTC"}, Hostname:"ci-4459.2.2-n-5fd64d3fe1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 23:51:31.371361 containerd[1898]: 2026-01-15 23:51:31.245 [INFO][4640] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 23:51:31.371361 containerd[1898]: 2026-01-15 23:51:31.245 [INFO][4640] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 23:51:31.371361 containerd[1898]: 2026-01-15 23:51:31.245 [INFO][4640] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-5fd64d3fe1' Jan 15 23:51:31.371361 containerd[1898]: 2026-01-15 23:51:31.253 [INFO][4640] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.371361 containerd[1898]: 2026-01-15 23:51:31.257 [INFO][4640] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.371361 containerd[1898]: 2026-01-15 23:51:31.261 [INFO][4640] ipam/ipam.go 511: Trying affinity for 192.168.109.64/26 host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.371361 containerd[1898]: 2026-01-15 23:51:31.263 [INFO][4640] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.64/26 host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.371361 containerd[1898]: 2026-01-15 23:51:31.264 [INFO][4640] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.371513 containerd[1898]: 2026-01-15 23:51:31.264 [INFO][4640] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.371513 containerd[1898]: 2026-01-15 23:51:31.266 [INFO][4640] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6 Jan 15 23:51:31.371513 containerd[1898]: 2026-01-15 23:51:31.275 [INFO][4640] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.371513 containerd[1898]: 2026-01-15 23:51:31.288 [INFO][4640] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.109.65/26] block=192.168.109.64/26 handle="k8s-pod-network.31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.371513 containerd[1898]: 2026-01-15 23:51:31.288 [INFO][4640] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.65/26] handle="k8s-pod-network.31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.371513 containerd[1898]: 2026-01-15 23:51:31.288 [INFO][4640] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
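This IPAM walk is Calico's block-affinity scheme: the node holds an affinity for the /26 block 192.168.109.64/26 (64 addresses, .64 through .127) and assigns pod IPs out of it under the host-wide lock, which is why this walk yields 192.168.109.65 and the two walks below yield .66 and .67 in sequence. A short net/netip sketch of the block arithmetic:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.109.64/26") // this node's affine block

	// The three pod IPs the log assigns all fall inside it.
	for _, s := range []string{"192.168.109.65", "192.168.109.66", "192.168.109.67"} {
		addr := netip.MustParseAddr(s)
		fmt.Println(addr, block.Contains(addr)) // true in all three cases
	}

	// Masking any pod IP with /26 recovers the owning block.
	owner := netip.PrefixFrom(netip.MustParseAddr("192.168.109.67"), 26).Masked()
	fmt.Println(owner) // 192.168.109.64/26
}
```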
Jan 15 23:51:31.371513 containerd[1898]: 2026-01-15 23:51:31.288 [INFO][4640] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.109.65/26] IPv6=[] ContainerID="31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6" HandleID="k8s-pod-network.31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6" Workload="ci--4459.2.2--n--5fd64d3fe1-k8s-csi--node--driver--jjz87-eth0" Jan 15 23:51:31.371612 containerd[1898]: 2026-01-15 23:51:31.297 [INFO][4618] cni-plugin/k8s.go 418: Populated endpoint ContainerID="31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6" Namespace="calico-system" Pod="csi-node-driver-jjz87" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-csi--node--driver--jjz87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--5fd64d3fe1-k8s-csi--node--driver--jjz87-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"51d6de9e-9409-4d26-91e2-95ebd2fa7a0f", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 51, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-5fd64d3fe1", ContainerID:"", Pod:"csi-node-driver-jjz87", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali96cbf732c34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:51:31.371985 containerd[1898]: 2026-01-15 23:51:31.298 [INFO][4618] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.65/32] ContainerID="31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6" Namespace="calico-system" Pod="csi-node-driver-jjz87" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-csi--node--driver--jjz87-eth0" Jan 15 23:51:31.371985 containerd[1898]: 2026-01-15 23:51:31.298 [INFO][4618] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96cbf732c34 ContainerID="31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6" Namespace="calico-system" Pod="csi-node-driver-jjz87" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-csi--node--driver--jjz87-eth0" Jan 15 23:51:31.371985 containerd[1898]: 2026-01-15 23:51:31.334 [INFO][4618] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6" Namespace="calico-system" Pod="csi-node-driver-jjz87" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-csi--node--driver--jjz87-eth0" Jan 15 23:51:31.372162 containerd[1898]: 2026-01-15 23:51:31.335 [INFO][4618] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6" Namespace="calico-system" Pod="csi-node-driver-jjz87" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-csi--node--driver--jjz87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--5fd64d3fe1-k8s-csi--node--driver--jjz87-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"51d6de9e-9409-4d26-91e2-95ebd2fa7a0f", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 51, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-5fd64d3fe1", ContainerID:"31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6", Pod:"csi-node-driver-jjz87", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali96cbf732c34", MAC:"4e:a9:ac:df:d8:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:51:31.372575 containerd[1898]: 2026-01-15 23:51:31.367 [INFO][4618] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6" Namespace="calico-system" Pod="csi-node-driver-jjz87" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-csi--node--driver--jjz87-eth0" Jan 15 23:51:31.386420 systemd[1]: Created slice kubepods-besteffort-pod15c0f7bb_159e_4e16_a598_2453bb733d6e.slice - libcontainer container kubepods-besteffort-pod15c0f7bb_159e_4e16_a598_2453bb733d6e.slice. 
Jan 15 23:51:31.424892 kubelet[3396]: I0115 23:51:31.424146 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/15c0f7bb-159e-4e16-a598-2453bb733d6e-whisker-backend-key-pair\") pod \"whisker-6cfc4fdc86-ln6hf\" (UID: \"15c0f7bb-159e-4e16-a598-2453bb733d6e\") " pod="calico-system/whisker-6cfc4fdc86-ln6hf" Jan 15 23:51:31.424892 kubelet[3396]: I0115 23:51:31.424182 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwsj8\" (UniqueName: \"kubernetes.io/projected/15c0f7bb-159e-4e16-a598-2453bb733d6e-kube-api-access-vwsj8\") pod \"whisker-6cfc4fdc86-ln6hf\" (UID: \"15c0f7bb-159e-4e16-a598-2453bb733d6e\") " pod="calico-system/whisker-6cfc4fdc86-ln6hf" Jan 15 23:51:31.424892 kubelet[3396]: I0115 23:51:31.424212 3396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15c0f7bb-159e-4e16-a598-2453bb733d6e-whisker-ca-bundle\") pod \"whisker-6cfc4fdc86-ln6hf\" (UID: \"15c0f7bb-159e-4e16-a598-2453bb733d6e\") " pod="calico-system/whisker-6cfc4fdc86-ln6hf" Jan 15 23:51:31.427880 containerd[1898]: time="2026-01-15T23:51:31.427685989Z" level=info msg="connecting to shim 31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6" address="unix:///run/containerd/s/b0ed5233cd20a7994fe68bfee5d12bab9a7fade7982886f84e487dc546d6c3a5" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:51:31.449725 systemd[1]: Started cri-containerd-31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6.scope - libcontainer container 31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6. 
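The `connecting to shim … protocol=ttrpc version=3` entries describe an ordinary Unix-domain socket with ttrpc framing on top. A stdlib-only probe, with the socket path copied from the log (it exists only on that host, so a failed dial anywhere else is the expected outcome):

```go
package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

func main() {
	// Shim address exactly as containerd logs it; strip the scheme
	// to get the filesystem path of the socket.
	addr := "unix:///run/containerd/s/b0ed5233cd20a7994fe68bfee5d12bab9a7fade7982886f84e487dc546d6c3a5"
	path := strings.TrimPrefix(addr, "unix://")

	conn, err := net.DialTimeout("unix", path, time.Second)
	if err != nil {
		fmt.Println("shim not reachable:", err) // expected anywhere but that node
		return
	}
	defer conn.Close()
	fmt.Println("connected; ttrpc framing would be spoken over this connection")
}
```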
Jan 15 23:51:31.463158 systemd-networkd[1477]: cali4291d7f41c8: Link UP Jan 15 23:51:31.464611 systemd-networkd[1477]: cali4291d7f41c8: Gained carrier Jan 15 23:51:31.489068 containerd[1898]: 2026-01-15 23:51:31.179 [INFO][4598] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 15 23:51:31.489068 containerd[1898]: 2026-01-15 23:51:31.210 [INFO][4598] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--2tvx8-eth0 calico-apiserver-6f777f44fd- calico-apiserver 9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e 802 0 2026-01-15 23:51:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f777f44fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-n-5fd64d3fe1 calico-apiserver-6f777f44fd-2tvx8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4291d7f41c8 [] [] }} ContainerID="47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0" Namespace="calico-apiserver" Pod="calico-apiserver-6f777f44fd-2tvx8" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--2tvx8-" Jan 15 23:51:31.489068 containerd[1898]: 2026-01-15 23:51:31.210 [INFO][4598] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0" Namespace="calico-apiserver" Pod="calico-apiserver-6f777f44fd-2tvx8" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--2tvx8-eth0" Jan 15 23:51:31.489068 containerd[1898]: 2026-01-15 23:51:31.245 [INFO][4638] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0" HandleID="k8s-pod-network.47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0" Workload="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--2tvx8-eth0" Jan 15 23:51:31.489368 containerd[1898]: 2026-01-15 23:51:31.245 [INFO][4638] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0" HandleID="k8s-pod-network.47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0" Workload="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--2tvx8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d38c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-n-5fd64d3fe1", "pod":"calico-apiserver-6f777f44fd-2tvx8", "timestamp":"2026-01-15 23:51:31.24500814 +0000 UTC"}, Hostname:"ci-4459.2.2-n-5fd64d3fe1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 23:51:31.489368 containerd[1898]: 2026-01-15 23:51:31.245 [INFO][4638] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 23:51:31.489368 containerd[1898]: 2026-01-15 23:51:31.289 [INFO][4638] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 23:51:31.489368 containerd[1898]: 2026-01-15 23:51:31.289 [INFO][4638] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-5fd64d3fe1' Jan 15 23:51:31.489368 containerd[1898]: 2026-01-15 23:51:31.353 [INFO][4638] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.489368 containerd[1898]: 2026-01-15 23:51:31.391 [INFO][4638] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.489368 containerd[1898]: 2026-01-15 23:51:31.416 [INFO][4638] ipam/ipam.go 511: Trying affinity for 192.168.109.64/26 host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.489368 containerd[1898]: 2026-01-15 23:51:31.427 [INFO][4638] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.64/26 host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.489368 containerd[1898]: 2026-01-15 23:51:31.430 [INFO][4638] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.489960 containerd[1898]: 2026-01-15 23:51:31.430 [INFO][4638] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.489960 containerd[1898]: 2026-01-15 23:51:31.432 [INFO][4638] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0 Jan 15 23:51:31.489960 containerd[1898]: 2026-01-15 23:51:31.436 [INFO][4638] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.489960 containerd[1898]: 2026-01-15 23:51:31.448 [INFO][4638] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.109.66/26] block=192.168.109.64/26 handle="k8s-pod-network.47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.489960 containerd[1898]: 2026-01-15 23:51:31.448 [INFO][4638] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.66/26] handle="k8s-pod-network.47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.489960 containerd[1898]: 2026-01-15 23:51:31.448 [INFO][4638] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 23:51:31.489960 containerd[1898]: 2026-01-15 23:51:31.448 [INFO][4638] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.109.66/26] IPv6=[] ContainerID="47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0" HandleID="k8s-pod-network.47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0" Workload="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--2tvx8-eth0" Jan 15 23:51:31.490258 containerd[1898]: 2026-01-15 23:51:31.454 [INFO][4598] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0" Namespace="calico-apiserver" Pod="calico-apiserver-6f777f44fd-2tvx8" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--2tvx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--2tvx8-eth0", GenerateName:"calico-apiserver-6f777f44fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 51, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f777f44fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-5fd64d3fe1", ContainerID:"", Pod:"calico-apiserver-6f777f44fd-2tvx8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4291d7f41c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:51:31.490306 containerd[1898]: 2026-01-15 23:51:31.454 [INFO][4598] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.66/32] ContainerID="47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0" Namespace="calico-apiserver" Pod="calico-apiserver-6f777f44fd-2tvx8" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--2tvx8-eth0" Jan 15 23:51:31.490306 containerd[1898]: 2026-01-15 23:51:31.455 [INFO][4598] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4291d7f41c8 ContainerID="47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0" Namespace="calico-apiserver" Pod="calico-apiserver-6f777f44fd-2tvx8" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--2tvx8-eth0" Jan 15 23:51:31.490306 containerd[1898]: 2026-01-15 23:51:31.467 [INFO][4598] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0" Namespace="calico-apiserver" Pod="calico-apiserver-6f777f44fd-2tvx8" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--2tvx8-eth0" Jan 15 23:51:31.490349 containerd[1898]: 2026-01-15 23:51:31.468 
[INFO][4598] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0" Namespace="calico-apiserver" Pod="calico-apiserver-6f777f44fd-2tvx8" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--2tvx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--2tvx8-eth0", GenerateName:"calico-apiserver-6f777f44fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 51, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f777f44fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-5fd64d3fe1", ContainerID:"47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0", Pod:"calico-apiserver-6f777f44fd-2tvx8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4291d7f41c8", MAC:"16:98:19:46:bb:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:51:31.490380 containerd[1898]: 2026-01-15 23:51:31.487 [INFO][4598] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0" Namespace="calico-apiserver" Pod="calico-apiserver-6f777f44fd-2tvx8" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--2tvx8-eth0" Jan 15 23:51:31.497461 containerd[1898]: time="2026-01-15T23:51:31.497423027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jjz87,Uid:51d6de9e-9409-4d26-91e2-95ebd2fa7a0f,Namespace:calico-system,Attempt:0,} returns sandbox id \"31c76a51b893ecfbb05f323b782ca3a0cda4fd668e05ffcfd2f5cda2e93262c6\"" Jan 15 23:51:31.499329 containerd[1898]: time="2026-01-15T23:51:31.499103633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 23:51:31.546442 systemd-networkd[1477]: cali203e4211090: Link UP Jan 15 23:51:31.546633 systemd-networkd[1477]: cali203e4211090: Gained carrier Jan 15 23:51:31.559726 containerd[1898]: time="2026-01-15T23:51:31.559674993Z" level=info msg="connecting to shim 47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0" address="unix:///run/containerd/s/4b4fcadd17d445747fd86e41ed3e229be42f7d5d8418c3eebc68acf6770bad47" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:51:31.571270 containerd[1898]: 2026-01-15 23:51:31.193 [INFO][4607] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 15 23:51:31.571270 containerd[1898]: 2026-01-15 23:51:31.210 [INFO][4607] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--7kj8l-eth0 coredns-668d6bf9bc- kube-system 5ed7058e-8f78-4fb6-9780-d5925ccb54c0 795 0 2026-01-15 23:50:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.2-n-5fd64d3fe1 coredns-668d6bf9bc-7kj8l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali203e4211090 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782" Namespace="kube-system" Pod="coredns-668d6bf9bc-7kj8l" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--7kj8l-" Jan 15 23:51:31.571270 containerd[1898]: 2026-01-15 23:51:31.210 [INFO][4607] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782" Namespace="kube-system" Pod="coredns-668d6bf9bc-7kj8l" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--7kj8l-eth0" Jan 15 23:51:31.571270 containerd[1898]: 2026-01-15 23:51:31.253 [INFO][4636] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782" HandleID="k8s-pod-network.be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782" Workload="ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--7kj8l-eth0" Jan 15 23:51:31.572220 containerd[1898]: 2026-01-15 23:51:31.253 [INFO][4636] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782" HandleID="k8s-pod-network.be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782" Workload="ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--7kj8l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.2-n-5fd64d3fe1", "pod":"coredns-668d6bf9bc-7kj8l", "timestamp":"2026-01-15 23:51:31.253305573 +0000 UTC"}, Hostname:"ci-4459.2.2-n-5fd64d3fe1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 23:51:31.572220 containerd[1898]: 2026-01-15 23:51:31.253 [INFO][4636] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 23:51:31.572220 containerd[1898]: 2026-01-15 23:51:31.448 [INFO][4636] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 23:51:31.572220 containerd[1898]: 2026-01-15 23:51:31.448 [INFO][4636] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-5fd64d3fe1' Jan 15 23:51:31.572220 containerd[1898]: 2026-01-15 23:51:31.464 [INFO][4636] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.572220 containerd[1898]: 2026-01-15 23:51:31.485 [INFO][4636] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.572220 containerd[1898]: 2026-01-15 23:51:31.513 [INFO][4636] ipam/ipam.go 511: Trying affinity for 192.168.109.64/26 host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.572220 containerd[1898]: 2026-01-15 23:51:31.515 [INFO][4636] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.64/26 host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.572220 containerd[1898]: 2026-01-15 23:51:31.517 [INFO][4636] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.572396 containerd[1898]: 2026-01-15 23:51:31.517 [INFO][4636] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.572396 containerd[1898]: 2026-01-15 23:51:31.519 [INFO][4636] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782 Jan 15 23:51:31.572396 containerd[1898]: 2026-01-15 23:51:31.530 [INFO][4636] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.572396 containerd[1898]: 2026-01-15 23:51:31.540 [INFO][4636] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.109.67/26] block=192.168.109.64/26 handle="k8s-pod-network.be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.572396 containerd[1898]: 2026-01-15 23:51:31.540 [INFO][4636] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.67/26] handle="k8s-pod-network.be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:31.572396 containerd[1898]: 2026-01-15 23:51:31.540 [INFO][4636] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 23:51:31.572396 containerd[1898]: 2026-01-15 23:51:31.540 [INFO][4636] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.109.67/26] IPv6=[] ContainerID="be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782" HandleID="k8s-pod-network.be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782" Workload="ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--7kj8l-eth0" Jan 15 23:51:31.572502 containerd[1898]: 2026-01-15 23:51:31.543 [INFO][4607] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782" Namespace="kube-system" Pod="coredns-668d6bf9bc-7kj8l" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--7kj8l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--7kj8l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5ed7058e-8f78-4fb6-9780-d5925ccb54c0", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 50, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-5fd64d3fe1", ContainerID:"", Pod:"coredns-668d6bf9bc-7kj8l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali203e4211090", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:51:31.572502 containerd[1898]: 2026-01-15 23:51:31.543 [INFO][4607] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.67/32] ContainerID="be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782" Namespace="kube-system" Pod="coredns-668d6bf9bc-7kj8l" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--7kj8l-eth0" Jan 15 23:51:31.572502 containerd[1898]: 2026-01-15 23:51:31.543 [INFO][4607] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali203e4211090 ContainerID="be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782" Namespace="kube-system" Pod="coredns-668d6bf9bc-7kj8l" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--7kj8l-eth0" Jan 15 23:51:31.572502 containerd[1898]: 2026-01-15 23:51:31.545 [INFO][4607] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-7kj8l" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--7kj8l-eth0" Jan 15 23:51:31.572502 containerd[1898]: 2026-01-15 23:51:31.546 [INFO][4607] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782" Namespace="kube-system" Pod="coredns-668d6bf9bc-7kj8l" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--7kj8l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--7kj8l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5ed7058e-8f78-4fb6-9780-d5925ccb54c0", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 50, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-5fd64d3fe1", ContainerID:"be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782", Pod:"coredns-668d6bf9bc-7kj8l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali203e4211090", MAC:"fe:14:21:f5:97:e0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:51:31.572502 containerd[1898]: 2026-01-15 23:51:31.567 [INFO][4607] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782" Namespace="kube-system" Pod="coredns-668d6bf9bc-7kj8l" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--7kj8l-eth0" Jan 15 23:51:31.587668 systemd[1]: Started cri-containerd-47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0.scope - libcontainer container 47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0. 
Jan 15 23:51:31.691958 containerd[1898]: time="2026-01-15T23:51:31.691806635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cfc4fdc86-ln6hf,Uid:15c0f7bb-159e-4e16-a598-2453bb733d6e,Namespace:calico-system,Attempt:0,}" Jan 15 23:51:31.800377 containerd[1898]: time="2026-01-15T23:51:31.800328983Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:51:32.139388 containerd[1898]: time="2026-01-15T23:51:32.138802603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f777f44fd-jl6lj,Uid:18982541-a4f1-43eb-9f61-620f19f486a0,Namespace:calico-apiserver,Attempt:0,}" Jan 15 23:51:32.141588 kubelet[3396]: I0115 23:51:32.141543 3396 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a055550a-6ee1-4e86-b552-0ffc275a311d" path="/var/lib/kubelet/pods/a055550a-6ee1-4e86-b552-0ffc275a311d/volumes" Jan 15 23:51:32.394731 systemd-networkd[1477]: cali96cbf732c34: Gained IPv6LL Jan 15 23:51:32.736744 systemd-networkd[1477]: vxlan.calico: Link UP Jan 15 23:51:32.736751 systemd-networkd[1477]: vxlan.calico: Gained carrier Jan 15 23:51:33.034730 systemd-networkd[1477]: cali203e4211090: Gained IPv6LL Jan 15 23:51:33.059248 containerd[1898]: time="2026-01-15T23:51:33.059134626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f777f44fd-2tvx8,Uid:9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"47c29bfce373255c585f1a38e34e71078611f29794d2f3f4578486c483a1d9e0\"" Jan 15 23:51:33.227004 systemd-networkd[1477]: cali4291d7f41c8: Gained IPv6LL Jan 15 23:51:33.866640 systemd-networkd[1477]: vxlan.calico: Gained IPv6LL Jan 15 23:51:33.960950 containerd[1898]: time="2026-01-15T23:51:33.960814058Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 23:51:33.960950 containerd[1898]: time="2026-01-15T23:51:33.960922054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 15 23:51:33.961470 kubelet[3396]: E0115 23:51:33.961170 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 23:51:33.961470 kubelet[3396]: E0115 23:51:33.961229 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 23:51:33.962416 containerd[1898]: time="2026-01-15T23:51:33.962262491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 23:51:33.966919 kubelet[3396]: E0115 23:51:33.966840 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68t72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jjz87_calico-system(51d6de9e-9409-4d26-91e2-95ebd2fa7a0f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 23:51:35.161691 containerd[1898]: time="2026-01-15T23:51:35.161639788Z" level=info msg="connecting to shim be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782" address="unix:///run/containerd/s/d4bcbf7b951982bd8e1a2abdf344afaf155198eb3f0649df3afaf1d9109f2213" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:51:35.183676 systemd[1]: Started cri-containerd-be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782.scope - libcontainer container be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782. 
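Every pull attempt in this journal fails the same way: containerd reports a 404 from ghcr.io and kubelet surfaces it as ErrImagePull, so the `v3.30.4` tags appear to be absent from the registry rather than blocked locally. One way to confirm that from outside the node is the Docker Registry v2 API; the sketch below assumes GHCR's anonymous token endpoint works for public repositories, which should be treated as an assumption rather than a documented guarantee:

```python
import json
import urllib.error
import urllib.request

def ghcr_tag_exists(repo: str, tag: str) -> bool:
    """Probe a tag on ghcr.io via the Registry v2 API.

    Assumes GHCR hands out anonymous pull tokens for public repos; a
    404 on the manifest is the same "not found" containerd logs above.
    """
    tok = json.load(urllib.request.urlopen(
        f"https://ghcr.io/token?scope=repository:{repo}:pull"))["token"]
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={"Authorization": f"Bearer {tok}",
                 "Accept": "application/vnd.oci.image.index.v1+json"})
    try:
        urllib.request.urlopen(req)
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # mirrors the 404 containerd reports above
        raise

print(ghcr_tag_exists("flatcar/calico/csi", "v3.30.4"))  # expect False per the log
```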
Jan 15 23:51:36.367411 containerd[1898]: time="2026-01-15T23:51:36.366931704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7kj8l,Uid:5ed7058e-8f78-4fb6-9780-d5925ccb54c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782\"" Jan 15 23:51:36.372624 containerd[1898]: time="2026-01-15T23:51:36.372583314Z" level=info msg="CreateContainer within sandbox \"be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 23:51:36.423562 systemd-networkd[1477]: calif4a7351bb03: Link UP Jan 15 23:51:36.427906 systemd-networkd[1477]: calif4a7351bb03: Gained carrier Jan 15 23:51:36.448343 containerd[1898]: 2026-01-15 23:51:36.325 [INFO][5031] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--n--5fd64d3fe1-k8s-whisker--6cfc4fdc86--ln6hf-eth0 whisker-6cfc4fdc86- calico-system 15c0f7bb-159e-4e16-a598-2453bb733d6e 891 0 2026-01-15 23:51:31 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6cfc4fdc86 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.2-n-5fd64d3fe1 whisker-6cfc4fdc86-ln6hf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif4a7351bb03 [] [] }} ContainerID="db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe" Namespace="calico-system" Pod="whisker-6cfc4fdc86-ln6hf" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-whisker--6cfc4fdc86--ln6hf-" Jan 15 23:51:36.448343 containerd[1898]: 2026-01-15 23:51:36.325 [INFO][5031] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe" Namespace="calico-system" Pod="whisker-6cfc4fdc86-ln6hf" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-whisker--6cfc4fdc86--ln6hf-eth0" Jan 15 23:51:36.448343 containerd[1898]: 2026-01-15 23:51:36.353 [INFO][5044] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe" HandleID="k8s-pod-network.db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe" Workload="ci--4459.2.2--n--5fd64d3fe1-k8s-whisker--6cfc4fdc86--ln6hf-eth0" Jan 15 23:51:36.448343 containerd[1898]: 2026-01-15 23:51:36.354 [INFO][5044] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe" HandleID="k8s-pod-network.db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe" Workload="ci--4459.2.2--n--5fd64d3fe1-k8s-whisker--6cfc4fdc86--ln6hf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-n-5fd64d3fe1", "pod":"whisker-6cfc4fdc86-ln6hf", "timestamp":"2026-01-15 23:51:36.353969898 +0000 UTC"}, Hostname:"ci-4459.2.2-n-5fd64d3fe1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 23:51:36.448343 containerd[1898]: 2026-01-15 23:51:36.354 [INFO][5044] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 23:51:36.448343 containerd[1898]: 2026-01-15 23:51:36.354 [INFO][5044] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 23:51:36.448343 containerd[1898]: 2026-01-15 23:51:36.354 [INFO][5044] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-5fd64d3fe1' Jan 15 23:51:36.448343 containerd[1898]: 2026-01-15 23:51:36.361 [INFO][5044] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:36.448343 containerd[1898]: 2026-01-15 23:51:36.367 [INFO][5044] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:36.448343 containerd[1898]: 2026-01-15 23:51:36.373 [INFO][5044] ipam/ipam.go 511: Trying affinity for 192.168.109.64/26 host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:36.448343 containerd[1898]: 2026-01-15 23:51:36.379 [INFO][5044] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.64/26 host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:36.448343 containerd[1898]: 2026-01-15 23:51:36.387 [INFO][5044] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:36.448343 containerd[1898]: 2026-01-15 23:51:36.387 [INFO][5044] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:36.448343 containerd[1898]: 2026-01-15 23:51:36.392 [INFO][5044] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe Jan 15 23:51:36.448343 containerd[1898]: 2026-01-15 23:51:36.401 [INFO][5044] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:36.448343 containerd[1898]: 2026-01-15 23:51:36.414 [INFO][5044] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.109.68/26] block=192.168.109.64/26 handle="k8s-pod-network.db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:36.448343 containerd[1898]: 2026-01-15 23:51:36.414 [INFO][5044] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.68/26] handle="k8s-pod-network.db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:36.448343 containerd[1898]: 2026-01-15 23:51:36.414 [INFO][5044] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
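The IPAM records above trace the full assignment path: take the host-wide lock, confirm the node's affinity for block 192.168.109.64/26, claim the next free address (.68 here, with .67 already held by the coredns endpoint earlier in the log), write the block back, and release the lock. A toy linear-scan model of the "claim next free address" step, assuming the block behaves like a plain /26 host range (Calico's real allocator works on per-block bitmaps and handles, which this ignores):

```python
import ipaddress

def next_free(block: str, allocated: set[str]) -> str:
    """Linear-scan a block for the first unallocated host address.

    Toy stand-in for ipam.go's block assignment; it skips handles,
    reservations, and the datastore write that actually claims the IP.
    """
    for ip in ipaddress.ip_network(block).hosts():
        if str(ip) not in allocated:
            return str(ip)
    raise RuntimeError(f"block {block} exhausted")

# .67 is coredns-7kj8l above; .65/.66 stand in for earlier endpoints.
in_use = {"192.168.109.65", "192.168.109.66", "192.168.109.67"}
print(next_free("192.168.109.64/26", in_use))   # 192.168.109.68, as logged
in_use.add("192.168.109.68")
print(next_free("192.168.109.64/26", in_use))   # 192.168.109.69
```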
Jan 15 23:51:36.448343 containerd[1898]: 2026-01-15 23:51:36.415 [INFO][5044] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.109.68/26] IPv6=[] ContainerID="db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe" HandleID="k8s-pod-network.db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe" Workload="ci--4459.2.2--n--5fd64d3fe1-k8s-whisker--6cfc4fdc86--ln6hf-eth0" Jan 15 23:51:36.449215 containerd[1898]: 2026-01-15 23:51:36.418 [INFO][5031] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe" Namespace="calico-system" Pod="whisker-6cfc4fdc86-ln6hf" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-whisker--6cfc4fdc86--ln6hf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--5fd64d3fe1-k8s-whisker--6cfc4fdc86--ln6hf-eth0", GenerateName:"whisker-6cfc4fdc86-", Namespace:"calico-system", SelfLink:"", UID:"15c0f7bb-159e-4e16-a598-2453bb733d6e", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 51, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cfc4fdc86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-5fd64d3fe1", ContainerID:"", Pod:"whisker-6cfc4fdc86-ln6hf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.109.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif4a7351bb03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:51:36.449215 containerd[1898]: 2026-01-15 23:51:36.418 [INFO][5031] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.68/32] ContainerID="db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe" Namespace="calico-system" Pod="whisker-6cfc4fdc86-ln6hf" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-whisker--6cfc4fdc86--ln6hf-eth0" Jan 15 23:51:36.449215 containerd[1898]: 2026-01-15 23:51:36.418 [INFO][5031] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4a7351bb03 ContainerID="db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe" Namespace="calico-system" Pod="whisker-6cfc4fdc86-ln6hf" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-whisker--6cfc4fdc86--ln6hf-eth0" Jan 15 23:51:36.449215 containerd[1898]: 2026-01-15 23:51:36.424 [INFO][5031] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe" Namespace="calico-system" Pod="whisker-6cfc4fdc86-ln6hf" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-whisker--6cfc4fdc86--ln6hf-eth0" Jan 15 23:51:36.449215 containerd[1898]: 2026-01-15 23:51:36.425 [INFO][5031] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe" Namespace="calico-system" 
Pod="whisker-6cfc4fdc86-ln6hf" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-whisker--6cfc4fdc86--ln6hf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--5fd64d3fe1-k8s-whisker--6cfc4fdc86--ln6hf-eth0", GenerateName:"whisker-6cfc4fdc86-", Namespace:"calico-system", SelfLink:"", UID:"15c0f7bb-159e-4e16-a598-2453bb733d6e", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 51, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cfc4fdc86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-5fd64d3fe1", ContainerID:"db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe", Pod:"whisker-6cfc4fdc86-ln6hf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.109.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif4a7351bb03", MAC:"12:4c:f3:cc:8f:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:51:36.449215 containerd[1898]: 2026-01-15 23:51:36.440 [INFO][5031] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe" Namespace="calico-system" Pod="whisker-6cfc4fdc86-ln6hf" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-whisker--6cfc4fdc86--ln6hf-eth0" Jan 15 23:51:36.519162 containerd[1898]: time="2026-01-15T23:51:36.519115363Z" level=info msg="Container ecf4813cd23510b9a5d6d41cd559457cbad278b900af017e2043d197155c5c4a: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:51:36.523029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount778452282.mount: Deactivated successfully. 
Jan 15 23:51:36.532583 systemd-networkd[1477]: calib06d8ce0817: Link UP Jan 15 23:51:36.533980 systemd-networkd[1477]: calib06d8ce0817: Gained carrier Jan 15 23:51:36.555496 containerd[1898]: 2026-01-15 23:51:36.441 [INFO][5054] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--jl6lj-eth0 calico-apiserver-6f777f44fd- calico-apiserver 18982541-a4f1-43eb-9f61-620f19f486a0 801 0 2026-01-15 23:51:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f777f44fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-n-5fd64d3fe1 calico-apiserver-6f777f44fd-jl6lj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib06d8ce0817 [] [] }} ContainerID="49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0" Namespace="calico-apiserver" Pod="calico-apiserver-6f777f44fd-jl6lj" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--jl6lj-" Jan 15 23:51:36.555496 containerd[1898]: 2026-01-15 23:51:36.441 [INFO][5054] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0" Namespace="calico-apiserver" Pod="calico-apiserver-6f777f44fd-jl6lj" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--jl6lj-eth0" Jan 15 23:51:36.555496 containerd[1898]: 2026-01-15 23:51:36.475 [INFO][5074] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0" HandleID="k8s-pod-network.49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0" Workload="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--jl6lj-eth0" Jan 15 23:51:36.555496 containerd[1898]: 2026-01-15 23:51:36.475 [INFO][5074] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0" HandleID="k8s-pod-network.49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0" Workload="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--jl6lj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b050), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-n-5fd64d3fe1", "pod":"calico-apiserver-6f777f44fd-jl6lj", "timestamp":"2026-01-15 23:51:36.475436692 +0000 UTC"}, Hostname:"ci-4459.2.2-n-5fd64d3fe1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 23:51:36.555496 containerd[1898]: 2026-01-15 23:51:36.475 [INFO][5074] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 23:51:36.555496 containerd[1898]: 2026-01-15 23:51:36.475 [INFO][5074] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 23:51:36.555496 containerd[1898]: 2026-01-15 23:51:36.475 [INFO][5074] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-5fd64d3fe1' Jan 15 23:51:36.555496 containerd[1898]: 2026-01-15 23:51:36.483 [INFO][5074] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:36.555496 containerd[1898]: 2026-01-15 23:51:36.487 [INFO][5074] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:36.555496 containerd[1898]: 2026-01-15 23:51:36.493 [INFO][5074] ipam/ipam.go 511: Trying affinity for 192.168.109.64/26 host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:36.555496 containerd[1898]: 2026-01-15 23:51:36.496 [INFO][5074] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.64/26 host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:36.555496 containerd[1898]: 2026-01-15 23:51:36.498 [INFO][5074] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:36.555496 containerd[1898]: 2026-01-15 23:51:36.498 [INFO][5074] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:36.555496 containerd[1898]: 2026-01-15 23:51:36.501 [INFO][5074] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0 Jan 15 23:51:36.555496 containerd[1898]: 2026-01-15 23:51:36.510 [INFO][5074] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:36.555496 containerd[1898]: 2026-01-15 23:51:36.526 [INFO][5074] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.109.69/26] block=192.168.109.64/26 handle="k8s-pod-network.49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:36.555496 containerd[1898]: 2026-01-15 23:51:36.526 [INFO][5074] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.69/26] handle="k8s-pod-network.49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:36.555496 containerd[1898]: 2026-01-15 23:51:36.526 [INFO][5074] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 23:51:36.555496 containerd[1898]: 2026-01-15 23:51:36.526 [INFO][5074] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.109.69/26] IPv6=[] ContainerID="49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0" HandleID="k8s-pod-network.49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0" Workload="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--jl6lj-eth0" Jan 15 23:51:36.556126 containerd[1898]: 2026-01-15 23:51:36.529 [INFO][5054] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0" Namespace="calico-apiserver" Pod="calico-apiserver-6f777f44fd-jl6lj" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--jl6lj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--jl6lj-eth0", GenerateName:"calico-apiserver-6f777f44fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"18982541-a4f1-43eb-9f61-620f19f486a0", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 51, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f777f44fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-5fd64d3fe1", ContainerID:"", Pod:"calico-apiserver-6f777f44fd-jl6lj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib06d8ce0817", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:51:36.556126 containerd[1898]: 2026-01-15 23:51:36.529 [INFO][5054] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.69/32] ContainerID="49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0" Namespace="calico-apiserver" Pod="calico-apiserver-6f777f44fd-jl6lj" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--jl6lj-eth0" Jan 15 23:51:36.556126 containerd[1898]: 2026-01-15 23:51:36.529 [INFO][5054] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib06d8ce0817 ContainerID="49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0" Namespace="calico-apiserver" Pod="calico-apiserver-6f777f44fd-jl6lj" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--jl6lj-eth0" Jan 15 23:51:36.556126 containerd[1898]: 2026-01-15 23:51:36.533 [INFO][5054] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0" Namespace="calico-apiserver" Pod="calico-apiserver-6f777f44fd-jl6lj" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--jl6lj-eth0" Jan 15 23:51:36.556126 containerd[1898]: 2026-01-15 23:51:36.533 
[INFO][5054] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0" Namespace="calico-apiserver" Pod="calico-apiserver-6f777f44fd-jl6lj" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--jl6lj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--jl6lj-eth0", GenerateName:"calico-apiserver-6f777f44fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"18982541-a4f1-43eb-9f61-620f19f486a0", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 51, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f777f44fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-5fd64d3fe1", ContainerID:"49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0", Pod:"calico-apiserver-6f777f44fd-jl6lj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib06d8ce0817", MAC:"ca:9b:66:1b:c3:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:51:36.556126 containerd[1898]: 2026-01-15 23:51:36.551 [INFO][5054] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0" Namespace="calico-apiserver" Pod="calico-apiserver-6f777f44fd-jl6lj" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--apiserver--6f777f44fd--jl6lj-eth0" Jan 15 23:51:36.561481 containerd[1898]: time="2026-01-15T23:51:36.561430341Z" level=info msg="CreateContainer within sandbox \"be43e74a5dd9c09a1ab66c817591f92f8f873a8394876b9e6afd87fee26a2782\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ecf4813cd23510b9a5d6d41cd559457cbad278b900af017e2043d197155c5c4a\"" Jan 15 23:51:36.568598 containerd[1898]: time="2026-01-15T23:51:36.567790770Z" level=info msg="StartContainer for \"ecf4813cd23510b9a5d6d41cd559457cbad278b900af017e2043d197155c5c4a\"" Jan 15 23:51:36.569723 containerd[1898]: time="2026-01-15T23:51:36.569677398Z" level=info msg="connecting to shim ecf4813cd23510b9a5d6d41cd559457cbad278b900af017e2043d197155c5c4a" address="unix:///run/containerd/s/d4bcbf7b951982bd8e1a2abdf344afaf155198eb3f0649df3afaf1d9109f2213" protocol=ttrpc version=3 Jan 15 23:51:36.593960 containerd[1898]: time="2026-01-15T23:51:36.593877020Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:51:36.594689 systemd[1]: Started cri-containerd-ecf4813cd23510b9a5d6d41cd559457cbad278b900af017e2043d197155c5c4a.scope - libcontainer container ecf4813cd23510b9a5d6d41cd559457cbad278b900af017e2043d197155c5c4a. 
Jan 15 23:51:36.629323 containerd[1898]: time="2026-01-15T23:51:36.628971181Z" level=info msg="connecting to shim db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe" address="unix:///run/containerd/s/5fd1bd8dc54080621c02018d9c904667a50a70f08dbf45f95b025fc385244a24" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:51:36.664028 containerd[1898]: time="2026-01-15T23:51:36.663989699Z" level=info msg="StartContainer for \"ecf4813cd23510b9a5d6d41cd559457cbad278b900af017e2043d197155c5c4a\" returns successfully" Jan 15 23:51:36.665775 systemd[1]: Started cri-containerd-db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe.scope - libcontainer container db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe. Jan 15 23:51:36.669464 containerd[1898]: time="2026-01-15T23:51:36.669306391Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 23:51:36.669464 containerd[1898]: time="2026-01-15T23:51:36.669415235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 15 23:51:36.670009 kubelet[3396]: E0115 23:51:36.669966 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:51:36.672192 kubelet[3396]: E0115 23:51:36.670021 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:51:36.672192 kubelet[3396]: E0115 23:51:36.670231 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p282p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f777f44fd-2tvx8_calico-apiserver(9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 23:51:36.672192 kubelet[3396]: E0115 23:51:36.671597 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f777f44fd-2tvx8" podUID="9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e" Jan 15 23:51:36.672807 containerd[1898]: time="2026-01-15T23:51:36.670741608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 23:51:36.760667 containerd[1898]: time="2026-01-15T23:51:36.760623500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cfc4fdc86-ln6hf,Uid:15c0f7bb-159e-4e16-a598-2453bb733d6e,Namespace:calico-system,Attempt:0,} returns sandbox id \"db7439fddae8f08655c082cedd39f6dca5d8bbbe605f3559dc80b685ab9cf6fe\"" Jan 15 23:51:36.785227 containerd[1898]: time="2026-01-15T23:51:36.785172504Z" level=info msg="connecting to shim 49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0" address="unix:///run/containerd/s/9070d8ab6007feee9b1d8bdd0457ec4e1b5e20e3369419a75cee694c747c5721" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:51:36.800666 systemd[1]: Started cri-containerd-49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0.scope - libcontainer container 49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0. 
Jan 15 23:51:36.835506 containerd[1898]: time="2026-01-15T23:51:36.835451840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f777f44fd-jl6lj,Uid:18982541-a4f1-43eb-9f61-620f19f486a0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"49339cdb645f7594f69f5499760f4dff465add26751af46f1d73d10e192b60c0\"" Jan 15 23:51:37.020673 containerd[1898]: time="2026-01-15T23:51:37.020617992Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:51:37.024698 containerd[1898]: time="2026-01-15T23:51:37.024631280Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 23:51:37.024859 containerd[1898]: time="2026-01-15T23:51:37.024672633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 15 23:51:37.025145 kubelet[3396]: E0115 23:51:37.025010 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 23:51:37.025234 kubelet[3396]: E0115 23:51:37.025163 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 23:51:37.025587 kubelet[3396]: E0115 23:51:37.025544 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68t72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jjz87_calico-system(51d6de9e-9409-4d26-91e2-95ebd2fa7a0f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 23:51:37.026922 kubelet[3396]: E0115 23:51:37.026771 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jjz87" podUID="51d6de9e-9409-4d26-91e2-95ebd2fa7a0f" Jan 15 23:51:37.027600 containerd[1898]: time="2026-01-15T23:51:37.027562181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 23:51:37.259906 containerd[1898]: time="2026-01-15T23:51:37.259840398Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:51:37.263968 containerd[1898]: time="2026-01-15T23:51:37.263903424Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 23:51:37.264208 containerd[1898]: time="2026-01-15T23:51:37.263943193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 15 23:51:37.264405 kubelet[3396]: E0115 23:51:37.264371 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 23:51:37.264550 kubelet[3396]: E0115 23:51:37.264534 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 23:51:37.264805 kubelet[3396]: E0115 23:51:37.264756 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2b5df9145bd241b1b43c1fd247d9a698,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vwsj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cfc4fdc86-ln6hf_calico-system(15c0f7bb-159e-4e16-a598-2453bb733d6e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 23:51:37.265408 containerd[1898]: time="2026-01-15T23:51:37.264950410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 23:51:37.303801 kubelet[3396]: E0115 23:51:37.303409 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f777f44fd-2tvx8" podUID="9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e" Jan 15 23:51:37.306290 kubelet[3396]: E0115 23:51:37.306103 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jjz87" podUID="51d6de9e-9409-4d26-91e2-95ebd2fa7a0f" Jan 15 23:51:37.339642 kubelet[3396]: I0115 23:51:37.339577 3396 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7kj8l" podStartSLOduration=47.339558252 podStartE2EDuration="47.339558252s" podCreationTimestamp="2026-01-15 23:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:51:37.318380383 +0000 UTC m=+53.329809069" watchObservedRunningTime="2026-01-15 23:51:37.339558252 +0000 UTC m=+53.350986946" Jan 15 23:51:37.518910 containerd[1898]: time="2026-01-15T23:51:37.518853561Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:51:37.522270 containerd[1898]: time="2026-01-15T23:51:37.522180990Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 23:51:37.522270 containerd[1898]: time="2026-01-15T23:51:37.522232792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 15 23:51:37.522495 kubelet[3396]: E0115 23:51:37.522439 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:51:37.522560 kubelet[3396]: E0115 23:51:37.522517 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:51:37.522802 
kubelet[3396]: E0115 23:51:37.522770 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f777f44fd-jl6lj_calico-apiserver(18982541-a4f1-43eb-9f61-620f19f486a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 23:51:37.523362 containerd[1898]: time="2026-01-15T23:51:37.523330052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 23:51:37.524060 kubelet[3396]: E0115 23:51:37.524029 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f777f44fd-jl6lj" podUID="18982541-a4f1-43eb-9f61-620f19f486a0" Jan 15 23:51:37.779460 containerd[1898]: time="2026-01-15T23:51:37.779258557Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:51:37.783996 containerd[1898]: time="2026-01-15T23:51:37.783917551Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 23:51:37.784233 containerd[1898]: time="2026-01-15T23:51:37.783982561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 15 23:51:37.784756 kubelet[3396]: E0115 23:51:37.784435 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 23:51:37.784756 kubelet[3396]: E0115 23:51:37.784613 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 23:51:37.785323 kubelet[3396]: E0115 23:51:37.785147 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwsj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cfc4fdc86-ln6hf_calico-system(15c0f7bb-159e-4e16-a598-2453bb733d6e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 23:51:37.786593 kubelet[3396]: E0115 23:51:37.786563 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cfc4fdc86-ln6hf" podUID="15c0f7bb-159e-4e16-a598-2453bb733d6e" Jan 15 23:51:38.154640 systemd-networkd[1477]: calib06d8ce0817: Gained IPv6LL Jan 15 23:51:38.307712 kubelet[3396]: E0115 23:51:38.307671 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f777f44fd-jl6lj" podUID="18982541-a4f1-43eb-9f61-620f19f486a0" Jan 15 23:51:38.309016 kubelet[3396]: E0115 23:51:38.308578 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cfc4fdc86-ln6hf" podUID="15c0f7bb-159e-4e16-a598-2453bb733d6e" Jan 15 23:51:38.347026 systemd-networkd[1477]: calif4a7351bb03: Gained IPv6LL Jan 15 23:51:42.139515 containerd[1898]: time="2026-01-15T23:51:42.139066458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g4xs2,Uid:087bf8a4-3520-43d2-81c1-cb6f45055422,Namespace:kube-system,Attempt:0,}" Jan 15 23:51:42.294396 systemd-networkd[1477]: cali2ede1f0bd3a: Link UP Jan 15 23:51:42.296722 systemd-networkd[1477]: cali2ede1f0bd3a: Gained carrier Jan 15 23:51:42.334094 containerd[1898]: 2026-01-15 23:51:42.183 [INFO][5234] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--g4xs2-eth0 coredns-668d6bf9bc- kube-system 087bf8a4-3520-43d2-81c1-cb6f45055422 791 0 2026-01-15 23:50:50 +0000 UTC map[k8s-app:kube-dns 
pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.2-n-5fd64d3fe1 coredns-668d6bf9bc-g4xs2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2ede1f0bd3a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4xs2" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--g4xs2-" Jan 15 23:51:42.334094 containerd[1898]: 2026-01-15 23:51:42.184 [INFO][5234] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4xs2" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--g4xs2-eth0" Jan 15 23:51:42.334094 containerd[1898]: 2026-01-15 23:51:42.223 [INFO][5246] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7" HandleID="k8s-pod-network.18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7" Workload="ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--g4xs2-eth0" Jan 15 23:51:42.334094 containerd[1898]: 2026-01-15 23:51:42.224 [INFO][5246] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7" HandleID="k8s-pod-network.18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7" Workload="ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--g4xs2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.2-n-5fd64d3fe1", "pod":"coredns-668d6bf9bc-g4xs2", "timestamp":"2026-01-15 23:51:42.223993261 +0000 UTC"}, Hostname:"ci-4459.2.2-n-5fd64d3fe1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 23:51:42.334094 containerd[1898]: 2026-01-15 23:51:42.224 [INFO][5246] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 23:51:42.334094 containerd[1898]: 2026-01-15 23:51:42.224 [INFO][5246] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 23:51:42.334094 containerd[1898]: 2026-01-15 23:51:42.224 [INFO][5246] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-5fd64d3fe1' Jan 15 23:51:42.334094 containerd[1898]: 2026-01-15 23:51:42.232 [INFO][5246] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:42.334094 containerd[1898]: 2026-01-15 23:51:42.239 [INFO][5246] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:42.334094 containerd[1898]: 2026-01-15 23:51:42.253 [INFO][5246] ipam/ipam.go 511: Trying affinity for 192.168.109.64/26 host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:42.334094 containerd[1898]: 2026-01-15 23:51:42.255 [INFO][5246] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.64/26 host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:42.334094 containerd[1898]: 2026-01-15 23:51:42.261 [INFO][5246] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:42.334094 containerd[1898]: 2026-01-15 23:51:42.261 [INFO][5246] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:42.334094 containerd[1898]: 2026-01-15 23:51:42.265 [INFO][5246] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7 Jan 15 23:51:42.334094 containerd[1898]: 2026-01-15 23:51:42.271 [INFO][5246] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:42.334094 containerd[1898]: 2026-01-15 23:51:42.286 [INFO][5246] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.109.70/26] block=192.168.109.64/26 handle="k8s-pod-network.18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:42.334094 containerd[1898]: 2026-01-15 23:51:42.286 [INFO][5246] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.70/26] handle="k8s-pod-network.18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:42.334094 containerd[1898]: 2026-01-15 23:51:42.286 [INFO][5246] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
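The IPAM walk above confirms the host's affinity for block 192.168.109.64/26 and then claims 192.168.109.70 out of it for coredns-668d6bf9bc-g4xs2. A minimal stdlib-only Go sketch of the containment check being logged here — the block and address are taken from the entries above; everything else is illustrative, not Calico's actual code:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block with affinity to this host, per "Trying affinity for 192.168.109.64/26".
	block := netip.MustParsePrefix("192.168.109.64/26")
	// Address the plugin ultimately claimed for the coredns pod.
	addr := netip.MustParseAddr("192.168.109.70")

	// A /26 leaves 32-26 = 6 host bits, i.e. 64 addresses per block.
	fmt.Println("addresses per block:", 1<<(32-block.Bits())) // 64
	fmt.Println("in affine block:", block.Contains(addr))     // true
}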
Jan 15 23:51:42.334094 containerd[1898]: 2026-01-15 23:51:42.287 [INFO][5246] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.109.70/26] IPv6=[] ContainerID="18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7" HandleID="k8s-pod-network.18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7" Workload="ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--g4xs2-eth0" Jan 15 23:51:42.334727 containerd[1898]: 2026-01-15 23:51:42.290 [INFO][5234] cni-plugin/k8s.go 418: Populated endpoint ContainerID="18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4xs2" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--g4xs2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--g4xs2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"087bf8a4-3520-43d2-81c1-cb6f45055422", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 50, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-5fd64d3fe1", ContainerID:"", Pod:"coredns-668d6bf9bc-g4xs2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ede1f0bd3a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:51:42.334727 containerd[1898]: 2026-01-15 23:51:42.290 [INFO][5234] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.70/32] ContainerID="18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4xs2" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--g4xs2-eth0" Jan 15 23:51:42.334727 containerd[1898]: 2026-01-15 23:51:42.290 [INFO][5234] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ede1f0bd3a ContainerID="18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4xs2" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--g4xs2-eth0" Jan 15 23:51:42.334727 containerd[1898]: 2026-01-15 23:51:42.297 [INFO][5234] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-g4xs2" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--g4xs2-eth0" Jan 15 23:51:42.334727 containerd[1898]: 2026-01-15 23:51:42.300 [INFO][5234] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4xs2" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--g4xs2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--g4xs2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"087bf8a4-3520-43d2-81c1-cb6f45055422", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 50, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-5fd64d3fe1", ContainerID:"18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7", Pod:"coredns-668d6bf9bc-g4xs2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ede1f0bd3a", MAC:"fa:29:bf:bd:09:93", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:51:42.334727 containerd[1898]: 2026-01-15 23:51:42.329 [INFO][5234] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4xs2" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-coredns--668d6bf9bc--g4xs2-eth0" Jan 15 23:51:42.396002 containerd[1898]: time="2026-01-15T23:51:42.395470914Z" level=info msg="connecting to shim 18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7" address="unix:///run/containerd/s/aa8682575f57efeebc0ad9d2fc69d892d42cc0b68c329006aac3d502bf5c1e55" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:51:42.424722 systemd[1]: Started cri-containerd-18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7.scope - libcontainer container 18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7. 
Jan 15 23:51:42.475135 containerd[1898]: time="2026-01-15T23:51:42.475090457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g4xs2,Uid:087bf8a4-3520-43d2-81c1-cb6f45055422,Namespace:kube-system,Attempt:0,} returns sandbox id \"18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7\"" Jan 15 23:51:42.479525 containerd[1898]: time="2026-01-15T23:51:42.479442256Z" level=info msg="CreateContainer within sandbox \"18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 23:51:42.508302 containerd[1898]: time="2026-01-15T23:51:42.506901385Z" level=info msg="Container b987f4269a75ed2e98594f876c5c07f302b13fc9180d20fa5b8c885f2e48f2ed: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:51:42.526364 containerd[1898]: time="2026-01-15T23:51:42.526319516Z" level=info msg="CreateContainer within sandbox \"18eb5d5cc48ed30b15bcaf4f8db9d2eb32cb819d032b46a5e166ad331cde41b7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b987f4269a75ed2e98594f876c5c07f302b13fc9180d20fa5b8c885f2e48f2ed\"" Jan 15 23:51:42.527345 containerd[1898]: time="2026-01-15T23:51:42.527268432Z" level=info msg="StartContainer for \"b987f4269a75ed2e98594f876c5c07f302b13fc9180d20fa5b8c885f2e48f2ed\"" Jan 15 23:51:42.529381 containerd[1898]: time="2026-01-15T23:51:42.529349744Z" level=info msg="connecting to shim b987f4269a75ed2e98594f876c5c07f302b13fc9180d20fa5b8c885f2e48f2ed" address="unix:///run/containerd/s/aa8682575f57efeebc0ad9d2fc69d892d42cc0b68c329006aac3d502bf5c1e55" protocol=ttrpc version=3 Jan 15 23:51:42.550855 systemd[1]: Started cri-containerd-b987f4269a75ed2e98594f876c5c07f302b13fc9180d20fa5b8c885f2e48f2ed.scope - libcontainer container b987f4269a75ed2e98594f876c5c07f302b13fc9180d20fa5b8c885f2e48f2ed. 
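systemd has now started both cri-containerd scopes (the sandbox and the coredns container) in the k8s.io containerd namespace. For reference, a hedged sketch of inspecting such a container out-of-band with containerd's Go client — the socket path and the "k8s.io" namespace match the log, but the import paths assume a containerd 1.x module layout:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// CRI-managed containers live in the "k8s.io" namespace (matches namespace=k8s.io above).
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Container ID as reported by the "Started cri-containerd-…" entry above.
	c, err := client.LoadContainer(ctx, "b987f4269a75ed2e98594f876c5c07f302b13fc9180d20fa5b8c885f2e48f2ed")
	if err != nil {
		log.Fatal(err)
	}
	task, err := c.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	status, err := task.Status(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("task status:", status.Status)
}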
Jan 15 23:51:42.595715 containerd[1898]: time="2026-01-15T23:51:42.595670784Z" level=info msg="StartContainer for \"b987f4269a75ed2e98594f876c5c07f302b13fc9180d20fa5b8c885f2e48f2ed\" returns successfully" Jan 15 23:51:43.355166 kubelet[3396]: I0115 23:51:43.355094 3396 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-g4xs2" podStartSLOduration=53.355048885 podStartE2EDuration="53.355048885s" podCreationTimestamp="2026-01-15 23:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:51:43.336607184 +0000 UTC m=+59.348035870" watchObservedRunningTime="2026-01-15 23:51:43.355048885 +0000 UTC m=+59.366477579" Jan 15 23:51:43.786634 systemd-networkd[1477]: cali2ede1f0bd3a: Gained IPv6LL Jan 15 23:51:44.140575 containerd[1898]: time="2026-01-15T23:51:44.140100638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f78775f8f-kvtpl,Uid:34651363-b157-4818-abe1-8475b0e41983,Namespace:calico-system,Attempt:0,}" Jan 15 23:51:44.264667 systemd-networkd[1477]: cali577516e481a: Link UP Jan 15 23:51:44.265649 systemd-networkd[1477]: cali577516e481a: Gained carrier Jan 15 23:51:44.285446 containerd[1898]: 2026-01-15 23:51:44.190 [INFO][5355] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--n--5fd64d3fe1-k8s-calico--kube--controllers--f78775f8f--kvtpl-eth0 calico-kube-controllers-f78775f8f- calico-system 34651363-b157-4818-abe1-8475b0e41983 803 0 2026-01-15 23:51:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f78775f8f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.2-n-5fd64d3fe1 calico-kube-controllers-f78775f8f-kvtpl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali577516e481a [] [] }} ContainerID="f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9" Namespace="calico-system" Pod="calico-kube-controllers-f78775f8f-kvtpl" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--kube--controllers--f78775f8f--kvtpl-" Jan 15 23:51:44.285446 containerd[1898]: 2026-01-15 23:51:44.190 [INFO][5355] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9" Namespace="calico-system" Pod="calico-kube-controllers-f78775f8f-kvtpl" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--kube--controllers--f78775f8f--kvtpl-eth0" Jan 15 23:51:44.285446 containerd[1898]: 2026-01-15 23:51:44.221 [INFO][5367] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9" HandleID="k8s-pod-network.f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9" Workload="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--kube--controllers--f78775f8f--kvtpl-eth0" Jan 15 23:51:44.285446 containerd[1898]: 2026-01-15 23:51:44.221 [INFO][5367] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9" HandleID="k8s-pod-network.f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9" Workload="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--kube--controllers--f78775f8f--kvtpl-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3000), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-n-5fd64d3fe1", "pod":"calico-kube-controllers-f78775f8f-kvtpl", "timestamp":"2026-01-15 23:51:44.220994646 +0000 UTC"}, Hostname:"ci-4459.2.2-n-5fd64d3fe1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 23:51:44.285446 containerd[1898]: 2026-01-15 23:51:44.221 [INFO][5367] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 23:51:44.285446 containerd[1898]: 2026-01-15 23:51:44.221 [INFO][5367] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 23:51:44.285446 containerd[1898]: 2026-01-15 23:51:44.221 [INFO][5367] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-5fd64d3fe1' Jan 15 23:51:44.285446 containerd[1898]: 2026-01-15 23:51:44.227 [INFO][5367] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:44.285446 containerd[1898]: 2026-01-15 23:51:44.232 [INFO][5367] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:44.285446 containerd[1898]: 2026-01-15 23:51:44.237 [INFO][5367] ipam/ipam.go 511: Trying affinity for 192.168.109.64/26 host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:44.285446 containerd[1898]: 2026-01-15 23:51:44.239 [INFO][5367] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.64/26 host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:44.285446 containerd[1898]: 2026-01-15 23:51:44.241 [INFO][5367] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:44.285446 containerd[1898]: 2026-01-15 23:51:44.241 [INFO][5367] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:44.285446 containerd[1898]: 2026-01-15 23:51:44.242 [INFO][5367] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9 Jan 15 23:51:44.285446 containerd[1898]: 2026-01-15 23:51:44.249 [INFO][5367] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:44.285446 containerd[1898]: 2026-01-15 23:51:44.258 [INFO][5367] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.109.71/26] block=192.168.109.64/26 handle="k8s-pod-network.f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:44.285446 containerd[1898]: 2026-01-15 23:51:44.259 [INFO][5367] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.71/26] handle="k8s-pod-network.f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:44.285446 containerd[1898]: 2026-01-15 23:51:44.259 [INFO][5367] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
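The same affinity walk repeats for calico-kube-controllers and hands out the next ordinal in the block: .70 went to coredns above, .71 is claimed here, and .72 goes to goldmane further down. A stdlib-only sketch of that "first free address in the affine block" scan, with an in-memory allocation set standing in for the datastore's block document — purely illustrative, not Calico's real allocator:

package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks the block in address order and returns the first
// address not present in allocated. Real Calico IPAM reads the
// allocations from the block resource in the datastore instead.
func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.109.64/26")
	// Pretend .64 through .70 are already taken, as the earlier claims in this log suggest.
	allocated := map[netip.Addr]bool{}
	for a, i := block.Addr(), 0; i < 7; a, i = a.Next(), i+1 {
		allocated[a] = true
	}
	if a, ok := nextFree(block, allocated); ok {
		fmt.Println("next assignment:", a) // 192.168.109.71, matching the claim above
	}
}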
Jan 15 23:51:44.285446 containerd[1898]: 2026-01-15 23:51:44.259 [INFO][5367] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.109.71/26] IPv6=[] ContainerID="f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9" HandleID="k8s-pod-network.f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9" Workload="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--kube--controllers--f78775f8f--kvtpl-eth0" Jan 15 23:51:44.286205 containerd[1898]: 2026-01-15 23:51:44.261 [INFO][5355] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9" Namespace="calico-system" Pod="calico-kube-controllers-f78775f8f-kvtpl" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--kube--controllers--f78775f8f--kvtpl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--5fd64d3fe1-k8s-calico--kube--controllers--f78775f8f--kvtpl-eth0", GenerateName:"calico-kube-controllers-f78775f8f-", Namespace:"calico-system", SelfLink:"", UID:"34651363-b157-4818-abe1-8475b0e41983", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 51, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f78775f8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-5fd64d3fe1", ContainerID:"", Pod:"calico-kube-controllers-f78775f8f-kvtpl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali577516e481a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:51:44.286205 containerd[1898]: 2026-01-15 23:51:44.261 [INFO][5355] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.71/32] ContainerID="f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9" Namespace="calico-system" Pod="calico-kube-controllers-f78775f8f-kvtpl" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--kube--controllers--f78775f8f--kvtpl-eth0" Jan 15 23:51:44.286205 containerd[1898]: 2026-01-15 23:51:44.261 [INFO][5355] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali577516e481a ContainerID="f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9" Namespace="calico-system" Pod="calico-kube-controllers-f78775f8f-kvtpl" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--kube--controllers--f78775f8f--kvtpl-eth0" Jan 15 23:51:44.286205 containerd[1898]: 2026-01-15 23:51:44.266 [INFO][5355] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9" Namespace="calico-system" Pod="calico-kube-controllers-f78775f8f-kvtpl" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--kube--controllers--f78775f8f--kvtpl-eth0" Jan 
15 23:51:44.286205 containerd[1898]: 2026-01-15 23:51:44.266 [INFO][5355] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9" Namespace="calico-system" Pod="calico-kube-controllers-f78775f8f-kvtpl" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--kube--controllers--f78775f8f--kvtpl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--5fd64d3fe1-k8s-calico--kube--controllers--f78775f8f--kvtpl-eth0", GenerateName:"calico-kube-controllers-f78775f8f-", Namespace:"calico-system", SelfLink:"", UID:"34651363-b157-4818-abe1-8475b0e41983", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 51, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f78775f8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-5fd64d3fe1", ContainerID:"f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9", Pod:"calico-kube-controllers-f78775f8f-kvtpl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali577516e481a", MAC:"5e:e6:4d:8d:fe:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:51:44.286205 containerd[1898]: 2026-01-15 23:51:44.281 [INFO][5355] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9" Namespace="calico-system" Pod="calico-kube-controllers-f78775f8f-kvtpl" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-calico--kube--controllers--f78775f8f--kvtpl-eth0" Jan 15 23:51:44.346082 containerd[1898]: time="2026-01-15T23:51:44.345944053Z" level=info msg="connecting to shim f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9" address="unix:///run/containerd/s/a2102065907b6361f51942d9a7c74cc3bc5042fc68be65d5fbdabad7df0d01e2" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:51:44.368703 systemd[1]: Started cri-containerd-f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9.scope - libcontainer container f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9. 
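Every ghcr.io/flatcar/calico/*:v3.30.4 reference in this log resolves to a registry 404, and the entries that follow show the kube-controllers pull failing the same way. A hedged sketch of reproducing that resolution failure with containerd's Go client — same containerd 1.x module-path assumptions as the earlier sketch; errdefs is containerd's error-classification package, and a resolver 404 is expected (though not guaranteed in every version) to classify as not-found:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Tag the registry answers 404 for, per "fetch failed after status: 404 Not Found".
	ref := "ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
	if _, err := client.Pull(ctx, ref); errdefs.IsNotFound(err) {
		// This is the condition the CRI surfaces as "rpc error: code = NotFound".
		fmt.Println("not found:", ref)
	} else if err != nil {
		log.Fatal(err)
	}
}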
Jan 15 23:51:44.406151 containerd[1898]: time="2026-01-15T23:51:44.405959009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f78775f8f-kvtpl,Uid:34651363-b157-4818-abe1-8475b0e41983,Namespace:calico-system,Attempt:0,} returns sandbox id \"f28761cb6e9e6a29c9590f9628c69ef7564a6859a675a1e399ee70b7d3619fd9\"" Jan 15 23:51:44.407658 containerd[1898]: time="2026-01-15T23:51:44.407624706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 23:51:44.688732 containerd[1898]: time="2026-01-15T23:51:44.688594266Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:51:44.693182 containerd[1898]: time="2026-01-15T23:51:44.693127225Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 23:51:44.693287 containerd[1898]: time="2026-01-15T23:51:44.693240061Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 15 23:51:44.693770 kubelet[3396]: E0115 23:51:44.693458 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 23:51:44.693770 kubelet[3396]: E0115 23:51:44.693555 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 23:51:44.693770 kubelet[3396]: E0115 23:51:44.693668 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6fnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f78775f8f-kvtpl_calico-system(34651363-b157-4818-abe1-8475b0e41983): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 23:51:44.694924 kubelet[3396]: E0115 23:51:44.694865 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f78775f8f-kvtpl" podUID="34651363-b157-4818-abe1-8475b0e41983" Jan 15 23:51:45.139375 containerd[1898]: time="2026-01-15T23:51:45.139331838Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:goldmane-666569f655-frkkk,Uid:1e8e2e68-e4b7-4282-8047-eae44af7b067,Namespace:calico-system,Attempt:0,}" Jan 15 23:51:45.244576 systemd-networkd[1477]: cali03716128475: Link UP Jan 15 23:51:45.245450 systemd-networkd[1477]: cali03716128475: Gained carrier Jan 15 23:51:45.263242 containerd[1898]: 2026-01-15 23:51:45.175 [INFO][5426] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--n--5fd64d3fe1-k8s-goldmane--666569f655--frkkk-eth0 goldmane-666569f655- calico-system 1e8e2e68-e4b7-4282-8047-eae44af7b067 800 0 2026-01-15 23:51:04 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.2-n-5fd64d3fe1 goldmane-666569f655-frkkk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali03716128475 [] [] }} ContainerID="891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e" Namespace="calico-system" Pod="goldmane-666569f655-frkkk" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-goldmane--666569f655--frkkk-" Jan 15 23:51:45.263242 containerd[1898]: 2026-01-15 23:51:45.175 [INFO][5426] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e" Namespace="calico-system" Pod="goldmane-666569f655-frkkk" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-goldmane--666569f655--frkkk-eth0" Jan 15 23:51:45.263242 containerd[1898]: 2026-01-15 23:51:45.200 [INFO][5438] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e" HandleID="k8s-pod-network.891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e" Workload="ci--4459.2.2--n--5fd64d3fe1-k8s-goldmane--666569f655--frkkk-eth0" Jan 15 23:51:45.263242 containerd[1898]: 2026-01-15 23:51:45.200 [INFO][5438] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e" HandleID="k8s-pod-network.891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e" Workload="ci--4459.2.2--n--5fd64d3fe1-k8s-goldmane--666569f655--frkkk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-n-5fd64d3fe1", "pod":"goldmane-666569f655-frkkk", "timestamp":"2026-01-15 23:51:45.200439445 +0000 UTC"}, Hostname:"ci-4459.2.2-n-5fd64d3fe1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 23:51:45.263242 containerd[1898]: 2026-01-15 23:51:45.200 [INFO][5438] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 23:51:45.263242 containerd[1898]: 2026-01-15 23:51:45.200 [INFO][5438] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 23:51:45.263242 containerd[1898]: 2026-01-15 23:51:45.200 [INFO][5438] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-5fd64d3fe1' Jan 15 23:51:45.263242 containerd[1898]: 2026-01-15 23:51:45.208 [INFO][5438] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:45.263242 containerd[1898]: 2026-01-15 23:51:45.213 [INFO][5438] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:45.263242 containerd[1898]: 2026-01-15 23:51:45.219 [INFO][5438] ipam/ipam.go 511: Trying affinity for 192.168.109.64/26 host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:45.263242 containerd[1898]: 2026-01-15 23:51:45.221 [INFO][5438] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.64/26 host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:45.263242 containerd[1898]: 2026-01-15 23:51:45.223 [INFO][5438] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.64/26 host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:45.263242 containerd[1898]: 2026-01-15 23:51:45.223 [INFO][5438] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.109.64/26 handle="k8s-pod-network.891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:45.263242 containerd[1898]: 2026-01-15 23:51:45.224 [INFO][5438] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e Jan 15 23:51:45.263242 containerd[1898]: 2026-01-15 23:51:45.229 [INFO][5438] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.109.64/26 handle="k8s-pod-network.891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:45.263242 containerd[1898]: 2026-01-15 23:51:45.238 [INFO][5438] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.109.72/26] block=192.168.109.64/26 handle="k8s-pod-network.891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:45.263242 containerd[1898]: 2026-01-15 23:51:45.238 [INFO][5438] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.72/26] handle="k8s-pod-network.891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e" host="ci-4459.2.2-n-5fd64d3fe1" Jan 15 23:51:45.263242 containerd[1898]: 2026-01-15 23:51:45.238 [INFO][5438] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 23:51:45.263242 containerd[1898]: 2026-01-15 23:51:45.239 [INFO][5438] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.109.72/26] IPv6=[] ContainerID="891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e" HandleID="k8s-pod-network.891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e" Workload="ci--4459.2.2--n--5fd64d3fe1-k8s-goldmane--666569f655--frkkk-eth0" Jan 15 23:51:45.264633 containerd[1898]: 2026-01-15 23:51:45.241 [INFO][5426] cni-plugin/k8s.go 418: Populated endpoint ContainerID="891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e" Namespace="calico-system" Pod="goldmane-666569f655-frkkk" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-goldmane--666569f655--frkkk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--5fd64d3fe1-k8s-goldmane--666569f655--frkkk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1e8e2e68-e4b7-4282-8047-eae44af7b067", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 51, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-5fd64d3fe1", ContainerID:"", Pod:"goldmane-666569f655-frkkk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali03716128475", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:51:45.264633 containerd[1898]: 2026-01-15 23:51:45.241 [INFO][5426] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.72/32] ContainerID="891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e" Namespace="calico-system" Pod="goldmane-666569f655-frkkk" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-goldmane--666569f655--frkkk-eth0" Jan 15 23:51:45.264633 containerd[1898]: 2026-01-15 23:51:45.241 [INFO][5426] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03716128475 ContainerID="891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e" Namespace="calico-system" Pod="goldmane-666569f655-frkkk" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-goldmane--666569f655--frkkk-eth0" Jan 15 23:51:45.264633 containerd[1898]: 2026-01-15 23:51:45.245 [INFO][5426] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e" Namespace="calico-system" Pod="goldmane-666569f655-frkkk" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-goldmane--666569f655--frkkk-eth0" Jan 15 23:51:45.264633 containerd[1898]: 2026-01-15 23:51:45.246 [INFO][5426] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e" 
Namespace="calico-system" Pod="goldmane-666569f655-frkkk" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-goldmane--666569f655--frkkk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--5fd64d3fe1-k8s-goldmane--666569f655--frkkk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1e8e2e68-e4b7-4282-8047-eae44af7b067", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 51, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-5fd64d3fe1", ContainerID:"891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e", Pod:"goldmane-666569f655-frkkk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali03716128475", MAC:"62:36:24:94:6f:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:51:45.264633 containerd[1898]: 2026-01-15 23:51:45.260 [INFO][5426] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e" Namespace="calico-system" Pod="goldmane-666569f655-frkkk" WorkloadEndpoint="ci--4459.2.2--n--5fd64d3fe1-k8s-goldmane--666569f655--frkkk-eth0" Jan 15 23:51:45.320476 containerd[1898]: time="2026-01-15T23:51:45.319538019Z" level=info msg="connecting to shim 891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e" address="unix:///run/containerd/s/33cf3214887284882158620ed34d3ea5ba81ac7dc7442fdb275a9059a16e0518" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:51:45.329877 kubelet[3396]: E0115 23:51:45.329803 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f78775f8f-kvtpl" podUID="34651363-b157-4818-abe1-8475b0e41983" Jan 15 23:51:45.351783 systemd[1]: Started cri-containerd-891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e.scope - libcontainer container 891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e. 
Jan 15 23:51:45.390335 containerd[1898]: time="2026-01-15T23:51:45.390195457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-frkkk,Uid:1e8e2e68-e4b7-4282-8047-eae44af7b067,Namespace:calico-system,Attempt:0,} returns sandbox id \"891cc47dcd294934f315ac87ee7a7eebfffea6c817189420f0c8c049c72e074e\"" Jan 15 23:51:45.393138 containerd[1898]: time="2026-01-15T23:51:45.392881984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 23:51:45.578616 systemd-networkd[1477]: cali577516e481a: Gained IPv6LL Jan 15 23:51:45.638408 containerd[1898]: time="2026-01-15T23:51:45.638333091Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:51:45.642310 containerd[1898]: time="2026-01-15T23:51:45.642158318Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 23:51:45.642310 containerd[1898]: time="2026-01-15T23:51:45.642160670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 15 23:51:45.643051 kubelet[3396]: E0115 23:51:45.642567 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 23:51:45.643051 kubelet[3396]: E0115 23:51:45.642858 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 23:51:45.643051 kubelet[3396]: E0115 23:51:45.643001 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmh86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-frkkk_calico-system(1e8e2e68-e4b7-4282-8047-eae44af7b067): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 23:51:45.644510 kubelet[3396]: E0115 23:51:45.644270 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-frkkk" podUID="1e8e2e68-e4b7-4282-8047-eae44af7b067" Jan 15 23:51:46.329910 kubelet[3396]: E0115 
23:51:46.329703 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-frkkk" podUID="1e8e2e68-e4b7-4282-8047-eae44af7b067" Jan 15 23:51:46.329910 kubelet[3396]: E0115 23:51:46.329758 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f78775f8f-kvtpl" podUID="34651363-b157-4818-abe1-8475b0e41983" Jan 15 23:51:46.922658 systemd-networkd[1477]: cali03716128475: Gained IPv6LL Jan 15 23:51:47.332318 kubelet[3396]: E0115 23:51:47.332275 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-frkkk" podUID="1e8e2e68-e4b7-4282-8047-eae44af7b067" Jan 15 23:51:49.140165 containerd[1898]: time="2026-01-15T23:51:49.140054811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 23:51:49.394535 containerd[1898]: time="2026-01-15T23:51:49.394384762Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:51:49.398659 containerd[1898]: time="2026-01-15T23:51:49.398597245Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 23:51:49.398759 containerd[1898]: time="2026-01-15T23:51:49.398640302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 15 23:51:49.398968 kubelet[3396]: E0115 23:51:49.398919 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:51:49.399442 kubelet[3396]: E0115 23:51:49.398975 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:51:49.399678 kubelet[3396]: E0115 23:51:49.399176 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p282p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f777f44fd-2tvx8_calico-apiserver(9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 23:51:49.399885 containerd[1898]: time="2026-01-15T23:51:49.399843157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 23:51:49.400862 kubelet[3396]: E0115 23:51:49.400813 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f777f44fd-2tvx8" podUID="9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e" Jan 15 23:51:49.635030 containerd[1898]: time="2026-01-15T23:51:49.634955140Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:51:49.639245 containerd[1898]: time="2026-01-15T23:51:49.639070732Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 23:51:49.639245 containerd[1898]: time="2026-01-15T23:51:49.639126094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 15 23:51:49.639605 kubelet[3396]: E0115 23:51:49.639558 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 23:51:49.639717 kubelet[3396]: E0115 23:51:49.639702 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 23:51:49.639904 kubelet[3396]: E0115 23:51:49.639876 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68t72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jjz87_calico-system(51d6de9e-9409-4d26-91e2-95ebd2fa7a0f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 
23:51:49.642977 containerd[1898]: time="2026-01-15T23:51:49.642943715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 23:51:49.924854 containerd[1898]: time="2026-01-15T23:51:49.924802002Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:51:49.928758 containerd[1898]: time="2026-01-15T23:51:49.928675529Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 23:51:49.928758 containerd[1898]: time="2026-01-15T23:51:49.928723675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 15 23:51:49.929091 kubelet[3396]: E0115 23:51:49.929053 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 23:51:49.929279 kubelet[3396]: E0115 23:51:49.929138 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 23:51:49.929379 kubelet[3396]: E0115 23:51:49.929347 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68t72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jjz87_calico-system(51d6de9e-9409-4d26-91e2-95ebd2fa7a0f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 23:51:49.930821 kubelet[3396]: E0115 23:51:49.930777 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jjz87" podUID="51d6de9e-9409-4d26-91e2-95ebd2fa7a0f" Jan 15 23:51:50.144970 containerd[1898]: time="2026-01-15T23:51:50.144024522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 23:51:50.377906 containerd[1898]: time="2026-01-15T23:51:50.377854805Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:51:50.388913 containerd[1898]: time="2026-01-15T23:51:50.388833538Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 23:51:50.388913 containerd[1898]: time="2026-01-15T23:51:50.388883436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 15 23:51:50.389229 kubelet[3396]: E0115 23:51:50.389174 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:51:50.389288 kubelet[3396]: E0115 23:51:50.389240 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:51:50.389401 kubelet[3396]: E0115 23:51:50.389355 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f777f44fd-jl6lj_calico-apiserver(18982541-a4f1-43eb-9f61-620f19f486a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 23:51:50.390792 kubelet[3396]: E0115 23:51:50.390744 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f777f44fd-jl6lj" podUID="18982541-a4f1-43eb-9f61-620f19f486a0" Jan 15 23:51:51.139603 containerd[1898]: time="2026-01-15T23:51:51.139553829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 23:51:51.382827 containerd[1898]: time="2026-01-15T23:51:51.382648094Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:51:51.386650 containerd[1898]: time="2026-01-15T23:51:51.386597151Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 23:51:51.386927 containerd[1898]: time="2026-01-15T23:51:51.386633977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 15 23:51:51.386997 kubelet[3396]: E0115 23:51:51.386933 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 23:51:51.386997 kubelet[3396]: E0115 23:51:51.386993 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 23:51:51.387367 kubelet[3396]: E0115 23:51:51.387102 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2b5df9145bd241b1b43c1fd247d9a698,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vwsj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cfc4fdc86-ln6hf_calico-system(15c0f7bb-159e-4e16-a598-2453bb733d6e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 23:51:51.391031 containerd[1898]: time="2026-01-15T23:51:51.390857197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 23:51:51.627976 containerd[1898]: time="2026-01-15T23:51:51.627788842Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:51:51.631953 containerd[1898]: time="2026-01-15T23:51:51.631841480Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 23:51:51.631953 containerd[1898]: time="2026-01-15T23:51:51.631883833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 15 23:51:51.632102 kubelet[3396]: E0115 23:51:51.632055 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 23:51:51.632135 kubelet[3396]: E0115 23:51:51.632104 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 23:51:51.632261 kubelet[3396]: E0115 23:51:51.632201 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwsj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cfc4fdc86-ln6hf_calico-system(15c0f7bb-159e-4e16-a598-2453bb733d6e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 23:51:51.633516 kubelet[3396]: E0115 23:51:51.633353 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cfc4fdc86-ln6hf" podUID="15c0f7bb-159e-4e16-a598-2453bb733d6e" Jan 15 23:51:58.141918 containerd[1898]: time="2026-01-15T23:51:58.141337377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 23:51:58.398996 containerd[1898]: time="2026-01-15T23:51:58.398852530Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 
23:51:58.402433 containerd[1898]: time="2026-01-15T23:51:58.402374133Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 23:51:58.402620 containerd[1898]: time="2026-01-15T23:51:58.402392413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 15 23:51:58.403548 kubelet[3396]: E0115 23:51:58.402759 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 23:51:58.403548 kubelet[3396]: E0115 23:51:58.402813 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 23:51:58.403548 kubelet[3396]: E0115 23:51:58.402922 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmh86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-frkkk_calico-system(1e8e2e68-e4b7-4282-8047-eae44af7b067): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 23:51:58.404499 kubelet[3396]: E0115 23:51:58.404371 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-frkkk" podUID="1e8e2e68-e4b7-4282-8047-eae44af7b067" Jan 15 23:52:00.141837 containerd[1898]: time="2026-01-15T23:52:00.141791903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 23:52:00.143885 kubelet[3396]: E0115 23:52:00.143759 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f777f44fd-2tvx8" podUID="9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e" Jan 15 23:52:00.143885 kubelet[3396]: E0115 23:52:00.143839 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jjz87" 
podUID="51d6de9e-9409-4d26-91e2-95ebd2fa7a0f" Jan 15 23:52:00.377918 containerd[1898]: time="2026-01-15T23:52:00.377684170Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:52:00.381885 containerd[1898]: time="2026-01-15T23:52:00.381752031Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 23:52:00.381885 containerd[1898]: time="2026-01-15T23:52:00.381760383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 15 23:52:00.382242 kubelet[3396]: E0115 23:52:00.382182 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 23:52:00.382427 kubelet[3396]: E0115 23:52:00.382242 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 23:52:00.382427 kubelet[3396]: E0115 23:52:00.382363 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6fnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f78775f8f-kvtpl_calico-system(34651363-b157-4818-abe1-8475b0e41983): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 23:52:00.383700 kubelet[3396]: E0115 23:52:00.383648 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f78775f8f-kvtpl" podUID="34651363-b157-4818-abe1-8475b0e41983" Jan 15 23:52:02.142215 kubelet[3396]: E0115 23:52:02.141656 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f777f44fd-jl6lj" podUID="18982541-a4f1-43eb-9f61-620f19f486a0" Jan 15 23:52:05.140664 kubelet[3396]: E0115 23:52:05.140516 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cfc4fdc86-ln6hf" podUID="15c0f7bb-159e-4e16-a598-2453bb733d6e" Jan 15 23:52:11.140029 kubelet[3396]: E0115 23:52:11.139951 
3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f78775f8f-kvtpl" podUID="34651363-b157-4818-abe1-8475b0e41983" Jan 15 23:52:11.141059 containerd[1898]: time="2026-01-15T23:52:11.141022940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 23:52:11.394929 containerd[1898]: time="2026-01-15T23:52:11.394790463Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:52:11.403063 containerd[1898]: time="2026-01-15T23:52:11.402966324Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 23:52:11.403063 containerd[1898]: time="2026-01-15T23:52:11.403023150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 15 23:52:11.403512 kubelet[3396]: E0115 23:52:11.403457 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 23:52:11.403570 kubelet[3396]: E0115 23:52:11.403527 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 23:52:11.403993 kubelet[3396]: E0115 23:52:11.403707 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68t72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jjz87_calico-system(51d6de9e-9409-4d26-91e2-95ebd2fa7a0f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 23:52:11.407149 containerd[1898]: time="2026-01-15T23:52:11.406907169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 23:52:11.709631 containerd[1898]: time="2026-01-15T23:52:11.709271617Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:52:11.713713 containerd[1898]: time="2026-01-15T23:52:11.713644142Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 23:52:11.713865 containerd[1898]: time="2026-01-15T23:52:11.713691647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 15 23:52:11.714094 kubelet[3396]: E0115 23:52:11.714007 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 23:52:11.714094 kubelet[3396]: E0115 23:52:11.714061 3396 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 23:52:11.714502 kubelet[3396]: E0115 23:52:11.714312 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68t72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jjz87_calico-system(51d6de9e-9409-4d26-91e2-95ebd2fa7a0f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 23:52:11.715726 kubelet[3396]: E0115 23:52:11.715673 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-jjz87" podUID="51d6de9e-9409-4d26-91e2-95ebd2fa7a0f" Jan 15 23:52:13.139726 kubelet[3396]: E0115 23:52:13.139433 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-frkkk" podUID="1e8e2e68-e4b7-4282-8047-eae44af7b067" Jan 15 23:52:13.140580 containerd[1898]: time="2026-01-15T23:52:13.140358693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 23:52:13.390458 containerd[1898]: time="2026-01-15T23:52:13.390304739Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:52:13.393890 containerd[1898]: time="2026-01-15T23:52:13.393844989Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 23:52:13.394005 containerd[1898]: time="2026-01-15T23:52:13.393936961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 15 23:52:13.394312 kubelet[3396]: E0115 23:52:13.394091 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:52:13.394312 kubelet[3396]: E0115 23:52:13.394160 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:52:13.394583 containerd[1898]: time="2026-01-15T23:52:13.394554049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 23:52:13.394795 kubelet[3396]: E0115 23:52:13.394378 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f777f44fd-jl6lj_calico-apiserver(18982541-a4f1-43eb-9f61-620f19f486a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 23:52:13.395981 kubelet[3396]: E0115 23:52:13.395950 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f777f44fd-jl6lj" podUID="18982541-a4f1-43eb-9f61-620f19f486a0" Jan 15 23:52:13.632255 containerd[1898]: time="2026-01-15T23:52:13.632188316Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:52:13.635701 containerd[1898]: time="2026-01-15T23:52:13.635636707Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 23:52:13.635814 containerd[1898]: time="2026-01-15T23:52:13.635744847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 15 23:52:13.636400 
kubelet[3396]: E0115 23:52:13.635928 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:52:13.636495 kubelet[3396]: E0115 23:52:13.636405 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:52:13.636937 kubelet[3396]: E0115 23:52:13.636886 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p282p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f777f44fd-2tvx8_calico-apiserver(9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 23:52:13.638093 kubelet[3396]: E0115 23:52:13.638025 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f777f44fd-2tvx8" podUID="9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e" Jan 15 23:52:20.140119 containerd[1898]: time="2026-01-15T23:52:20.139789740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 23:52:20.399985 containerd[1898]: time="2026-01-15T23:52:20.399758912Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:52:20.674300 containerd[1898]: time="2026-01-15T23:52:20.674001449Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 23:52:20.674300 containerd[1898]: time="2026-01-15T23:52:20.674050739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 15 23:52:20.674788 kubelet[3396]: E0115 23:52:20.674252 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 23:52:20.674788 kubelet[3396]: E0115 23:52:20.674304 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 23:52:20.674788 kubelet[3396]: E0115 23:52:20.674401 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2b5df9145bd241b1b43c1fd247d9a698,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vwsj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cfc4fdc86-ln6hf_calico-system(15c0f7bb-159e-4e16-a598-2453bb733d6e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 23:52:20.676381 containerd[1898]: time="2026-01-15T23:52:20.676328751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 23:52:20.946654 containerd[1898]: time="2026-01-15T23:52:20.946533107Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:52:20.950504 containerd[1898]: time="2026-01-15T23:52:20.950405818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 23:52:20.950504 containerd[1898]: time="2026-01-15T23:52:20.950454836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 15 23:52:20.950796 kubelet[3396]: E0115 23:52:20.950760 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 23:52:20.950863 kubelet[3396]: E0115 23:52:20.950807 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 23:52:20.950929 kubelet[3396]: E0115 23:52:20.950901 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwsj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cfc4fdc86-ln6hf_calico-system(15c0f7bb-159e-4e16-a598-2453bb733d6e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 23:52:20.952760 kubelet[3396]: E0115 23:52:20.952720 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cfc4fdc86-ln6hf" podUID="15c0f7bb-159e-4e16-a598-2453bb733d6e" Jan 15 23:52:24.143510 containerd[1898]: time="2026-01-15T23:52:24.142885540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 23:52:24.145707 kubelet[3396]: E0115 23:52:24.145145 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jjz87" podUID="51d6de9e-9409-4d26-91e2-95ebd2fa7a0f" Jan 15 23:52:24.410240 containerd[1898]: time="2026-01-15T23:52:24.409797595Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:52:24.413773 containerd[1898]: time="2026-01-15T23:52:24.413637933Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 23:52:24.413773 containerd[1898]: time="2026-01-15T23:52:24.413715656Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 15 23:52:24.414092 kubelet[3396]: E0115 23:52:24.413989 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 23:52:24.414329 kubelet[3396]: E0115 23:52:24.414170 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 23:52:24.414329 kubelet[3396]: E0115 23:52:24.414291 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6fnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f78775f8f-kvtpl_calico-system(34651363-b157-4818-abe1-8475b0e41983): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 23:52:24.415605 kubelet[3396]: E0115 23:52:24.415566 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f78775f8f-kvtpl" podUID="34651363-b157-4818-abe1-8475b0e41983" Jan 15 23:52:25.139237 kubelet[3396]: E0115 23:52:25.138984 3396 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f777f44fd-2tvx8" podUID="9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e" Jan 15 23:52:26.140848 containerd[1898]: time="2026-01-15T23:52:26.139687505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 23:52:26.142100 kubelet[3396]: E0115 23:52:26.142063 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f777f44fd-jl6lj" podUID="18982541-a4f1-43eb-9f61-620f19f486a0" Jan 15 23:52:26.399033 containerd[1898]: time="2026-01-15T23:52:26.398650025Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:52:26.403056 containerd[1898]: time="2026-01-15T23:52:26.402939685Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 23:52:26.403056 containerd[1898]: time="2026-01-15T23:52:26.402988711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 15 23:52:26.403197 kubelet[3396]: E0115 23:52:26.403136 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 23:52:26.403197 kubelet[3396]: E0115 23:52:26.403178 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 23:52:26.403321 kubelet[3396]: E0115 23:52:26.403280 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmh86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-frkkk_calico-system(1e8e2e68-e4b7-4282-8047-eae44af7b067): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 23:52:26.404500 kubelet[3396]: E0115 23:52:26.404428 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-frkkk" podUID="1e8e2e68-e4b7-4282-8047-eae44af7b067" Jan 15 23:52:28.996588 systemd[1]: Started 
sshd@7-10.200.20.29:22-10.200.16.10:42430.service - OpenSSH per-connection server daemon (10.200.16.10:42430). Jan 15 23:52:29.523011 sshd[5567]: Accepted publickey for core from 10.200.16.10 port 42430 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:52:29.525332 sshd-session[5567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:52:29.530856 systemd-logind[1872]: New session 10 of user core. Jan 15 23:52:29.536009 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 15 23:52:29.958815 sshd[5570]: Connection closed by 10.200.16.10 port 42430 Jan 15 23:52:29.959097 sshd-session[5567]: pam_unix(sshd:session): session closed for user core Jan 15 23:52:29.962817 systemd[1]: sshd@7-10.200.20.29:22-10.200.16.10:42430.service: Deactivated successfully. Jan 15 23:52:29.966033 systemd[1]: session-10.scope: Deactivated successfully. Jan 15 23:52:29.966837 systemd-logind[1872]: Session 10 logged out. Waiting for processes to exit. Jan 15 23:52:29.968213 systemd-logind[1872]: Removed session 10. Jan 15 23:52:35.024947 systemd[1]: Started sshd@8-10.200.20.29:22-10.200.16.10:35374.service - OpenSSH per-connection server daemon (10.200.16.10:35374). Jan 15 23:52:35.140956 kubelet[3396]: E0115 23:52:35.140914 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f78775f8f-kvtpl" podUID="34651363-b157-4818-abe1-8475b0e41983" Jan 15 23:52:35.142739 kubelet[3396]: E0115 23:52:35.142579 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jjz87" podUID="51d6de9e-9409-4d26-91e2-95ebd2fa7a0f" Jan 15 23:52:35.144670 kubelet[3396]: E0115 23:52:35.144231 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cfc4fdc86-ln6hf" podUID="15c0f7bb-159e-4e16-a598-2453bb733d6e" Jan 15 23:52:35.461585 sshd[5607]: Accepted publickey for core from 10.200.16.10 port 35374 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:52:35.462462 sshd-session[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:52:35.467327 systemd-logind[1872]: New session 11 of user core. Jan 15 23:52:35.472663 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 15 23:52:35.856519 sshd[5610]: Connection closed by 10.200.16.10 port 35374 Jan 15 23:52:35.857302 sshd-session[5607]: pam_unix(sshd:session): session closed for user core Jan 15 23:52:35.863598 systemd[1]: sshd@8-10.200.20.29:22-10.200.16.10:35374.service: Deactivated successfully. Jan 15 23:52:35.867273 systemd[1]: session-11.scope: Deactivated successfully. Jan 15 23:52:35.870949 systemd-logind[1872]: Session 11 logged out. Waiting for processes to exit. Jan 15 23:52:35.874815 systemd-logind[1872]: Removed session 11. Jan 15 23:52:37.140353 kubelet[3396]: E0115 23:52:37.140261 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-frkkk" podUID="1e8e2e68-e4b7-4282-8047-eae44af7b067" Jan 15 23:52:37.140353 kubelet[3396]: E0115 23:52:37.140315 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f777f44fd-2tvx8" podUID="9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e" Jan 15 23:52:40.140114 kubelet[3396]: E0115 23:52:40.139894 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f777f44fd-jl6lj" podUID="18982541-a4f1-43eb-9f61-620f19f486a0" Jan 15 23:52:40.942044 systemd[1]: Started sshd@9-10.200.20.29:22-10.200.16.10:42060.service - OpenSSH per-connection server daemon (10.200.16.10:42060). 
Jan 15 23:52:41.372827 sshd[5625]: Accepted publickey for core from 10.200.16.10 port 42060 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:52:41.374041 sshd-session[5625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:52:41.379512 systemd-logind[1872]: New session 12 of user core. Jan 15 23:52:41.386726 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 15 23:52:41.762230 sshd[5628]: Connection closed by 10.200.16.10 port 42060 Jan 15 23:52:41.761603 sshd-session[5625]: pam_unix(sshd:session): session closed for user core Jan 15 23:52:41.765775 systemd[1]: sshd@9-10.200.20.29:22-10.200.16.10:42060.service: Deactivated successfully. Jan 15 23:52:41.769366 systemd[1]: session-12.scope: Deactivated successfully. Jan 15 23:52:41.772099 systemd-logind[1872]: Session 12 logged out. Waiting for processes to exit. Jan 15 23:52:41.774618 systemd-logind[1872]: Removed session 12. Jan 15 23:52:41.855264 systemd[1]: Started sshd@10-10.200.20.29:22-10.200.16.10:42064.service - OpenSSH per-connection server daemon (10.200.16.10:42064). Jan 15 23:52:42.347933 sshd[5640]: Accepted publickey for core from 10.200.16.10 port 42064 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:52:42.349551 sshd-session[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:52:42.353858 systemd-logind[1872]: New session 13 of user core. Jan 15 23:52:42.360788 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 15 23:52:42.783763 sshd[5643]: Connection closed by 10.200.16.10 port 42064 Jan 15 23:52:42.784563 sshd-session[5640]: pam_unix(sshd:session): session closed for user core Jan 15 23:52:42.788847 systemd[1]: sshd@10-10.200.20.29:22-10.200.16.10:42064.service: Deactivated successfully. Jan 15 23:52:42.793094 systemd[1]: session-13.scope: Deactivated successfully. Jan 15 23:52:42.795703 systemd-logind[1872]: Session 13 logged out. Waiting for processes to exit. Jan 15 23:52:42.799899 systemd-logind[1872]: Removed session 13. Jan 15 23:52:42.858724 systemd[1]: Started sshd@11-10.200.20.29:22-10.200.16.10:42068.service - OpenSSH per-connection server daemon (10.200.16.10:42068). Jan 15 23:52:43.288716 sshd[5653]: Accepted publickey for core from 10.200.16.10 port 42068 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:52:43.289914 sshd-session[5653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:52:43.296516 systemd-logind[1872]: New session 14 of user core. Jan 15 23:52:43.300852 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 15 23:52:43.671657 sshd[5656]: Connection closed by 10.200.16.10 port 42068 Jan 15 23:52:43.670686 sshd-session[5653]: pam_unix(sshd:session): session closed for user core Jan 15 23:52:43.676065 systemd[1]: sshd@11-10.200.20.29:22-10.200.16.10:42068.service: Deactivated successfully. Jan 15 23:52:43.677974 systemd[1]: session-14.scope: Deactivated successfully. Jan 15 23:52:43.679200 systemd-logind[1872]: Session 14 logged out. Waiting for processes to exit. Jan 15 23:52:43.680361 systemd-logind[1872]: Removed session 14. 
Jan 15 23:52:48.143221 kubelet[3396]: E0115 23:52:48.143167 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jjz87" podUID="51d6de9e-9409-4d26-91e2-95ebd2fa7a0f" Jan 15 23:52:48.143750 kubelet[3396]: E0115 23:52:48.143253 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cfc4fdc86-ln6hf" podUID="15c0f7bb-159e-4e16-a598-2453bb733d6e" Jan 15 23:52:48.750825 systemd[1]: Started sshd@12-10.200.20.29:22-10.200.16.10:42072.service - OpenSSH per-connection server daemon (10.200.16.10:42072). Jan 15 23:52:49.140002 kubelet[3396]: E0115 23:52:49.139694 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-frkkk" podUID="1e8e2e68-e4b7-4282-8047-eae44af7b067" Jan 15 23:52:49.183470 sshd[5675]: Accepted publickey for core from 10.200.16.10 port 42072 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:52:49.184564 sshd-session[5675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:52:49.192425 systemd-logind[1872]: New session 15 of user core. Jan 15 23:52:49.199664 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 15 23:52:49.570617 sshd[5678]: Connection closed by 10.200.16.10 port 42072 Jan 15 23:52:49.570984 sshd-session[5675]: pam_unix(sshd:session): session closed for user core Jan 15 23:52:49.575987 systemd-logind[1872]: Session 15 logged out. Waiting for processes to exit. Jan 15 23:52:49.576465 systemd[1]: sshd@12-10.200.20.29:22-10.200.16.10:42072.service: Deactivated successfully. Jan 15 23:52:49.578479 systemd[1]: session-15.scope: Deactivated successfully. Jan 15 23:52:49.582100 systemd-logind[1872]: Removed session 15. Jan 15 23:52:50.140660 kubelet[3396]: E0115 23:52:50.140577 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f78775f8f-kvtpl" podUID="34651363-b157-4818-abe1-8475b0e41983" Jan 15 23:52:52.140052 kubelet[3396]: E0115 23:52:52.139975 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f777f44fd-2tvx8" podUID="9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e" Jan 15 23:52:54.658717 systemd[1]: Started sshd@13-10.200.20.29:22-10.200.16.10:45304.service - OpenSSH per-connection server daemon (10.200.16.10:45304). Jan 15 23:52:55.117844 sshd[5697]: Accepted publickey for core from 10.200.16.10 port 45304 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:52:55.118615 sshd-session[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:52:55.123151 systemd-logind[1872]: New session 16 of user core. Jan 15 23:52:55.128682 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 15 23:52:55.139472 containerd[1898]: time="2026-01-15T23:52:55.139416696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 23:52:55.362577 containerd[1898]: time="2026-01-15T23:52:55.362526414Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:52:55.368736 containerd[1898]: time="2026-01-15T23:52:55.368429072Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 23:52:55.368736 containerd[1898]: time="2026-01-15T23:52:55.368545685Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 15 23:52:55.368857 kubelet[3396]: E0115 23:52:55.368716 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:52:55.368857 kubelet[3396]: E0115 23:52:55.368771 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:52:55.369107 kubelet[3396]: E0115 23:52:55.368886 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f777f44fd-jl6lj_calico-apiserver(18982541-a4f1-43eb-9f61-620f19f486a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 23:52:55.370342 kubelet[3396]: E0115 23:52:55.370308 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f777f44fd-jl6lj" podUID="18982541-a4f1-43eb-9f61-620f19f486a0" Jan 15 23:52:55.519528 sshd[5700]: Connection closed by 10.200.16.10 port 45304 Jan 15 23:52:55.518653 sshd-session[5697]: pam_unix(sshd:session): session closed for user core Jan 15 23:52:55.522529 systemd[1]: sshd@13-10.200.20.29:22-10.200.16.10:45304.service: Deactivated successfully. Jan 15 23:52:55.524356 systemd[1]: session-16.scope: Deactivated successfully. Jan 15 23:52:55.525238 systemd-logind[1872]: Session 16 logged out. Waiting for processes to exit. Jan 15 23:52:55.526469 systemd-logind[1872]: Removed session 16. Jan 15 23:53:00.598629 systemd[1]: Started sshd@14-10.200.20.29:22-10.200.16.10:54262.service - OpenSSH per-connection server daemon (10.200.16.10:54262). Jan 15 23:53:01.038366 sshd[5712]: Accepted publickey for core from 10.200.16.10 port 54262 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:53:01.040075 sshd-session[5712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:53:01.045749 systemd-logind[1872]: New session 17 of user core. Jan 15 23:53:01.051777 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 15 23:53:01.434451 sshd[5715]: Connection closed by 10.200.16.10 port 54262 Jan 15 23:53:01.436094 sshd-session[5712]: pam_unix(sshd:session): session closed for user core Jan 15 23:53:01.440463 systemd[1]: sshd@14-10.200.20.29:22-10.200.16.10:54262.service: Deactivated successfully. Jan 15 23:53:01.443947 systemd[1]: session-17.scope: Deactivated successfully. Jan 15 23:53:01.445641 systemd-logind[1872]: Session 17 logged out. Waiting for processes to exit. Jan 15 23:53:01.447446 systemd-logind[1872]: Removed session 17. 
Jan 15 23:53:01.516805 systemd[1]: Started sshd@15-10.200.20.29:22-10.200.16.10:54272.service - OpenSSH per-connection server daemon (10.200.16.10:54272). Jan 15 23:53:01.949420 sshd[5752]: Accepted publickey for core from 10.200.16.10 port 54272 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:53:01.950802 sshd-session[5752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:53:01.955145 systemd-logind[1872]: New session 18 of user core. Jan 15 23:53:01.964652 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 15 23:53:02.143581 containerd[1898]: time="2026-01-15T23:53:02.143454854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 23:53:02.402157 containerd[1898]: time="2026-01-15T23:53:02.402089599Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:53:02.405793 containerd[1898]: time="2026-01-15T23:53:02.405738096Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 23:53:02.405949 containerd[1898]: time="2026-01-15T23:53:02.405831652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 15 23:53:02.406151 kubelet[3396]: E0115 23:53:02.406105 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 23:53:02.406784 kubelet[3396]: E0115 23:53:02.406160 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 23:53:02.406784 kubelet[3396]: E0115 23:53:02.406258 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2b5df9145bd241b1b43c1fd247d9a698,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vwsj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cfc4fdc86-ln6hf_calico-system(15c0f7bb-159e-4e16-a598-2453bb733d6e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 23:53:02.408723 containerd[1898]: time="2026-01-15T23:53:02.408680296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 23:53:02.459509 sshd[5755]: Connection closed by 10.200.16.10 port 54272 Jan 15 23:53:02.459775 sshd-session[5752]: pam_unix(sshd:session): session closed for user core Jan 15 23:53:02.463474 systemd-logind[1872]: Session 18 logged out. Waiting for processes to exit. Jan 15 23:53:02.464916 systemd[1]: sshd@15-10.200.20.29:22-10.200.16.10:54272.service: Deactivated successfully. Jan 15 23:53:02.469088 systemd[1]: session-18.scope: Deactivated successfully. Jan 15 23:53:02.471418 systemd-logind[1872]: Removed session 18. Jan 15 23:53:02.546042 systemd[1]: Started sshd@16-10.200.20.29:22-10.200.16.10:54284.service - OpenSSH per-connection server daemon (10.200.16.10:54284). 
Jan 15 23:53:02.660092 containerd[1898]: time="2026-01-15T23:53:02.659958429Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:53:02.663889 containerd[1898]: time="2026-01-15T23:53:02.663829446Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 23:53:02.664270 containerd[1898]: time="2026-01-15T23:53:02.663930841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 15 23:53:02.664306 kubelet[3396]: E0115 23:53:02.664067 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 23:53:02.664306 kubelet[3396]: E0115 23:53:02.664118 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 23:53:02.664938 kubelet[3396]: E0115 23:53:02.664891 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwsj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cfc4fdc86-ln6hf_calico-system(15c0f7bb-159e-4e16-a598-2453bb733d6e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 23:53:02.666135 kubelet[3396]: E0115 23:53:02.666086 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cfc4fdc86-ln6hf" podUID="15c0f7bb-159e-4e16-a598-2453bb733d6e" Jan 15 23:53:03.016911 sshd[5765]: Accepted publickey for core from 10.200.16.10 port 54284 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:53:03.017962 sshd-session[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:53:03.022284 systemd-logind[1872]: New session 19 of user core. Jan 15 23:53:03.029632 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 15 23:53:03.139781 kubelet[3396]: E0115 23:53:03.139454 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-frkkk" podUID="1e8e2e68-e4b7-4282-8047-eae44af7b067" Jan 15 23:53:03.140796 containerd[1898]: time="2026-01-15T23:53:03.140756060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 23:53:03.382537 containerd[1898]: time="2026-01-15T23:53:03.382363658Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:53:03.387258 containerd[1898]: time="2026-01-15T23:53:03.386655426Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 23:53:03.387258 containerd[1898]: time="2026-01-15T23:53:03.386700787Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 15 23:53:03.387640 kubelet[3396]: E0115 23:53:03.387584 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 23:53:03.387640 kubelet[3396]: E0115 23:53:03.387639 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 23:53:03.387939 kubelet[3396]: E0115 23:53:03.387738 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68t72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jjz87_calico-system(51d6de9e-9409-4d26-91e2-95ebd2fa7a0f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 23:53:03.391071 containerd[1898]: time="2026-01-15T23:53:03.390958426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 23:53:03.660799 containerd[1898]: time="2026-01-15T23:53:03.660440907Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:53:03.664331 containerd[1898]: time="2026-01-15T23:53:03.664278506Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 23:53:03.664455 containerd[1898]: 
time="2026-01-15T23:53:03.664396087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 15 23:53:03.664668 kubelet[3396]: E0115 23:53:03.664566 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 23:53:03.664982 kubelet[3396]: E0115 23:53:03.664689 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 23:53:03.665304 kubelet[3396]: E0115 23:53:03.665238 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68t72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jjz87_calico-system(51d6de9e-9409-4d26-91e2-95ebd2fa7a0f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 23:53:03.666613 kubelet[3396]: E0115 23:53:03.666555 3396 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jjz87" podUID="51d6de9e-9409-4d26-91e2-95ebd2fa7a0f" Jan 15 23:53:03.960645 sshd[5768]: Connection closed by 10.200.16.10 port 54284 Jan 15 23:53:03.960972 sshd-session[5765]: pam_unix(sshd:session): session closed for user core Jan 15 23:53:03.967321 systemd[1]: sshd@16-10.200.20.29:22-10.200.16.10:54284.service: Deactivated successfully. Jan 15 23:53:03.969886 systemd[1]: session-19.scope: Deactivated successfully. Jan 15 23:53:03.971956 systemd-logind[1872]: Session 19 logged out. Waiting for processes to exit. Jan 15 23:53:03.973696 systemd-logind[1872]: Removed session 19. Jan 15 23:53:04.044859 systemd[1]: Started sshd@17-10.200.20.29:22-10.200.16.10:54286.service - OpenSSH per-connection server daemon (10.200.16.10:54286). Jan 15 23:53:04.536988 sshd[5791]: Accepted publickey for core from 10.200.16.10 port 54286 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:53:04.538312 sshd-session[5791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:53:04.543149 systemd-logind[1872]: New session 20 of user core. Jan 15 23:53:04.557686 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 15 23:53:05.035870 sshd[5794]: Connection closed by 10.200.16.10 port 54286 Jan 15 23:53:05.036240 sshd-session[5791]: pam_unix(sshd:session): session closed for user core Jan 15 23:53:05.040145 systemd-logind[1872]: Session 20 logged out. Waiting for processes to exit. Jan 15 23:53:05.042090 systemd[1]: sshd@17-10.200.20.29:22-10.200.16.10:54286.service: Deactivated successfully. Jan 15 23:53:05.045476 systemd[1]: session-20.scope: Deactivated successfully. Jan 15 23:53:05.047904 systemd-logind[1872]: Removed session 20. Jan 15 23:53:05.123121 systemd[1]: Started sshd@18-10.200.20.29:22-10.200.16.10:54302.service - OpenSSH per-connection server daemon (10.200.16.10:54302). 
Jan 15 23:53:05.141773 containerd[1898]: time="2026-01-15T23:53:05.140953300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 23:53:05.388386 containerd[1898]: time="2026-01-15T23:53:05.387933792Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:53:05.391878 containerd[1898]: time="2026-01-15T23:53:05.391721910Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 23:53:05.391878 containerd[1898]: time="2026-01-15T23:53:05.391820034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 15 23:53:05.392163 kubelet[3396]: E0115 23:53:05.391977 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 23:53:05.392163 kubelet[3396]: E0115 23:53:05.392038 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 23:53:05.393130 kubelet[3396]: E0115 23:53:05.393072 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6fnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f78775f8f-kvtpl_calico-system(34651363-b157-4818-abe1-8475b0e41983): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 23:53:05.393300 containerd[1898]: time="2026-01-15T23:53:05.393266381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 23:53:05.394270 kubelet[3396]: E0115 23:53:05.394228 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f78775f8f-kvtpl" podUID="34651363-b157-4818-abe1-8475b0e41983" Jan 15 23:53:05.594510 sshd[5804]: Accepted publickey for core from 10.200.16.10 port 54302 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:53:05.595649 sshd-session[5804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:53:05.599956 systemd-logind[1872]: New session 21 of user core. Jan 15 23:53:05.605133 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 15 23:53:05.646586 containerd[1898]: time="2026-01-15T23:53:05.646442356Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:53:05.650197 containerd[1898]: time="2026-01-15T23:53:05.650153584Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 23:53:05.650295 containerd[1898]: time="2026-01-15T23:53:05.650244555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 15 23:53:05.650436 kubelet[3396]: E0115 23:53:05.650396 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:53:05.650482 kubelet[3396]: E0115 23:53:05.650447 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:53:05.650620 kubelet[3396]: E0115 23:53:05.650586 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p282p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f777f44fd-2tvx8_calico-apiserver(9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 23:53:05.652634 kubelet[3396]: E0115 23:53:05.652593 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f777f44fd-2tvx8" podUID="9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e" Jan 15 23:53:05.987531 sshd[5824]: Connection closed by 10.200.16.10 port 54302 Jan 15 23:53:05.986751 sshd-session[5804]: pam_unix(sshd:session): session closed for user core Jan 15 23:53:05.992218 systemd[1]: sshd@18-10.200.20.29:22-10.200.16.10:54302.service: Deactivated successfully. Jan 15 23:53:05.994141 systemd[1]: session-21.scope: Deactivated successfully. Jan 15 23:53:05.995104 systemd-logind[1872]: Session 21 logged out. Waiting for processes to exit. Jan 15 23:53:05.996824 systemd-logind[1872]: Removed session 21. Jan 15 23:53:09.139793 kubelet[3396]: E0115 23:53:09.139744 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f777f44fd-jl6lj" podUID="18982541-a4f1-43eb-9f61-620f19f486a0" Jan 15 23:53:11.064997 systemd[1]: Started sshd@19-10.200.20.29:22-10.200.16.10:36170.service - OpenSSH per-connection server daemon (10.200.16.10:36170). Jan 15 23:53:11.496940 sshd[5846]: Accepted publickey for core from 10.200.16.10 port 36170 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:53:11.498068 sshd-session[5846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:53:11.502451 systemd-logind[1872]: New session 22 of user core. Jan 15 23:53:11.510676 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 15 23:53:11.882281 sshd[5849]: Connection closed by 10.200.16.10 port 36170 Jan 15 23:53:11.882947 sshd-session[5846]: pam_unix(sshd:session): session closed for user core Jan 15 23:53:11.886844 systemd[1]: sshd@19-10.200.20.29:22-10.200.16.10:36170.service: Deactivated successfully. Jan 15 23:53:11.891216 systemd[1]: session-22.scope: Deactivated successfully. Jan 15 23:53:11.895529 systemd-logind[1872]: Session 22 logged out. Waiting for processes to exit. Jan 15 23:53:11.898278 systemd-logind[1872]: Removed session 22. Jan 15 23:53:14.142152 containerd[1898]: time="2026-01-15T23:53:14.141729612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 23:53:14.403125 containerd[1898]: time="2026-01-15T23:53:14.402989679Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:53:14.406946 containerd[1898]: time="2026-01-15T23:53:14.406895606Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 23:53:14.407078 containerd[1898]: time="2026-01-15T23:53:14.406946888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 15 23:53:14.407434 kubelet[3396]: E0115 23:53:14.407192 3396 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 23:53:14.407434 kubelet[3396]: E0115 23:53:14.407250 3396 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 23:53:14.407434 kubelet[3396]: E0115 23:53:14.407377 3396 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmh86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-frkkk_calico-system(1e8e2e68-e4b7-4282-8047-eae44af7b067): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 23:53:14.408615 kubelet[3396]: E0115 23:53:14.408580 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-frkkk" podUID="1e8e2e68-e4b7-4282-8047-eae44af7b067" Jan 15 23:53:15.141622 kubelet[3396]: E0115 
23:53:15.141559 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cfc4fdc86-ln6hf" podUID="15c0f7bb-159e-4e16-a598-2453bb733d6e" Jan 15 23:53:16.139973 kubelet[3396]: E0115 23:53:16.139907 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f78775f8f-kvtpl" podUID="34651363-b157-4818-abe1-8475b0e41983" Jan 15 23:53:16.963127 systemd[1]: Started sshd@20-10.200.20.29:22-10.200.16.10:36180.service - OpenSSH per-connection server daemon (10.200.16.10:36180). Jan 15 23:53:17.398813 sshd[5861]: Accepted publickey for core from 10.200.16.10 port 36180 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:53:17.400039 sshd-session[5861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:53:17.404208 systemd-logind[1872]: New session 23 of user core. Jan 15 23:53:17.408634 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 15 23:53:17.754224 sshd[5864]: Connection closed by 10.200.16.10 port 36180 Jan 15 23:53:17.754123 sshd-session[5861]: pam_unix(sshd:session): session closed for user core Jan 15 23:53:17.757770 systemd-logind[1872]: Session 23 logged out. Waiting for processes to exit. Jan 15 23:53:17.758346 systemd[1]: sshd@20-10.200.20.29:22-10.200.16.10:36180.service: Deactivated successfully. Jan 15 23:53:17.761132 systemd[1]: session-23.scope: Deactivated successfully. Jan 15 23:53:17.763659 systemd-logind[1872]: Removed session 23. 
Jan 15 23:53:18.141975 kubelet[3396]: E0115 23:53:18.141844 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f777f44fd-2tvx8" podUID="9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e" Jan 15 23:53:18.143392 kubelet[3396]: E0115 23:53:18.143357 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jjz87" podUID="51d6de9e-9409-4d26-91e2-95ebd2fa7a0f" Jan 15 23:53:22.141133 kubelet[3396]: E0115 23:53:22.141084 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f777f44fd-jl6lj" podUID="18982541-a4f1-43eb-9f61-620f19f486a0" Jan 15 23:53:22.833474 systemd[1]: Started sshd@21-10.200.20.29:22-10.200.16.10:54488.service - OpenSSH per-connection server daemon (10.200.16.10:54488). Jan 15 23:53:23.261394 sshd[5878]: Accepted publickey for core from 10.200.16.10 port 54488 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:53:23.262637 sshd-session[5878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:53:23.271154 systemd-logind[1872]: New session 24 of user core. Jan 15 23:53:23.274681 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 15 23:53:23.637128 sshd[5881]: Connection closed by 10.200.16.10 port 54488 Jan 15 23:53:23.637932 sshd-session[5878]: pam_unix(sshd:session): session closed for user core Jan 15 23:53:23.642178 systemd[1]: sshd@21-10.200.20.29:22-10.200.16.10:54488.service: Deactivated successfully. Jan 15 23:53:23.643878 systemd[1]: session-24.scope: Deactivated successfully. Jan 15 23:53:23.645026 systemd-logind[1872]: Session 24 logged out. Waiting for processes to exit. Jan 15 23:53:23.645919 systemd-logind[1872]: Removed session 24. 
Jan 15 23:53:27.140561 kubelet[3396]: E0115 23:53:27.140491 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-frkkk" podUID="1e8e2e68-e4b7-4282-8047-eae44af7b067" Jan 15 23:53:27.142182 kubelet[3396]: E0115 23:53:27.142114 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cfc4fdc86-ln6hf" podUID="15c0f7bb-159e-4e16-a598-2453bb733d6e" Jan 15 23:53:28.725069 systemd[1]: Started sshd@22-10.200.20.29:22-10.200.16.10:54502.service - OpenSSH per-connection server daemon (10.200.16.10:54502). Jan 15 23:53:29.154691 sshd[5893]: Accepted publickey for core from 10.200.16.10 port 54502 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:53:29.156212 sshd-session[5893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:53:29.160328 systemd-logind[1872]: New session 25 of user core. Jan 15 23:53:29.167659 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 15 23:53:29.515580 sshd[5896]: Connection closed by 10.200.16.10 port 54502 Jan 15 23:53:29.515083 sshd-session[5893]: pam_unix(sshd:session): session closed for user core Jan 15 23:53:29.517951 systemd-logind[1872]: Session 25 logged out. Waiting for processes to exit. Jan 15 23:53:29.518629 systemd[1]: sshd@22-10.200.20.29:22-10.200.16.10:54502.service: Deactivated successfully. Jan 15 23:53:29.521036 systemd[1]: session-25.scope: Deactivated successfully. Jan 15 23:53:29.523989 systemd-logind[1872]: Removed session 25. 
Jan 15 23:53:30.140854 kubelet[3396]: E0115 23:53:30.140663 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f78775f8f-kvtpl" podUID="34651363-b157-4818-abe1-8475b0e41983" Jan 15 23:53:31.139896 kubelet[3396]: E0115 23:53:31.139851 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f777f44fd-2tvx8" podUID="9074b3a7-d0a0-4425-abc1-2be2e7ba0c1e" Jan 15 23:53:32.143761 kubelet[3396]: E0115 23:53:32.143065 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jjz87" podUID="51d6de9e-9409-4d26-91e2-95ebd2fa7a0f" Jan 15 23:53:34.611132 systemd[1]: Started sshd@23-10.200.20.29:22-10.200.16.10:40064.service - OpenSSH per-connection server daemon (10.200.16.10:40064). Jan 15 23:53:35.104002 sshd[5932]: Accepted publickey for core from 10.200.16.10 port 40064 ssh2: RSA SHA256:57mkOAJm6OW0lyqO6BOSdUs94L4P7b5nWfitFkdMZA8 Jan 15 23:53:35.105474 sshd-session[5932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:53:35.111481 systemd-logind[1872]: New session 26 of user core. Jan 15 23:53:35.118728 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 15 23:53:35.504022 sshd[5935]: Connection closed by 10.200.16.10 port 40064 Jan 15 23:53:35.504637 sshd-session[5932]: pam_unix(sshd:session): session closed for user core Jan 15 23:53:35.508438 systemd[1]: sshd@23-10.200.20.29:22-10.200.16.10:40064.service: Deactivated successfully. Jan 15 23:53:35.510254 systemd[1]: session-26.scope: Deactivated successfully. Jan 15 23:53:35.510985 systemd-logind[1872]: Session 26 logged out. Waiting for processes to exit. Jan 15 23:53:35.512384 systemd-logind[1872]: Removed session 26. 
Jan 15 23:53:37.139809 kubelet[3396]: E0115 23:53:37.139507 3396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f777f44fd-jl6lj" podUID="18982541-a4f1-43eb-9f61-620f19f486a0"