Nov 5 23:40:54.046894 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Nov 5 23:40:54.046911 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Wed Nov 5 22:12:41 -00 2025
Nov 5 23:40:54.046918 kernel: KASLR enabled
Nov 5 23:40:54.046922 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Nov 5 23:40:54.046925 kernel: printk: legacy bootconsole [pl11] enabled
Nov 5 23:40:54.046930 kernel: efi: EFI v2.7 by EDK II
Nov 5 23:40:54.046935 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89c018 RNG=0x3f979998 MEMRESERVE=0x3db83598
Nov 5 23:40:54.046939 kernel: random: crng init done
Nov 5 23:40:54.046943 kernel: secureboot: Secure boot disabled
Nov 5 23:40:54.046947 kernel: ACPI: Early table checksum verification disabled
Nov 5 23:40:54.046951 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Nov 5 23:40:54.046955 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 5 23:40:54.046959 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 5 23:40:54.046963 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Nov 5 23:40:54.046969 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 5 23:40:54.046973 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 5 23:40:54.046977 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 5 23:40:54.046981 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 5 23:40:54.046985 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 5 23:40:54.046991 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 5 23:40:54.046995 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Nov 5 23:40:54.046999 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 5 23:40:54.047003 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Nov 5 23:40:54.047007 kernel: ACPI: Use ACPI SPCR as default console: No
Nov 5 23:40:54.047012 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Nov 5 23:40:54.047016 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Nov 5 23:40:54.047020 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Nov 5 23:40:54.047024 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Nov 5 23:40:54.047028 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Nov 5 23:40:54.047032 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Nov 5 23:40:54.047037 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Nov 5 23:40:54.047042 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Nov 5 23:40:54.047046 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Nov 5 23:40:54.047050 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Nov 5 23:40:54.047054 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Nov 5 23:40:54.047058 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Nov 5 23:40:54.047062 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Nov 5 23:40:54.047066 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Nov 5 23:40:54.047070 kernel: Zone ranges:
Nov 5 23:40:54.047075 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Nov 5 23:40:54.047082 kernel: DMA32 empty
Nov 5 23:40:54.047086 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Nov 5 23:40:54.047090 kernel: Device empty
Nov 5 23:40:54.047095 kernel: Movable zone start for each node
Nov 5 23:40:54.047099 kernel: Early memory node ranges
Nov 5 23:40:54.047103 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Nov 5 23:40:54.047109 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Nov 5 23:40:54.047113 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Nov 5 23:40:54.047117 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Nov 5 23:40:54.047122 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Nov 5 23:40:54.047126 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Nov 5 23:40:54.047130 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Nov 5 23:40:54.047134 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Nov 5 23:40:54.047139 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Nov 5 23:40:54.047143 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Nov 5 23:40:54.047147 kernel: psci: probing for conduit method from ACPI.
Nov 5 23:40:54.047152 kernel: psci: PSCIv1.3 detected in firmware.
Nov 5 23:40:54.047156 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 5 23:40:54.047161 kernel: psci: MIGRATE_INFO_TYPE not supported.
Nov 5 23:40:54.047166 kernel: psci: SMC Calling Convention v1.4
Nov 5 23:40:54.047170 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Nov 5 23:40:54.047174 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Nov 5 23:40:54.047178 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Nov 5 23:40:54.047183 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Nov 5 23:40:54.047187 kernel: pcpu-alloc: [0] 0 [0] 1
Nov 5 23:40:54.047191 kernel: Detected PIPT I-cache on CPU0
Nov 5 23:40:54.047196 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Nov 5 23:40:54.047200 kernel: CPU features: detected: GIC system register CPU interface
Nov 5 23:40:54.047205 kernel: CPU features: detected: Spectre-v4
Nov 5 23:40:54.047209 kernel: CPU features: detected: Spectre-BHB
Nov 5 23:40:54.047214 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 5 23:40:54.047218 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 5 23:40:54.047223 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Nov 5 23:40:54.047227 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 5 23:40:54.047231 kernel: alternatives: applying boot alternatives
Nov 5 23:40:54.047236 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=daaa5e51b65832b359eb98eae08cea627c611d87c128e20a83873de5c8d1aca5
Nov 5 23:40:54.047241 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 5 23:40:54.047245 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 5 23:40:54.047250 kernel: Fallback order for Node 0: 0
Nov 5 23:40:54.047254 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Nov 5 23:40:54.047259 kernel: Policy zone: Normal
Nov 5 23:40:54.047264 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 5 23:40:54.047268 kernel: software IO TLB: area num 2.
Nov 5 23:40:54.047272 kernel: software IO TLB: mapped [mem 0x00000000359a0000-0x00000000399a0000] (64MB)
Nov 5 23:40:54.047277 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 5 23:40:54.047281 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 5 23:40:54.047286 kernel: rcu: RCU event tracing is enabled.
Nov 5 23:40:54.047291 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 5 23:40:54.047295 kernel: Trampoline variant of Tasks RCU enabled.
Nov 5 23:40:54.047300 kernel: Tracing variant of Tasks RCU enabled.
Nov 5 23:40:54.047304 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 5 23:40:54.047308 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 5 23:40:54.047314 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 23:40:54.047318 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 23:40:54.047322 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 5 23:40:54.047327 kernel: GICv3: 960 SPIs implemented
Nov 5 23:40:54.047331 kernel: GICv3: 0 Extended SPIs implemented
Nov 5 23:40:54.047335 kernel: Root IRQ handler: gic_handle_irq
Nov 5 23:40:54.047340 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Nov 5 23:40:54.047344 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Nov 5 23:40:54.047348 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Nov 5 23:40:54.047353 kernel: ITS: No ITS available, not enabling LPIs
Nov 5 23:40:54.047357 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 5 23:40:54.047362 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Nov 5 23:40:54.047367 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 5 23:40:54.047371 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Nov 5 23:40:54.047376 kernel: Console: colour dummy device 80x25
Nov 5 23:40:54.047380 kernel: printk: legacy console [tty1] enabled
Nov 5 23:40:54.047385 kernel: ACPI: Core revision 20240827
Nov 5 23:40:54.047389 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Nov 5 23:40:54.047394 kernel: pid_max: default: 32768 minimum: 301
Nov 5 23:40:54.047399 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 5 23:40:54.047403 kernel: landlock: Up and running.
Nov 5 23:40:54.047408 kernel: SELinux: Initializing.
Nov 5 23:40:54.047413 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 23:40:54.047417 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 23:40:54.047422 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Nov 5 23:40:54.047427 kernel: Hyper-V: Host Build 10.0.26102.1109-1-0
Nov 5 23:40:54.047434 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Nov 5 23:40:54.047440 kernel: rcu: Hierarchical SRCU implementation.
Nov 5 23:40:54.047445 kernel: rcu: Max phase no-delay instances is 400.
Nov 5 23:40:54.047450 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 5 23:40:54.047454 kernel: Remapping and enabling EFI services.
Nov 5 23:40:54.047459 kernel: smp: Bringing up secondary CPUs ...
Nov 5 23:40:54.047464 kernel: Detected PIPT I-cache on CPU1
Nov 5 23:40:54.047469 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Nov 5 23:40:54.047474 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Nov 5 23:40:54.047479 kernel: smp: Brought up 1 node, 2 CPUs
Nov 5 23:40:54.047484 kernel: SMP: Total of 2 processors activated.
Nov 5 23:40:54.047489 kernel: CPU: All CPU(s) started at EL1
Nov 5 23:40:54.047494 kernel: CPU features: detected: 32-bit EL0 Support
Nov 5 23:40:54.047499 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Nov 5 23:40:54.047504 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 5 23:40:54.047509 kernel: CPU features: detected: Common not Private translations
Nov 5 23:40:54.047513 kernel: CPU features: detected: CRC32 instructions
Nov 5 23:40:54.047518 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Nov 5 23:40:54.047523 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 5 23:40:54.047528 kernel: CPU features: detected: LSE atomic instructions
Nov 5 23:40:54.047532 kernel: CPU features: detected: Privileged Access Never
Nov 5 23:40:54.047538 kernel: CPU features: detected: Speculation barrier (SB)
Nov 5 23:40:54.047543 kernel: CPU features: detected: TLB range maintenance instructions
Nov 5 23:40:54.047547 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Nov 5 23:40:54.047552 kernel: CPU features: detected: Scalable Vector Extension
Nov 5 23:40:54.047557 kernel: alternatives: applying system-wide alternatives
Nov 5 23:40:54.047562 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Nov 5 23:40:54.047566 kernel: SVE: maximum available vector length 16 bytes per vector
Nov 5 23:40:54.047571 kernel: SVE: default vector length 16 bytes per vector
Nov 5 23:40:54.047576 kernel: Memory: 3953468K/4194160K available (11136K kernel code, 2450K rwdata, 9076K rodata, 38976K init, 1038K bss, 219504K reserved, 16384K cma-reserved)
Nov 5 23:40:54.047582 kernel: devtmpfs: initialized
Nov 5 23:40:54.047587 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 5 23:40:54.047591 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 5 23:40:54.047596 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Nov 5 23:40:54.047601 kernel: 0 pages in range for non-PLT usage
Nov 5 23:40:54.047606 kernel: 508560 pages in range for PLT usage
Nov 5 23:40:54.047610 kernel: pinctrl core: initialized pinctrl subsystem
Nov 5 23:40:54.047615 kernel: SMBIOS 3.1.0 present.
Nov 5 23:40:54.047620 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Nov 5 23:40:54.047625 kernel: DMI: Memory slots populated: 2/2
Nov 5 23:40:54.049664 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 5 23:40:54.049672 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 5 23:40:54.049677 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 5 23:40:54.049682 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 5 23:40:54.049687 kernel: audit: initializing netlink subsys (disabled)
Nov 5 23:40:54.049692 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Nov 5 23:40:54.049697 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 5 23:40:54.049707 kernel: cpuidle: using governor menu
Nov 5 23:40:54.049712 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 5 23:40:54.049716 kernel: ASID allocator initialised with 32768 entries
Nov 5 23:40:54.049721 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 5 23:40:54.049726 kernel: Serial: AMBA PL011 UART driver
Nov 5 23:40:54.049731 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 5 23:40:54.049736 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 5 23:40:54.049740 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 5 23:40:54.049745 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 5 23:40:54.049751 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 5 23:40:54.049756 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 5 23:40:54.049761 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 5 23:40:54.049766 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 5 23:40:54.049771 kernel: ACPI: Added _OSI(Module Device)
Nov 5 23:40:54.049775 kernel: ACPI: Added _OSI(Processor Device)
Nov 5 23:40:54.049780 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 5 23:40:54.049785 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 5 23:40:54.049789 kernel: ACPI: Interpreter enabled
Nov 5 23:40:54.049795 kernel: ACPI: Using GIC for interrupt routing
Nov 5 23:40:54.049800 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Nov 5 23:40:54.049805 kernel: printk: legacy console [ttyAMA0] enabled
Nov 5 23:40:54.049810 kernel: printk: legacy bootconsole [pl11] disabled
Nov 5 23:40:54.049815 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Nov 5 23:40:54.049820 kernel: ACPI: CPU0 has been hot-added
Nov 5 23:40:54.049824 kernel: ACPI: CPU1 has been hot-added
Nov 5 23:40:54.049829 kernel: iommu: Default domain type: Translated
Nov 5 23:40:54.049834 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 5 23:40:54.049839 kernel: efivars: Registered efivars operations
Nov 5 23:40:54.049844 kernel: vgaarb: loaded
Nov 5 23:40:54.049849 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 5 23:40:54.049854 kernel: VFS: Disk quotas dquot_6.6.0
Nov 5 23:40:54.049859 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 5 23:40:54.049864 kernel: pnp: PnP ACPI init
Nov 5 23:40:54.049868 kernel: pnp: PnP ACPI: found 0 devices
Nov 5 23:40:54.049873 kernel: NET: Registered PF_INET protocol family
Nov 5 23:40:54.049878 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 5 23:40:54.049883 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 5 23:40:54.049888 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 5 23:40:54.049893 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 5 23:40:54.049898 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 5 23:40:54.049903 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 5 23:40:54.049908 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 23:40:54.049912 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 23:40:54.049917 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 5 23:40:54.049922 kernel: PCI: CLS 0 bytes, default 64
Nov 5 23:40:54.049927 kernel: kvm [1]: HYP mode not available
Nov 5 23:40:54.049932 kernel: Initialise system trusted keyrings
Nov 5 23:40:54.049937 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 5 23:40:54.049942 kernel: Key type asymmetric registered
Nov 5 23:40:54.049947 kernel: Asymmetric key parser 'x509' registered
Nov 5 23:40:54.049951 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 5 23:40:54.049956 kernel: io scheduler mq-deadline registered
Nov 5 23:40:54.049961 kernel: io scheduler kyber registered
Nov 5 23:40:54.049966 kernel: io scheduler bfq registered
Nov 5 23:40:54.049970 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 5 23:40:54.049976 kernel: thunder_xcv, ver 1.0
Nov 5 23:40:54.049981 kernel: thunder_bgx, ver 1.0
Nov 5 23:40:54.049986 kernel: nicpf, ver 1.0
Nov 5 23:40:54.049990 kernel: nicvf, ver 1.0
Nov 5 23:40:54.050110 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 5 23:40:54.050160 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-05T23:40:53 UTC (1762386053)
Nov 5 23:40:54.050166 kernel: efifb: probing for efifb
Nov 5 23:40:54.050172 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Nov 5 23:40:54.050177 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Nov 5 23:40:54.050182 kernel: efifb: scrolling: redraw
Nov 5 23:40:54.050187 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 5 23:40:54.050192 kernel: Console: switching to colour frame buffer device 128x48
Nov 5 23:40:54.050196 kernel: fb0: EFI VGA frame buffer device
Nov 5 23:40:54.050201 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Nov 5 23:40:54.050206 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 5 23:40:54.050211 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Nov 5 23:40:54.050217 kernel: watchdog: NMI not fully supported
Nov 5 23:40:54.050222 kernel: watchdog: Hard watchdog permanently disabled
Nov 5 23:40:54.050226 kernel: NET: Registered PF_INET6 protocol family
Nov 5 23:40:54.050231 kernel: Segment Routing with IPv6
Nov 5 23:40:54.050236 kernel: In-situ OAM (IOAM) with IPv6
Nov 5 23:40:54.050241 kernel: NET: Registered PF_PACKET protocol family
Nov 5 23:40:54.050245 kernel: Key type dns_resolver registered
Nov 5 23:40:54.050250 kernel: registered taskstats version 1
Nov 5 23:40:54.050255 kernel: Loading compiled-in X.509 certificates
Nov 5 23:40:54.050260 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 9d5732f5af196e4cfd06fc38e62e061c2a702dfd'
Nov 5 23:40:54.050265 kernel: Demotion targets for Node 0: null
Nov 5 23:40:54.050270 kernel: Key type .fscrypt registered
Nov 5 23:40:54.050275 kernel: Key type fscrypt-provisioning registered
Nov 5 23:40:54.050279 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 5 23:40:54.050284 kernel: ima: Allocated hash algorithm: sha1
Nov 5 23:40:54.050289 kernel: ima: No architecture policies found
Nov 5 23:40:54.050294 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 5 23:40:54.050298 kernel: clk: Disabling unused clocks
Nov 5 23:40:54.050303 kernel: PM: genpd: Disabling unused power domains
Nov 5 23:40:54.050309 kernel: Warning: unable to open an initial console.
Nov 5 23:40:54.050314 kernel: Freeing unused kernel memory: 38976K
Nov 5 23:40:54.050318 kernel: Run /init as init process
Nov 5 23:40:54.050323 kernel: with arguments:
Nov 5 23:40:54.050328 kernel: /init
Nov 5 23:40:54.050333 kernel: with environment:
Nov 5 23:40:54.050337 kernel: HOME=/
Nov 5 23:40:54.050342 kernel: TERM=linux
Nov 5 23:40:54.050348 systemd[1]: Successfully made /usr/ read-only.
Nov 5 23:40:54.050355 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 23:40:54.050361 systemd[1]: Detected virtualization microsoft.
Nov 5 23:40:54.050366 systemd[1]: Detected architecture arm64.
Nov 5 23:40:54.050371 systemd[1]: Running in initrd.
Nov 5 23:40:54.050376 systemd[1]: No hostname configured, using default hostname.
Nov 5 23:40:54.050381 systemd[1]: Hostname set to .
Nov 5 23:40:54.050386 systemd[1]: Initializing machine ID from random generator.
Nov 5 23:40:54.050392 systemd[1]: Queued start job for default target initrd.target.
Nov 5 23:40:54.050397 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 23:40:54.050403 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 23:40:54.050408 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 5 23:40:54.050414 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 23:40:54.050419 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 5 23:40:54.050425 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 5 23:40:54.050431 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 5 23:40:54.050437 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 5 23:40:54.050442 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 23:40:54.050447 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 23:40:54.050453 systemd[1]: Reached target paths.target - Path Units.
Nov 5 23:40:54.050458 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 23:40:54.050463 systemd[1]: Reached target swap.target - Swaps.
Nov 5 23:40:54.050468 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 23:40:54.050474 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 23:40:54.050479 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 23:40:54.050485 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 5 23:40:54.050490 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 5 23:40:54.050495 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 23:40:54.050500 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 23:40:54.050505 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 23:40:54.050511 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 23:40:54.050516 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 5 23:40:54.050522 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 23:40:54.050527 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 5 23:40:54.050532 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 5 23:40:54.050538 systemd[1]: Starting systemd-fsck-usr.service...
Nov 5 23:40:54.050543 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 23:40:54.050548 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 23:40:54.050565 systemd-journald[225]: Collecting audit messages is disabled.
Nov 5 23:40:54.050580 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 23:40:54.050586 systemd-journald[225]: Journal started
Nov 5 23:40:54.050601 systemd-journald[225]: Runtime Journal (/run/log/journal/dbd3ff0419e743189e2bf3f7d52eb622) is 8M, max 78.3M, 70.3M free.
Nov 5 23:40:54.053887 systemd-modules-load[227]: Inserted module 'overlay'
Nov 5 23:40:54.068326 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 23:40:54.068966 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 5 23:40:54.082710 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 5 23:40:54.082729 kernel: Bridge firewalling registered
Nov 5 23:40:54.081772 systemd-modules-load[227]: Inserted module 'br_netfilter'
Nov 5 23:40:54.086084 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 23:40:54.099864 systemd[1]: Finished systemd-fsck-usr.service.
Nov 5 23:40:54.103336 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 23:40:54.116120 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 23:40:54.122886 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 5 23:40:54.137309 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 23:40:54.150741 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 5 23:40:54.156816 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 23:40:54.163765 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 23:40:54.186891 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 23:40:54.192692 systemd-tmpfiles[245]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 5 23:40:54.195168 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 23:40:54.218448 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 23:40:54.224014 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 23:40:54.239500 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 23:40:54.247308 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 5 23:40:54.269779 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 23:40:54.285839 dracut-cmdline[265]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=daaa5e51b65832b359eb98eae08cea627c611d87c128e20a83873de5c8d1aca5
Nov 5 23:40:54.318174 systemd-resolved[266]: Positive Trust Anchors:
Nov 5 23:40:54.318188 systemd-resolved[266]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 23:40:54.318208 systemd-resolved[266]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 23:40:54.320118 systemd-resolved[266]: Defaulting to hostname 'linux'.
Nov 5 23:40:54.321854 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 23:40:54.327466 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 23:40:54.413656 kernel: SCSI subsystem initialized
Nov 5 23:40:54.419643 kernel: Loading iSCSI transport class v2.0-870.
Nov 5 23:40:54.426666 kernel: iscsi: registered transport (tcp)
Nov 5 23:40:54.439752 kernel: iscsi: registered transport (qla4xxx)
Nov 5 23:40:54.439766 kernel: QLogic iSCSI HBA Driver
Nov 5 23:40:54.453408 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 23:40:54.472778 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 23:40:54.479120 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 23:40:54.528124 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 5 23:40:54.534759 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 5 23:40:54.606649 kernel: raid6: neonx8 gen() 18552 MB/s
Nov 5 23:40:54.624636 kernel: raid6: neonx4 gen() 18541 MB/s
Nov 5 23:40:54.643637 kernel: raid6: neonx2 gen() 17085 MB/s
Nov 5 23:40:54.662635 kernel: raid6: neonx1 gen() 15046 MB/s
Nov 5 23:40:54.682636 kernel: raid6: int64x8 gen() 10546 MB/s
Nov 5 23:40:54.701717 kernel: raid6: int64x4 gen() 10609 MB/s
Nov 5 23:40:54.720718 kernel: raid6: int64x2 gen() 8985 MB/s
Nov 5 23:40:54.742761 kernel: raid6: int64x1 gen() 7013 MB/s
Nov 5 23:40:54.742811 kernel: raid6: using algorithm neonx8 gen() 18552 MB/s
Nov 5 23:40:54.764867 kernel: raid6: .... xor() 14890 MB/s, rmw enabled
Nov 5 23:40:54.764876 kernel: raid6: using neon recovery algorithm
Nov 5 23:40:54.773052 kernel: xor: measuring software checksum speed
Nov 5 23:40:54.773062 kernel: 8regs : 28567 MB/sec
Nov 5 23:40:54.775448 kernel: 32regs : 28777 MB/sec
Nov 5 23:40:54.780680 kernel: arm64_neon : 34752 MB/sec
Nov 5 23:40:54.780688 kernel: xor: using function: arm64_neon (34752 MB/sec)
Nov 5 23:40:54.818653 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 5 23:40:54.824684 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 23:40:54.833069 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 23:40:54.859602 systemd-udevd[474]: Using default interface naming scheme 'v255'.
Nov 5 23:40:54.863686 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 23:40:54.869375 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 5 23:40:54.893976 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Nov 5 23:40:54.913186 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 23:40:54.918964 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 23:40:54.964066 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 23:40:54.975499 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 5 23:40:55.036461 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 23:40:55.041203 kernel: hv_vmbus: Vmbus version:5.3
Nov 5 23:40:55.039332 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 23:40:55.055087 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 23:40:55.083074 kernel: hv_vmbus: registering driver hyperv_keyboard
Nov 5 23:40:55.083093 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 5 23:40:55.083100 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Nov 5 23:40:55.083107 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 5 23:40:55.072133 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 23:40:55.092640 kernel: PTP clock support registered
Nov 5 23:40:55.105185 kernel: hv_utils: Registering HyperV Utility Driver
Nov 5 23:40:55.105213 kernel: hv_vmbus: registering driver hv_utils
Nov 5 23:40:55.111039 kernel: hv_utils: Heartbeat IC version 3.0
Nov 5 23:40:55.111071 kernel: hv_utils: Shutdown IC version 3.2
Nov 5 23:40:55.111079 kernel: hv_utils: TimeSync IC version 4.0
Nov 5 23:40:54.616931 systemd-resolved[266]: Clock change detected. Flushing caches.
Nov 5 23:40:54.661880 kernel: hv_vmbus: registering driver hv_storvsc
Nov 5 23:40:54.661898 kernel: hv_vmbus: registering driver hid_hyperv
Nov 5 23:40:54.661904 kernel: scsi host1: storvsc_host_t
Nov 5 23:40:54.662003 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Nov 5 23:40:54.662009 kernel: hv_vmbus: registering driver hv_netvsc
Nov 5 23:40:54.662016 kernel: scsi host0: storvsc_host_t
Nov 5 23:40:54.662087 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Nov 5 23:40:54.662101 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Nov 5 23:40:54.662158 systemd-journald[225]: Time jumped backwards, rotating.
Nov 5 23:40:54.662183 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Nov 5 23:40:54.672211 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 23:40:54.695119 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Nov 5 23:40:54.695265 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Nov 5 23:40:54.698363 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 5 23:40:54.698583 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Nov 5 23:40:54.705585 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Nov 5 23:40:54.705698 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Nov 5 23:40:54.718592 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#272 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Nov 5 23:40:54.733546 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 5 23:40:54.733588 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 5 23:40:54.741221 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Nov 5 23:40:54.741415 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 5 23:40:54.742590 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Nov 5 23:40:54.753585 kernel: hv_netvsc 00224876-cad9-0022-4876-cad900224876 eth0: VF slot 1 added
Nov 5 23:40:54.762589 kernel: hv_vmbus: registering driver hv_pci
Nov 5 23:40:54.779579 kernel: hv_pci 5f2face4-7de1-4c4a-a04d-4d58dfc6dc46: PCI VMBus probing: Using version 0x10004
Nov 5 23:40:54.779724 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#229 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Nov 5 23:40:54.779794 kernel: hv_pci 5f2face4-7de1-4c4a-a04d-4d58dfc6dc46: PCI host bridge to bus 7de1:00
Nov 5 23:40:54.784617 kernel: pci_bus 7de1:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Nov 5 23:40:54.789621 kernel: pci_bus 7de1:00: No busn resource found for root bus, will use [bus 00-ff]
Nov 5 23:40:54.802763 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Nov 5 23:40:54.802870 kernel: pci 7de1:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Nov 5 23:40:54.809618 kernel: pci 7de1:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Nov 5 23:40:54.823612 kernel: pci 7de1:00:02.0: enabling Extended Tags
Nov 5 23:40:54.839681 kernel: pci 7de1:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 7de1:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Nov 5 23:40:54.848926 kernel: pci_bus 7de1:00: busn_res: [bus 00-ff] end is updated to 00
Nov 5 23:40:54.849067 kernel: pci 7de1:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Nov 5 23:40:54.907908 kernel: mlx5_core 7de1:00:02.0: enabling device (0000 -> 0002)
Nov 5 23:40:54.916541 kernel: mlx5_core 7de1:00:02.0: PTM is not supported by PCIe
Nov 5 23:40:54.916664 kernel: mlx5_core 7de1:00:02.0: firmware version: 16.30.5006
Nov 5 23:40:55.087877 kernel: hv_netvsc 00224876-cad9-0022-4876-cad900224876 eth0: VF registering: eth1
Nov 5 23:40:55.088075 kernel: mlx5_core 7de1:00:02.0 eth1: joined to eth0
Nov 5 23:40:55.093302 kernel: mlx5_core 7de1:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Nov 5 23:40:55.102587 kernel: mlx5_core 7de1:00:02.0 enP32225s1: renamed from eth1
Nov 5 23:40:55.345615 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Nov 5 23:40:55.372236 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Nov 5 23:40:55.415645 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Nov 5 23:40:55.426007 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Nov 5 23:40:55.431303 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Nov 5 23:40:55.444844 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 5 23:40:55.477400 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 5 23:40:55.487094 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#198 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Nov 5 23:40:55.484740 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 23:40:55.494252 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 23:40:55.505107 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 23:40:55.515463 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 5 23:40:55.534914 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 5 23:40:55.551784 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 23:40:56.546221 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#278 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Nov 5 23:40:56.557645 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 5 23:40:56.557841 disk-uuid[649]: The operation has completed successfully.
Nov 5 23:40:56.620004 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 5 23:40:56.620098 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 5 23:40:56.653052 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 5 23:40:56.673867 sh[820]: Success
Nov 5 23:40:56.707624 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 5 23:40:56.707684 kernel: device-mapper: uevent: version 1.0.3
Nov 5 23:40:56.712631 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 5 23:40:56.721585 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Nov 5 23:40:57.056589 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 23:40:57.064684 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 5 23:40:57.081628 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 5 23:40:57.098751 kernel: BTRFS: device fsid 223300c7-37a4-4131-896a-4d331c3aa134 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (837)
Nov 5 23:40:57.098779 kernel: BTRFS info (device dm-0): first mount of filesystem 223300c7-37a4-4131-896a-4d331c3aa134
Nov 5 23:40:57.107974 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 5 23:40:57.449547 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 5 23:40:57.449634 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 5 23:40:57.485598 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 5 23:40:57.489753 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 23:40:57.498187 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 5 23:40:57.498854 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 5 23:40:57.521142 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 5 23:40:57.558764 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (879)
Nov 5 23:40:57.558812 kernel: BTRFS info (device sda6): first mount of filesystem 7724fea6-57ae-4252-b021-4aac39807031
Nov 5 23:40:57.563491 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 5 23:40:57.608628 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 23:40:57.620225 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 23:40:57.643057 kernel: BTRFS info (device sda6): turning on async discard
Nov 5 23:40:57.643075 kernel: BTRFS info (device sda6): enabling free space tree
Nov 5 23:40:57.643082 kernel: BTRFS info (device sda6): last unmount of filesystem 7724fea6-57ae-4252-b021-4aac39807031
Nov 5 23:40:57.645717 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 5 23:40:57.651840 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 5 23:40:57.678144 systemd-networkd[1003]: lo: Link UP
Nov 5 23:40:57.678155 systemd-networkd[1003]: lo: Gained carrier
Nov 5 23:40:57.678907 systemd-networkd[1003]: Enumeration completed
Nov 5 23:40:57.681274 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 23:40:57.686298 systemd-networkd[1003]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 5 23:40:57.686301 systemd-networkd[1003]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 23:40:57.686753 systemd[1]: Reached target network.target - Network.
Nov 5 23:40:57.759590 kernel: mlx5_core 7de1:00:02.0 enP32225s1: Link up
Nov 5 23:40:57.793073 systemd-networkd[1003]: enP32225s1: Link UP
Nov 5 23:40:57.796734 kernel: hv_netvsc 00224876-cad9-0022-4876-cad900224876 eth0: Data path switched to VF: enP32225s1
Nov 5 23:40:57.793136 systemd-networkd[1003]: eth0: Link UP
Nov 5 23:40:57.793223 systemd-networkd[1003]: eth0: Gained carrier
Nov 5 23:40:57.793236 systemd-networkd[1003]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 5 23:40:57.802713 systemd-networkd[1003]: enP32225s1: Gained carrier
Nov 5 23:40:57.830604 systemd-networkd[1003]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16
Nov 5 23:40:58.947685 ignition[1008]: Ignition 2.22.0
Nov 5 23:40:58.947697 ignition[1008]: Stage: fetch-offline
Nov 5 23:40:58.947789 ignition[1008]: no configs at "/usr/lib/ignition/base.d"
Nov 5 23:40:58.951766 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 23:40:58.947795 ignition[1008]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 5 23:40:58.959294 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 5 23:40:58.947867 ignition[1008]: parsed url from cmdline: ""
Nov 5 23:40:58.947869 ignition[1008]: no config URL provided
Nov 5 23:40:58.947873 ignition[1008]: reading system config file "/usr/lib/ignition/user.ign"
Nov 5 23:40:58.947877 ignition[1008]: no config at "/usr/lib/ignition/user.ign"
Nov 5 23:40:58.947880 ignition[1008]: failed to fetch config: resource requires networking
Nov 5 23:40:58.950651 ignition[1008]: Ignition finished successfully
Nov 5 23:40:58.988587 ignition[1020]: Ignition 2.22.0
Nov 5 23:40:58.988591 ignition[1020]: Stage: fetch
Nov 5 23:40:58.988766 ignition[1020]: no configs at "/usr/lib/ignition/base.d"
Nov 5 23:40:58.988773 ignition[1020]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 5 23:40:58.988826 ignition[1020]: parsed url from cmdline: ""
Nov 5 23:40:58.988828 ignition[1020]: no config URL provided
Nov 5 23:40:58.988831 ignition[1020]: reading system config file "/usr/lib/ignition/user.ign"
Nov 5 23:40:58.988836 ignition[1020]: no config at "/usr/lib/ignition/user.ign"
Nov 5 23:40:58.988850 ignition[1020]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Nov 5 23:40:59.051149 ignition[1020]: GET result: OK
Nov 5 23:40:59.051209 ignition[1020]: config has been read from IMDS userdata
Nov 5 23:40:59.051228 ignition[1020]: parsing config with SHA512: c9af328825bf266ecd963c28cfcccd0f8c5857708bfb620474aadc9d35ec3793b721e1d51a6f5c1712f99210b9d3a8edfde1c587f6682f2cf36393479309f4cc
Nov 5 23:40:59.054090 unknown[1020]: fetched base config from "system"
Nov 5 23:40:59.054556 ignition[1020]: fetch: fetch complete
Nov 5 23:40:59.054103 unknown[1020]: fetched base config from "system"
Nov 5 23:40:59.054560 ignition[1020]: fetch: fetch passed
Nov 5 23:40:59.054107 unknown[1020]: fetched user config from "azure"
Nov 5 23:40:59.054616 ignition[1020]: Ignition finished successfully
Nov 5 23:40:59.057995 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 5 23:40:59.065245 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 5 23:40:59.105119 ignition[1027]: Ignition 2.22.0
Nov 5 23:40:59.107425 ignition[1027]: Stage: kargs
Nov 5 23:40:59.107624 ignition[1027]: no configs at "/usr/lib/ignition/base.d"
Nov 5 23:40:59.112612 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 5 23:40:59.107632 ignition[1027]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 5 23:40:59.118119 systemd-networkd[1003]: eth0: Gained IPv6LL
Nov 5 23:40:59.108153 ignition[1027]: kargs: kargs passed
Nov 5 23:40:59.118900 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 5 23:40:59.108192 ignition[1027]: Ignition finished successfully
Nov 5 23:40:59.148391 ignition[1033]: Ignition 2.22.0
Nov 5 23:40:59.148403 ignition[1033]: Stage: disks
Nov 5 23:40:59.148588 ignition[1033]: no configs at "/usr/lib/ignition/base.d"
Nov 5 23:40:59.151905 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 5 23:40:59.148595 ignition[1033]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 5 23:40:59.157195 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 5 23:40:59.149140 ignition[1033]: disks: disks passed
Nov 5 23:40:59.164844 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 5 23:40:59.149184 ignition[1033]: Ignition finished successfully
Nov 5 23:40:59.173082 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 23:40:59.181477 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 23:40:59.189463 systemd[1]: Reached target basic.target - Basic System.
Nov 5 23:40:59.196832 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 5 23:40:59.275674 systemd-fsck[1041]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Nov 5 23:40:59.282189 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 5 23:40:59.288886 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 5 23:41:01.459591 kernel: EXT4-fs (sda9): mounted filesystem de3d89fd-ab21-4d05-b3c1-f0d3e7ce9725 r/w with ordered data mode. Quota mode: none.
Nov 5 23:41:01.459756 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 5 23:41:01.463567 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 5 23:41:01.500380 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 23:41:01.517854 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 5 23:41:01.533711 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 5 23:41:01.559814 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1055)
Nov 5 23:41:01.559835 kernel: BTRFS info (device sda6): first mount of filesystem 7724fea6-57ae-4252-b021-4aac39807031
Nov 5 23:41:01.559843 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 5 23:41:01.539758 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 5 23:41:01.539786 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 23:41:01.550910 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 5 23:41:01.596438 kernel: BTRFS info (device sda6): turning on async discard
Nov 5 23:41:01.596456 kernel: BTRFS info (device sda6): enabling free space tree
Nov 5 23:41:01.575047 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 5 23:41:01.593589 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 23:41:02.063426 coreos-metadata[1057]: Nov 05 23:41:02.063 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Nov 5 23:41:02.071617 coreos-metadata[1057]: Nov 05 23:41:02.071 INFO Fetch successful
Nov 5 23:41:02.075689 coreos-metadata[1057]: Nov 05 23:41:02.075 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Nov 5 23:41:02.091155 coreos-metadata[1057]: Nov 05 23:41:02.091 INFO Fetch successful
Nov 5 23:41:02.104912 coreos-metadata[1057]: Nov 05 23:41:02.104 INFO wrote hostname ci-4459.1.0-n-7f88f0cba0 to /sysroot/etc/hostname
Nov 5 23:41:02.111630 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 5 23:41:02.432590 initrd-setup-root[1085]: cut: /sysroot/etc/passwd: No such file or directory
Nov 5 23:41:02.473596 initrd-setup-root[1092]: cut: /sysroot/etc/group: No such file or directory
Nov 5 23:41:02.494432 initrd-setup-root[1099]: cut: /sysroot/etc/shadow: No such file or directory
Nov 5 23:41:02.527588 initrd-setup-root[1106]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 5 23:41:03.769248 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 5 23:41:03.774599 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 5 23:41:03.790238 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 5 23:41:03.802012 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 5 23:41:03.811585 kernel: BTRFS info (device sda6): last unmount of filesystem 7724fea6-57ae-4252-b021-4aac39807031
Nov 5 23:41:03.831343 ignition[1175]: INFO : Ignition 2.22.0
Nov 5 23:41:03.835247 ignition[1175]: INFO : Stage: mount
Nov 5 23:41:03.835247 ignition[1175]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 23:41:03.835247 ignition[1175]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 5 23:41:03.835247 ignition[1175]: INFO : mount: mount passed
Nov 5 23:41:03.835247 ignition[1175]: INFO : Ignition finished successfully
Nov 5 23:41:03.837074 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 5 23:41:03.842105 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 5 23:41:03.851890 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 5 23:41:03.873671 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 23:41:03.896691 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1187)
Nov 5 23:41:03.906037 kernel: BTRFS info (device sda6): first mount of filesystem 7724fea6-57ae-4252-b021-4aac39807031
Nov 5 23:41:03.906055 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 5 23:41:03.915270 kernel: BTRFS info (device sda6): turning on async discard
Nov 5 23:41:03.915287 kernel: BTRFS info (device sda6): enabling free space tree
Nov 5 23:41:03.917329 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 23:41:03.941589 ignition[1205]: INFO : Ignition 2.22.0
Nov 5 23:41:03.941589 ignition[1205]: INFO : Stage: files
Nov 5 23:41:03.941589 ignition[1205]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 23:41:03.941589 ignition[1205]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 5 23:41:03.956029 ignition[1205]: DEBUG : files: compiled without relabeling support, skipping
Nov 5 23:41:03.987467 ignition[1205]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 5 23:41:03.987467 ignition[1205]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 5 23:41:04.049555 ignition[1205]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 5 23:41:04.055132 ignition[1205]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 5 23:41:04.055132 ignition[1205]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 5 23:41:04.049935 unknown[1205]: wrote ssh authorized keys file for user: core
Nov 5 23:41:04.093725 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 5 23:41:04.101215 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Nov 5 23:41:04.125939 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 5 23:41:04.210062 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 5 23:41:04.217823 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 5 23:41:04.217823 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 5 23:41:04.217823 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 23:41:04.217823 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 23:41:04.217823 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 23:41:04.217823 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 23:41:04.217823 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 23:41:04.217823 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 23:41:04.272246 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 23:41:04.272246 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 23:41:04.272246 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 5 23:41:04.272246 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 5 23:41:04.272246 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 5 23:41:04.272246 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Nov 5 23:41:04.831872 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 5 23:41:05.812871 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 5 23:41:05.812871 ignition[1205]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 5 23:41:05.845169 ignition[1205]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 23:41:05.858230 ignition[1205]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 23:41:05.858230 ignition[1205]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 5 23:41:05.858230 ignition[1205]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 5 23:41:05.888124 ignition[1205]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 5 23:41:05.888124 ignition[1205]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 5 23:41:05.888124 ignition[1205]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 5 23:41:05.888124 ignition[1205]: INFO : files: files passed Nov 5 23:41:05.888124 ignition[1205]: INFO : Ignition finished successfully Nov 5 23:41:05.867512 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 5 23:41:05.877416 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 5 23:41:05.904252 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 5 23:41:05.922700 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 5 23:41:05.922780 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 5 23:41:05.954558 initrd-setup-root-after-ignition[1233]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 23:41:05.954558 initrd-setup-root-after-ignition[1233]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 5 23:41:05.972924 initrd-setup-root-after-ignition[1237]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 23:41:05.955472 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 23:41:05.967022 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 5 23:41:05.978711 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 5 23:41:06.029966 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 5 23:41:06.030075 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 5 23:41:06.039271 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Nov 5 23:41:06.047904 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 5 23:41:06.055849 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 5 23:41:06.057669 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 5 23:41:06.091959 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 23:41:06.098616 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 5 23:41:06.123103 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 5 23:41:06.128021 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 23:41:06.136657 systemd[1]: Stopped target timers.target - Timer Units. Nov 5 23:41:06.145197 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 5 23:41:06.145314 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 23:41:06.156887 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 5 23:41:06.161006 systemd[1]: Stopped target basic.target - Basic System. Nov 5 23:41:06.168987 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 5 23:41:06.177393 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 23:41:06.185218 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 5 23:41:06.193804 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 5 23:41:06.202629 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 5 23:41:06.210948 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 23:41:06.220058 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 5 23:41:06.227817 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 5 23:41:06.236583 systemd[1]: Stopped target swap.target - Swaps. Nov 5 23:41:06.243636 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 5 23:41:06.243748 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 5 23:41:06.254775 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 5 23:41:06.262653 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 23:41:06.271438 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 5 23:41:06.271501 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 23:41:06.280425 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 5 23:41:06.280526 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 5 23:41:06.292826 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 5 23:41:06.292916 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 23:41:06.298255 systemd[1]: ignition-files.service: Deactivated successfully. Nov 5 23:41:06.298331 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 5 23:41:06.307414 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 5 23:41:06.307477 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Nov 5 23:41:06.361645 ignition[1257]: INFO : Ignition 2.22.0 Nov 5 23:41:06.361645 ignition[1257]: INFO : Stage: umount Nov 5 23:41:06.361645 ignition[1257]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 23:41:06.361645 ignition[1257]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 5 23:41:06.361645 ignition[1257]: INFO : umount: umount passed Nov 5 23:41:06.361645 ignition[1257]: INFO : Ignition finished successfully Nov 5 23:41:06.316645 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 5 23:41:06.341149 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 5 23:41:06.360000 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 5 23:41:06.360118 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 23:41:06.365079 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 5 23:41:06.365151 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 23:41:06.376026 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 5 23:41:06.377598 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 5 23:41:06.384555 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 5 23:41:06.384677 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 5 23:41:06.393299 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 5 23:41:06.393337 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 5 23:41:06.401277 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 5 23:41:06.401307 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 5 23:41:06.409470 systemd[1]: Stopped target network.target - Network. Nov 5 23:41:06.417291 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 5 23:41:06.417340 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 23:41:06.425778 systemd[1]: Stopped target paths.target - Path Units. Nov 5 23:41:06.433229 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 5 23:41:06.436590 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 23:41:06.447972 systemd[1]: Stopped target slices.target - Slice Units. Nov 5 23:41:06.455757 systemd[1]: Stopped target sockets.target - Socket Units. Nov 5 23:41:06.463055 systemd[1]: iscsid.socket: Deactivated successfully. Nov 5 23:41:06.463091 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 23:41:06.470986 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 5 23:41:06.471019 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 23:41:06.479047 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 5 23:41:06.479093 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 5 23:41:06.487043 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 5 23:41:06.487074 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 5 23:41:06.494871 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 5 23:41:06.502272 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 5 23:41:06.515794 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 5 23:41:06.516269 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 5 23:41:06.516342 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Nov 5 23:41:06.525481 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 5 23:41:06.525563 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 5 23:41:06.538820 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 5 23:41:06.538987 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 5 23:41:06.539087 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 5 23:41:06.550327 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 5 23:41:06.716261 kernel: hv_netvsc 00224876-cad9-0022-4876-cad900224876 eth0: Data path switched from VF: enP32225s1 Nov 5 23:41:06.550515 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 5 23:41:06.550658 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 5 23:41:06.561917 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 5 23:41:06.568160 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 5 23:41:06.568198 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 5 23:41:06.576351 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 5 23:41:06.576415 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 5 23:41:06.585056 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 5 23:41:06.598607 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 5 23:41:06.598667 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 23:41:06.607845 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 5 23:41:06.607880 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 5 23:41:06.616762 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 5 23:41:06.616795 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 5 23:41:06.621074 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 5 23:41:06.621103 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 23:41:06.633321 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 23:41:06.640441 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 5 23:41:06.640492 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 5 23:41:06.675367 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 5 23:41:06.675546 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 23:41:06.684293 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 5 23:41:06.684339 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 5 23:41:06.692236 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 5 23:41:06.692265 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 23:41:06.711638 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 5 23:41:06.711708 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 5 23:41:06.720354 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 5 23:41:06.720404 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 5 23:41:06.731892 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Nov 5 23:41:06.731934 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 23:41:06.749749 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 5 23:41:06.762005 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 5 23:41:06.762071 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 23:41:06.775440 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 5 23:41:06.775481 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 23:41:06.787305 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 5 23:41:06.787351 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 23:41:06.796412 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 5 23:41:06.796448 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 23:41:06.801895 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 23:41:06.801928 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 23:41:06.815727 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 5 23:41:06.815769 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Nov 5 23:41:06.815790 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 5 23:41:06.815814 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 5 23:41:06.816061 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 5 23:41:06.816147 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 5 23:41:06.822900 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 5 23:41:06.822965 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 5 23:41:06.832461 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 5 23:41:06.842297 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 5 23:41:07.007414 systemd-journald[225]: Received SIGTERM from PID 1 (systemd). Nov 5 23:41:06.882473 systemd[1]: Switching root. Nov 5 23:41:07.010122 systemd-journald[225]: Journal stopped Nov 5 23:41:32.530563 kernel: SELinux: policy capability network_peer_controls=1 Nov 5 23:41:32.532585 kernel: SELinux: policy capability open_perms=1 Nov 5 23:41:32.532601 kernel: SELinux: policy capability extended_socket_class=1 Nov 5 23:41:32.532608 kernel: SELinux: policy capability always_check_network=0 Nov 5 23:41:32.532613 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 5 23:41:32.532622 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 5 23:41:32.532628 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 5 23:41:32.532634 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 5 23:41:32.532639 kernel: SELinux: policy capability userspace_initial_context=0 Nov 5 23:41:32.532645 kernel: audit: type=1403 audit(1762386068.593:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 5 23:41:32.532652 systemd[1]: Successfully loaded SELinux policy in 216.833ms. Nov 5 23:41:32.532660 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.351ms. 
Nov 5 23:41:32.532667 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 23:41:32.532673 systemd[1]: Detected virtualization microsoft. Nov 5 23:41:32.532679 systemd[1]: Detected architecture arm64. Nov 5 23:41:32.532687 systemd[1]: Detected first boot. Nov 5 23:41:32.532694 systemd[1]: Hostname set to . Nov 5 23:41:32.532700 systemd[1]: Initializing machine ID from random generator. Nov 5 23:41:32.532706 zram_generator::config[1299]: No configuration found. Nov 5 23:41:32.532713 kernel: NET: Registered PF_VSOCK protocol family Nov 5 23:41:32.532718 systemd[1]: Populated /etc with preset unit settings. Nov 5 23:41:32.532725 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 5 23:41:32.532731 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 5 23:41:32.532738 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 5 23:41:32.532743 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 5 23:41:32.532749 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 5 23:41:32.532756 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 5 23:41:32.532762 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 5 23:41:32.532768 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 5 23:41:32.532774 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 5 23:41:32.532781 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 5 23:41:32.532787 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 5 23:41:32.532793 systemd[1]: Created slice user.slice - User and Session Slice. Nov 5 23:41:32.532799 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 23:41:32.532805 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 23:41:32.532811 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 5 23:41:32.532818 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 5 23:41:32.532824 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 5 23:41:32.532831 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 23:41:32.532837 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 5 23:41:32.532844 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 23:41:32.532851 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 23:41:32.532857 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 5 23:41:32.532863 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 5 23:41:32.532869 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 5 23:41:32.532875 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
Nov 5 23:41:32.532882 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 23:41:32.532888 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 23:41:32.532894 systemd[1]: Reached target slices.target - Slice Units. Nov 5 23:41:32.532900 systemd[1]: Reached target swap.target - Swaps. Nov 5 23:41:32.532906 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 5 23:41:32.532912 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 5 23:41:32.532920 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 5 23:41:32.532926 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 23:41:32.532932 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 23:41:32.532938 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 23:41:32.532944 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 5 23:41:32.532951 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 5 23:41:32.532958 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 5 23:41:32.532965 systemd[1]: Mounting media.mount - External Media Directory... Nov 5 23:41:32.532971 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 5 23:41:32.532977 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 5 23:41:32.532983 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 5 23:41:32.532990 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 5 23:41:32.532996 systemd[1]: Reached target machines.target - Containers. Nov 5 23:41:32.533003 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 5 23:41:32.533009 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 23:41:32.533016 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 23:41:32.533023 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 5 23:41:32.533029 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 23:41:32.533035 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 23:41:32.533041 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 23:41:32.533047 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 5 23:41:32.533053 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 23:41:32.533060 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 5 23:41:32.533066 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 5 23:41:32.533073 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 5 23:41:32.533080 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 5 23:41:32.533087 systemd[1]: Stopped systemd-fsck-usr.service. 
Nov 5 23:41:32.533093 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 23:41:32.533099 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 23:41:32.533106 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 23:41:32.533112 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 23:41:32.533118 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 5 23:41:32.533125 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 5 23:41:32.533131 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 23:41:32.533137 kernel: fuse: init (API version 7.41) Nov 5 23:41:32.533143 systemd[1]: verity-setup.service: Deactivated successfully. Nov 5 23:41:32.533149 systemd[1]: Stopped verity-setup.service. Nov 5 23:41:32.533155 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 5 23:41:32.533162 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 5 23:41:32.533168 systemd[1]: Mounted media.mount - External Media Directory. Nov 5 23:41:32.533174 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 5 23:41:32.533181 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 5 23:41:32.533187 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 5 23:41:32.533193 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 23:41:32.533199 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 5 23:41:32.533205 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 5 23:41:32.533211 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 23:41:32.533218 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 23:41:32.533224 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 23:41:32.533232 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 23:41:32.533238 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 5 23:41:32.533244 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 5 23:41:32.533250 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 5 23:41:32.533256 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 5 23:41:32.533263 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 23:41:32.533269 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 5 23:41:32.533275 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 5 23:41:32.533282 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 23:41:32.533288 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 23:41:32.533321 systemd-journald[1379]: Collecting audit messages is disabled. Nov 5 23:41:32.533335 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Nov 5 23:41:32.533344 systemd-journald[1379]: Journal started Nov 5 23:41:32.533359 systemd-journald[1379]: Runtime Journal (/run/log/journal/e3399337a57f431c817de4e518944c26) is 8M, max 78.3M, 70.3M free. Nov 5 23:41:30.345290 systemd[1]: Queued start job for default target multi-user.target. Nov 5 23:41:30.352067 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 5 23:41:30.352447 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 5 23:41:30.352732 systemd[1]: systemd-journald.service: Consumed 2.345s CPU time. Nov 5 23:41:32.538641 kernel: loop: module loaded Nov 5 23:41:32.554089 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 23:41:32.554787 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 23:41:32.556619 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 23:41:32.567958 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 5 23:41:32.567997 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 23:41:32.573973 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 5 23:41:32.580448 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 5 23:41:32.692517 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 23:41:32.825101 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 5 23:41:32.836331 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 5 23:41:32.840935 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 23:41:32.841697 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 5 23:41:32.847129 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 23:41:32.849701 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 5 23:41:32.860590 kernel: ACPI: bus type drm_connector registered Nov 5 23:41:32.861110 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 23:41:32.864609 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 23:41:32.878650 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 5 23:41:32.908799 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 23:41:33.036608 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 23:41:33.042387 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 23:41:33.175893 systemd-journald[1379]: Time spent on flushing to /var/log/journal/e3399337a57f431c817de4e518944c26 is 1.444734s for 933 entries. Nov 5 23:41:33.175893 systemd-journald[1379]: System Journal (/var/log/journal/e3399337a57f431c817de4e518944c26) is 11.8M, max 2.6G, 2.6G free. Nov 5 23:41:39.566236 systemd-journald[1379]: Received client request to flush runtime journal. Nov 5 23:41:39.566302 kernel: loop0: detected capacity change from 0 to 211168 Nov 5 23:41:39.566322 systemd-journald[1379]: /var/log/journal/e3399337a57f431c817de4e518944c26/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. 
Nov 5 23:41:39.566339 systemd-journald[1379]: Rotating system journal. Nov 5 23:41:39.566358 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 5 23:41:39.566371 kernel: loop1: detected capacity change from 0 to 119368 Nov 5 23:41:33.587082 systemd-tmpfiles[1393]: ACLs are not supported, ignoring. Nov 5 23:41:33.587091 systemd-tmpfiles[1393]: ACLs are not supported, ignoring. Nov 5 23:41:33.734919 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 23:41:34.078797 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 23:41:34.271957 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 5 23:41:34.285475 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 5 23:41:34.297784 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 5 23:41:34.308708 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 5 23:41:34.314797 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 5 23:41:39.568239 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 5 23:41:39.921001 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 5 23:41:39.926352 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 23:41:39.945387 systemd-tmpfiles[1458]: ACLs are not supported, ignoring. Nov 5 23:41:39.945673 systemd-tmpfiles[1458]: ACLs are not supported, ignoring. Nov 5 23:41:39.949873 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 23:41:39.954889 kernel: loop2: detected capacity change from 0 to 100632 Nov 5 23:41:39.968731 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 5 23:41:39.970637 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 5 23:41:39.975908 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 5 23:41:39.982540 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 23:41:40.004484 systemd-udevd[1463]: Using default interface naming scheme 'v255'. Nov 5 23:41:43.426649 kernel: loop3: detected capacity change from 0 to 27936 Nov 5 23:41:45.474157 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 23:41:45.483794 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 23:41:45.514561 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Nov 5 23:41:45.796268 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 5 23:41:45.864594 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 5 23:41:46.096601 kernel: hv_vmbus: registering driver hv_balloon Nov 5 23:41:46.096698 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Nov 5 23:41:46.099937 kernel: hv_balloon: Memory hot add disabled on ARM64 Nov 5 23:41:46.100069 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
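The journald entries above report the runtime and system journal sizes and the flush to /var/log/journal/e3399337a57f431c817de4e518944c26. A small sketch, assuming journalctl is available on the host, that reproduces the same disk-usage view:

```python
# Sketch: report persistent journal usage for the machine-id directory named in the log.
import subprocess
from pathlib import Path

JOURNAL_DIR = Path("/var/log/journal/e3399337a57f431c817de4e518944c26")

def journal_usage() -> None:
    # Overall archived + active journal usage, as journalctl reports it.
    out = subprocess.run(["journalctl", "--disk-usage"],
                         capture_output=True, text=True, check=True)
    print(out.stdout.strip())
    if JOURNAL_DIR.is_dir():
        total = sum(f.stat().st_size for f in JOURNAL_DIR.glob("*.journal"))
        print(f"{JOURNAL_DIR}: {total / 1024 / 1024:.1f} MiB in journal files")

if __name__ == "__main__":
    journal_usage()
```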
Nov 5 23:41:46.119643 kernel: hv_vmbus: registering driver hyperv_fb Nov 5 23:41:46.121833 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Nov 5 23:41:46.129039 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Nov 5 23:41:46.132620 kernel: Console: switching to colour dummy device 80x25 Nov 5 23:41:46.137258 kernel: mousedev: PS/2 mouse device common for all mice Nov 5 23:41:46.144455 kernel: Console: switching to colour frame buffer device 128x48 Nov 5 23:41:46.149910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 23:41:46.165227 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 23:41:46.166040 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 23:41:46.172554 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 5 23:41:46.174226 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 23:41:46.259903 systemd-networkd[1479]: lo: Link UP Nov 5 23:41:46.260214 systemd-networkd[1479]: lo: Gained carrier Nov 5 23:41:46.261262 systemd-networkd[1479]: Enumeration completed Nov 5 23:41:46.261439 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 23:41:46.261699 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 5 23:41:46.261769 systemd-networkd[1479]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 5 23:41:46.268699 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 5 23:41:46.276375 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 5 23:41:46.304586 kernel: mlx5_core 7de1:00:02.0 enP32225s1: Link up Nov 5 23:41:46.321592 kernel: loop4: detected capacity change from 0 to 211168 Nov 5 23:41:46.328762 kernel: hv_netvsc 00224876-cad9-0022-4876-cad900224876 eth0: Data path switched to VF: enP32225s1 Nov 5 23:41:46.329052 systemd-networkd[1479]: enP32225s1: Link UP Nov 5 23:41:46.329187 systemd-networkd[1479]: eth0: Link UP Nov 5 23:41:46.329190 systemd-networkd[1479]: eth0: Gained carrier Nov 5 23:41:46.329220 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 5 23:41:46.331805 systemd-networkd[1479]: enP32225s1: Gained carrier Nov 5 23:41:46.338647 systemd-networkd[1479]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Nov 5 23:41:46.347771 kernel: loop5: detected capacity change from 0 to 119368 Nov 5 23:41:46.383632 kernel: loop6: detected capacity change from 0 to 100632 Nov 5 23:41:46.401601 kernel: loop7: detected capacity change from 0 to 27936 Nov 5 23:41:46.409533 (sd-merge)[1547]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Nov 5 23:41:46.410404 (sd-merge)[1547]: Merged extensions into '/usr'. Nov 5 23:41:46.431427 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 5 23:41:46.439879 systemd[1]: Reload requested from client PID 1429 ('systemd-sysext') (unit systemd-sysext.service)... Nov 5 23:41:46.439892 systemd[1]: Reloading... Nov 5 23:41:46.460631 kernel: MACsec IEEE 802.1AE Nov 5 23:41:46.504640 zram_generator::config[1647]: No configuration found. 
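The (sd-merge) entries above show systemd-sysext merging the containerd-flatcar, docker-flatcar, kubernetes and oem-azure extension images into /usr. A minimal sketch for checking that state after boot; it assumes the images are linked under /etc/extensions (as the Ignition files stage did for kubernetes.raw) and that the systemd-sysext CLI is present.

```python
# Sketch: list the extension images and show what systemd-sysext currently has merged.
import subprocess
from pathlib import Path

def list_sysext_images() -> None:
    ext_dir = Path("/etc/extensions")
    if ext_dir.is_dir():
        for entry in sorted(ext_dir.iterdir()):
            target = entry.resolve() if entry.is_symlink() else entry
            print(f"{entry.name} -> {target}")
    # Summary of the merged hierarchies, matching the "Merged extensions into '/usr'" entry.
    subprocess.run(["systemd-sysext", "status"], check=False)

if __name__ == "__main__":
    list_sysext_images()
```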
Nov 5 23:41:46.647350 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Nov 5 23:41:46.652539 systemd[1]: Reloading finished in 212 ms. Nov 5 23:41:46.668748 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 5 23:41:46.694646 systemd[1]: Starting ensure-sysext.service... Nov 5 23:41:46.701736 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 5 23:41:46.708275 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 23:41:46.723171 systemd[1]: Reload requested from client PID 1689 ('systemctl') (unit ensure-sysext.service)... Nov 5 23:41:46.723188 systemd[1]: Reloading... Nov 5 23:41:46.781027 systemd-tmpfiles[1691]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 5 23:41:46.781394 systemd-tmpfiles[1691]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 5 23:41:46.781746 systemd-tmpfiles[1691]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 5 23:41:46.782008 systemd-tmpfiles[1691]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 5 23:41:46.782605 zram_generator::config[1725]: No configuration found. Nov 5 23:41:46.782540 systemd-tmpfiles[1691]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 5 23:41:46.782916 systemd-tmpfiles[1691]: ACLs are not supported, ignoring. Nov 5 23:41:46.783027 systemd-tmpfiles[1691]: ACLs are not supported, ignoring. Nov 5 23:41:46.874041 systemd-tmpfiles[1691]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 23:41:46.874053 systemd-tmpfiles[1691]: Skipping /boot Nov 5 23:41:46.879163 systemd-tmpfiles[1691]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 23:41:46.879252 systemd-tmpfiles[1691]: Skipping /boot Nov 5 23:41:46.929500 systemd[1]: Reloading finished in 206 ms. Nov 5 23:41:46.942591 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 5 23:41:46.960200 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 23:41:46.970452 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 23:41:47.003755 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 5 23:41:47.018214 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 5 23:41:47.025451 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 23:41:47.030847 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 5 23:41:47.037236 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 23:41:47.043746 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 23:41:47.049808 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 23:41:47.056642 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 23:41:47.060668 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Nov 5 23:41:47.060755 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 23:41:47.061442 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 23:41:47.061596 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 23:41:47.067597 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 23:41:47.067708 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 23:41:47.073221 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 23:41:47.073335 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 23:41:47.082758 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 23:41:47.085753 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 23:41:47.099745 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 23:41:47.105762 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 23:41:47.110700 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 23:41:47.110915 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 23:41:47.120862 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 5 23:41:47.127274 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 23:41:47.128660 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 23:41:47.134320 systemd-resolved[1787]: Positive Trust Anchors: Nov 5 23:41:47.134331 systemd-resolved[1787]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 23:41:47.134350 systemd-resolved[1787]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 23:41:47.135952 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 23:41:47.136079 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 23:41:47.141483 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 23:41:47.141611 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 23:41:47.151863 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 23:41:47.154287 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 23:41:47.165970 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 23:41:47.171373 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Nov 5 23:41:47.178132 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 23:41:47.182260 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 23:41:47.182715 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 23:41:47.182978 systemd[1]: Reached target time-set.target - System Time Set. Nov 5 23:41:47.188873 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 23:41:47.189100 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 23:41:47.194716 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 23:41:47.194933 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 23:41:47.200071 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 23:41:47.200191 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 23:41:47.205199 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 23:41:47.205466 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 23:41:47.206239 systemd-resolved[1787]: Using system hostname 'ci-4459.1.0-n-7f88f0cba0'. Nov 5 23:41:47.210377 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 23:41:47.217690 systemd[1]: Finished ensure-sysext.service. Nov 5 23:41:47.223525 systemd[1]: Reached target network.target - Network. Nov 5 23:41:47.227061 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 23:41:47.231505 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 23:41:47.231552 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 23:41:47.269115 augenrules[1830]: No rules Nov 5 23:41:47.270192 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 23:41:47.270387 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 23:41:47.338940 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 5 23:41:47.628681 systemd-networkd[1479]: eth0: Gained IPv6LL Nov 5 23:41:47.632645 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 23:41:47.637822 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 23:41:47.680444 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 23:41:54.777829 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 5 23:41:54.784181 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 5 23:42:04.587251 ldconfig[1418]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 5 23:42:04.597074 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 5 23:42:04.603160 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 5 23:42:04.632402 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Nov 5 23:42:04.637082 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 23:42:04.641427 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 5 23:42:04.646438 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 5 23:42:04.651543 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 5 23:42:04.655903 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 5 23:42:04.660856 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 5 23:42:04.665689 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 5 23:42:04.665720 systemd[1]: Reached target paths.target - Path Units. Nov 5 23:42:04.669133 systemd[1]: Reached target timers.target - Timer Units. Nov 5 23:42:04.704781 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 5 23:42:04.710258 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 5 23:42:04.715522 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 5 23:42:04.721138 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 5 23:42:04.726944 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 5 23:42:04.740094 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 5 23:42:04.744429 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 5 23:42:04.749659 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 5 23:42:04.754093 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 23:42:04.757955 systemd[1]: Reached target basic.target - Basic System. Nov 5 23:42:04.761690 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 5 23:42:04.761717 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 5 23:42:04.778849 systemd[1]: Starting chronyd.service - NTP client/server... Nov 5 23:42:04.790664 systemd[1]: Starting containerd.service - containerd container runtime... Nov 5 23:42:04.798022 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 5 23:42:04.804677 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 5 23:42:04.811700 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 5 23:42:04.827187 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 5 23:42:04.832701 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 5 23:42:04.836950 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 5 23:42:04.837839 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Nov 5 23:42:04.842043 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Nov 5 23:42:04.842909 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 23:42:04.848775 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Nov 5 23:42:04.853507 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 23:42:04.858073 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 5 23:42:04.863684 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 5 23:42:04.871418 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 5 23:42:04.872492 chronyd[1850]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Nov 5 23:42:04.873490 KVP[1860]: KVP starting; pid is:1860 Nov 5 23:42:04.878059 jq[1858]: false Nov 5 23:42:04.878196 KVP[1860]: KVP LIC Version: 3.1 Nov 5 23:42:04.878611 kernel: hv_utils: KVP IC version 4.0 Nov 5 23:42:04.880374 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 5 23:42:04.884929 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 5 23:42:04.885246 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 5 23:42:04.887679 systemd[1]: Starting update-engine.service - Update Engine... Nov 5 23:42:04.902744 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 5 23:42:04.909731 jq[1869]: true Nov 5 23:42:04.909922 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 5 23:42:04.915397 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 5 23:42:04.915545 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 5 23:42:04.916661 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 5 23:42:04.917762 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 5 23:42:04.931639 jq[1873]: true Nov 5 23:42:05.032324 chronyd[1850]: Timezone right/UTC failed leap second check, ignoring Nov 5 23:42:05.032721 systemd[1]: Started chronyd.service - NTP client/server. Nov 5 23:42:05.032493 chronyd[1850]: Loaded seccomp filter (level 2) Nov 5 23:42:05.093589 systemd[1]: motdgen.service: Deactivated successfully. Nov 5 23:42:05.094855 (ntainerd)[1909]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 5 23:42:05.095361 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 5 23:42:05.156732 systemd-logind[1867]: New seat seat0. Nov 5 23:42:05.160739 systemd-logind[1867]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 5 23:42:05.160914 systemd[1]: Started systemd-logind.service - User Login Management. Nov 5 23:42:05.170907 extend-filesystems[1859]: Found /dev/sda6 Nov 5 23:42:05.177270 update_engine[1868]: I20251105 23:42:05.174178 1868 main.cc:92] Flatcar Update Engine starting Nov 5 23:42:05.186410 bash[1898]: Updated "/home/core/.ssh/authorized_keys" Nov 5 23:42:05.187722 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 5 23:42:05.194010 extend-filesystems[1859]: Found /dev/sda9 Nov 5 23:42:05.198778 extend-filesystems[1859]: Checking size of /dev/sda9 Nov 5 23:42:05.197028 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
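chronyd 4.7 is started above as the NTP client/server. A trivial sketch, assuming the chronyc client is installed alongside it, for confirming that the daemon reaches a synchronised state:

```python
# Sketch: query chronyd (started above) for its current tracking status.
import subprocess

def chrony_tracking() -> str:
    out = subprocess.run(["chronyc", "tracking"],
                         capture_output=True, text=True, check=True)
    return out.stdout

if __name__ == "__main__":
    print(chrony_tracking())
```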
Nov 5 23:42:05.206928 tar[1872]: linux-arm64/LICENSE Nov 5 23:42:05.206928 tar[1872]: linux-arm64/helm Nov 5 23:42:05.204352 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 5 23:42:05.261011 extend-filesystems[1859]: Old size kept for /dev/sda9 Nov 5 23:42:05.267310 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 5 23:42:05.267501 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 5 23:42:05.532848 dbus-daemon[1853]: [system] SELinux support is enabled Nov 5 23:42:05.533064 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 5 23:42:05.541522 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 5 23:42:05.541546 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 5 23:42:05.542681 update_engine[1868]: I20251105 23:42:05.541752 1868 update_check_scheduler.cc:74] Next update check in 5m47s Nov 5 23:42:05.542404 dbus-daemon[1853]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 5 23:42:05.550526 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 5 23:42:05.550547 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 5 23:42:05.558929 systemd[1]: Started update-engine.service - Update Engine. Nov 5 23:42:05.567128 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 5 23:42:05.580528 tar[1872]: linux-arm64/README.md Nov 5 23:42:05.594486 coreos-metadata[1852]: Nov 05 23:42:05.594 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 5 23:42:05.598886 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 5 23:42:05.603046 coreos-metadata[1852]: Nov 05 23:42:05.603 INFO Fetch successful Nov 5 23:42:05.603209 coreos-metadata[1852]: Nov 05 23:42:05.603 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Nov 5 23:42:05.606668 coreos-metadata[1852]: Nov 05 23:42:05.606 INFO Fetch successful Nov 5 23:42:05.606954 coreos-metadata[1852]: Nov 05 23:42:05.606 INFO Fetching http://168.63.129.16/machine/09a2d451-a345-4505-bcdd-14c4ef64fb19/fb0d4888%2D9d23%2D46ee%2D9a92%2De7f3d33d1baa.%5Fci%2D4459.1.0%2Dn%2D7f88f0cba0?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Nov 5 23:42:05.608798 coreos-metadata[1852]: Nov 05 23:42:05.608 INFO Fetch successful Nov 5 23:42:05.608798 coreos-metadata[1852]: Nov 05 23:42:05.608 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Nov 5 23:42:05.616532 coreos-metadata[1852]: Nov 05 23:42:05.616 INFO Fetch successful Nov 5 23:42:05.645479 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 5 23:42:05.651198 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 5 23:42:05.690676 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
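extend-filesystems above checks the size of /dev/sda9 and keeps the old size, i.e. the filesystem already fills its partition. A rough stand-in for that kind of check (not the extend-filesystems script itself), assuming /dev/sda9 is the root filesystem mounted at /:

```python
# Sketch: compare block-device size (sysfs, 512-byte sectors) with the mounted filesystem size.
import os
from pathlib import Path

def device_vs_fs_size(dev: str = "sda9", mountpoint: str = "/") -> None:
    sectors = int(Path(f"/sys/class/block/{dev}/size").read_text())
    dev_bytes = sectors * 512
    st = os.statvfs(mountpoint)
    fs_bytes = st.f_frsize * st.f_blocks
    print(f"/dev/{dev}: device {dev_bytes / 2**30:.2f} GiB, filesystem {fs_bytes / 2**30:.2f} GiB")
    # Filesystem metadata overhead means the two never match exactly, so allow a rough margin.
    if dev_bytes > fs_bytes * 1.05:
        print("device noticeably larger than filesystem: a resize would gain space")
    else:
        print("old size kept: nothing meaningful to grow")

if __name__ == "__main__":
    device_vs_fs_size()
```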
Nov 5 23:42:05.704872 (kubelet)[2004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 23:42:05.736320 locksmithd[1990]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 5 23:42:05.877298 sshd_keygen[1912]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 5 23:42:05.897055 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 5 23:42:05.903845 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 5 23:42:05.914795 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Nov 5 23:42:05.923821 systemd[1]: issuegen.service: Deactivated successfully. Nov 5 23:42:05.923996 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 5 23:42:05.932758 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 5 23:42:05.948699 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Nov 5 23:42:05.967492 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 5 23:42:05.974151 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 5 23:42:05.981808 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 5 23:42:05.986964 systemd[1]: Reached target getty.target - Login Prompts. Nov 5 23:42:06.088303 containerd[1909]: time="2025-11-05T23:42:06Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 5 23:42:06.088922 containerd[1909]: time="2025-11-05T23:42:06.088826744Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 5 23:42:06.096653 containerd[1909]: time="2025-11-05T23:42:06.096619240Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.68µs" Nov 5 23:42:06.096766 containerd[1909]: time="2025-11-05T23:42:06.096751096Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 5 23:42:06.096823 containerd[1909]: time="2025-11-05T23:42:06.096811424Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 5 23:42:06.097006 containerd[1909]: time="2025-11-05T23:42:06.096990464Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 5 23:42:06.097085 containerd[1909]: time="2025-11-05T23:42:06.097067848Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 5 23:42:06.097144 containerd[1909]: time="2025-11-05T23:42:06.097132928Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 23:42:06.097252 containerd[1909]: time="2025-11-05T23:42:06.097236376Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 23:42:06.097645 containerd[1909]: time="2025-11-05T23:42:06.097629352Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 23:42:06.097971 containerd[1909]: time="2025-11-05T23:42:06.097947944Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the 
btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 23:42:06.098684 containerd[1909]: time="2025-11-05T23:42:06.098658184Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 23:42:06.098736 containerd[1909]: time="2025-11-05T23:42:06.098687192Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 23:42:06.098736 containerd[1909]: time="2025-11-05T23:42:06.098695336Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 5 23:42:06.098897 containerd[1909]: time="2025-11-05T23:42:06.098788600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 5 23:42:06.098980 containerd[1909]: time="2025-11-05T23:42:06.098958264Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 23:42:06.098996 containerd[1909]: time="2025-11-05T23:42:06.098989368Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 23:42:06.099013 containerd[1909]: time="2025-11-05T23:42:06.098997352Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 5 23:42:06.099028 containerd[1909]: time="2025-11-05T23:42:06.099020184Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 5 23:42:06.099172 containerd[1909]: time="2025-11-05T23:42:06.099158192Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 5 23:42:06.099231 containerd[1909]: time="2025-11-05T23:42:06.099216248Z" level=info msg="metadata content store policy set" policy=shared Nov 5 23:42:06.101899 kubelet[2004]: E1105 23:42:06.101841 2004 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 23:42:06.104019 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 23:42:06.104124 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 23:42:06.104463 systemd[1]: kubelet.service: Consumed 547ms CPU time, 257.3M memory peak. 
Nov 5 23:42:06.115668 containerd[1909]: time="2025-11-05T23:42:06.115628312Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 5 23:42:06.115719 containerd[1909]: time="2025-11-05T23:42:06.115692768Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 5 23:42:06.115719 containerd[1909]: time="2025-11-05T23:42:06.115708072Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 5 23:42:06.115759 containerd[1909]: time="2025-11-05T23:42:06.115719744Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 5 23:42:06.115759 containerd[1909]: time="2025-11-05T23:42:06.115728424Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 5 23:42:06.115759 containerd[1909]: time="2025-11-05T23:42:06.115735328Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 5 23:42:06.115759 containerd[1909]: time="2025-11-05T23:42:06.115743160Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 5 23:42:06.115759 containerd[1909]: time="2025-11-05T23:42:06.115750824Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 5 23:42:06.115759 containerd[1909]: time="2025-11-05T23:42:06.115758512Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 5 23:42:06.115833 containerd[1909]: time="2025-11-05T23:42:06.115766184Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 5 23:42:06.115833 containerd[1909]: time="2025-11-05T23:42:06.115773072Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 5 23:42:06.115833 containerd[1909]: time="2025-11-05T23:42:06.115781336Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 5 23:42:06.115961 containerd[1909]: time="2025-11-05T23:42:06.115939528Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 5 23:42:06.116023 containerd[1909]: time="2025-11-05T23:42:06.116010728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 5 23:42:06.116091 containerd[1909]: time="2025-11-05T23:42:06.116079440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 5 23:42:06.116207 containerd[1909]: time="2025-11-05T23:42:06.116138760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 5 23:42:06.116207 containerd[1909]: time="2025-11-05T23:42:06.116156600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 5 23:42:06.116207 containerd[1909]: time="2025-11-05T23:42:06.116166384Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 5 23:42:06.116207 containerd[1909]: time="2025-11-05T23:42:06.116173992Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 5 23:42:06.116207 containerd[1909]: time="2025-11-05T23:42:06.116185552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 5 23:42:06.116377 
containerd[1909]: time="2025-11-05T23:42:06.116193432Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 5 23:42:06.116377 containerd[1909]: time="2025-11-05T23:42:06.116313552Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 5 23:42:06.116377 containerd[1909]: time="2025-11-05T23:42:06.116326088Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 5 23:42:06.116482 containerd[1909]: time="2025-11-05T23:42:06.116468608Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 5 23:42:06.116536 containerd[1909]: time="2025-11-05T23:42:06.116527792Z" level=info msg="Start snapshots syncer" Nov 5 23:42:06.116664 containerd[1909]: time="2025-11-05T23:42:06.116611776Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 5 23:42:06.116948 containerd[1909]: time="2025-11-05T23:42:06.116915440Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 5 23:42:06.117133 containerd[1909]: time="2025-11-05T23:42:06.117079488Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 5 23:42:06.117229 containerd[1909]: time="2025-11-05T23:42:06.117215080Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 5 23:42:06.117449 containerd[1909]: time="2025-11-05T23:42:06.117433464Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 5 23:42:06.117588 containerd[1909]: time="2025-11-05T23:42:06.117505528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes 
type=io.containerd.grpc.v1 Nov 5 23:42:06.117588 containerd[1909]: time="2025-11-05T23:42:06.117519264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 5 23:42:06.117588 containerd[1909]: time="2025-11-05T23:42:06.117528832Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 5 23:42:06.117588 containerd[1909]: time="2025-11-05T23:42:06.117537376Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 5 23:42:06.117588 containerd[1909]: time="2025-11-05T23:42:06.117544216Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 5 23:42:06.117588 containerd[1909]: time="2025-11-05T23:42:06.117550944Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 5 23:42:06.117733 containerd[1909]: time="2025-11-05T23:42:06.117715912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 5 23:42:06.117800 containerd[1909]: time="2025-11-05T23:42:06.117787600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 5 23:42:06.117885 containerd[1909]: time="2025-11-05T23:42:06.117839648Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 5 23:42:06.117989 containerd[1909]: time="2025-11-05T23:42:06.117975600Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 23:42:06.118106 containerd[1909]: time="2025-11-05T23:42:06.118022032Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 23:42:06.118106 containerd[1909]: time="2025-11-05T23:42:06.118031288Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 23:42:06.118106 containerd[1909]: time="2025-11-05T23:42:06.118038200Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 23:42:06.118106 containerd[1909]: time="2025-11-05T23:42:06.118043840Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 5 23:42:06.118106 containerd[1909]: time="2025-11-05T23:42:06.118050400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 5 23:42:06.118106 containerd[1909]: time="2025-11-05T23:42:06.118057240Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 5 23:42:06.118106 containerd[1909]: time="2025-11-05T23:42:06.118070000Z" level=info msg="runtime interface created" Nov 5 23:42:06.118106 containerd[1909]: time="2025-11-05T23:42:06.118073304Z" level=info msg="created NRI interface" Nov 5 23:42:06.118311 containerd[1909]: time="2025-11-05T23:42:06.118078760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 5 23:42:06.118311 containerd[1909]: time="2025-11-05T23:42:06.118252744Z" level=info msg="Connect containerd service" Nov 5 23:42:06.118311 containerd[1909]: time="2025-11-05T23:42:06.118279496Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 5 23:42:06.119123 containerd[1909]: 
time="2025-11-05T23:42:06.119094264Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 23:42:06.441027 containerd[1909]: time="2025-11-05T23:42:06.440923240Z" level=info msg="Start subscribing containerd event" Nov 5 23:42:06.441027 containerd[1909]: time="2025-11-05T23:42:06.440991752Z" level=info msg="Start recovering state" Nov 5 23:42:06.441520 containerd[1909]: time="2025-11-05T23:42:06.441348736Z" level=info msg="Start event monitor" Nov 5 23:42:06.441520 containerd[1909]: time="2025-11-05T23:42:06.441365816Z" level=info msg="Start cni network conf syncer for default" Nov 5 23:42:06.441520 containerd[1909]: time="2025-11-05T23:42:06.441373624Z" level=info msg="Start streaming server" Nov 5 23:42:06.441520 containerd[1909]: time="2025-11-05T23:42:06.441379560Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 5 23:42:06.441520 containerd[1909]: time="2025-11-05T23:42:06.441384216Z" level=info msg="runtime interface starting up..." Nov 5 23:42:06.441520 containerd[1909]: time="2025-11-05T23:42:06.441388064Z" level=info msg="starting plugins..." Nov 5 23:42:06.441520 containerd[1909]: time="2025-11-05T23:42:06.441398696Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 5 23:42:06.441769 containerd[1909]: time="2025-11-05T23:42:06.441749296Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 5 23:42:06.441856 containerd[1909]: time="2025-11-05T23:42:06.441846192Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 5 23:42:06.442044 containerd[1909]: time="2025-11-05T23:42:06.442029080Z" level=info msg="containerd successfully booted in 0.354098s" Nov 5 23:42:06.442146 systemd[1]: Started containerd.service - containerd container runtime. Nov 5 23:42:06.446686 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 5 23:42:06.451147 systemd[1]: Startup finished in 1.589s (kernel) + 15.255s (initrd) + 58.072s (userspace) = 1min 14.917s. Nov 5 23:42:07.304542 login[2040]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Nov 5 23:42:07.305184 login[2039]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:42:07.315078 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 5 23:42:07.318686 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 5 23:42:07.320795 systemd-logind[1867]: New session 2 of user core. Nov 5 23:42:07.349385 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 5 23:42:07.351634 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 5 23:42:07.379210 (systemd)[2062]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 5 23:42:07.381123 systemd-logind[1867]: New session c1 of user core. Nov 5 23:42:07.683138 systemd[2062]: Queued start job for default target default.target. Nov 5 23:42:07.693365 systemd[2062]: Created slice app.slice - User Application Slice. Nov 5 23:42:07.693393 systemd[2062]: Reached target paths.target - Paths. Nov 5 23:42:07.693423 systemd[2062]: Reached target timers.target - Timers. Nov 5 23:42:07.694379 systemd[2062]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Nov 5 23:42:07.703702 systemd[2062]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 5 23:42:07.703748 systemd[2062]: Reached target sockets.target - Sockets. Nov 5 23:42:07.703785 systemd[2062]: Reached target basic.target - Basic System. Nov 5 23:42:07.703805 systemd[2062]: Reached target default.target - Main User Target. Nov 5 23:42:07.703824 systemd[2062]: Startup finished in 317ms. Nov 5 23:42:07.703889 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 5 23:42:07.705863 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 5 23:42:08.261850 waagent[2037]: 2025-11-05T23:42:08.261771Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Nov 5 23:42:08.266073 waagent[2037]: 2025-11-05T23:42:08.266037Z INFO Daemon Daemon OS: flatcar 4459.1.0 Nov 5 23:42:08.269590 waagent[2037]: 2025-11-05T23:42:08.269556Z INFO Daemon Daemon Python: 3.11.13 Nov 5 23:42:08.272745 waagent[2037]: 2025-11-05T23:42:08.272711Z INFO Daemon Daemon Run daemon Nov 5 23:42:08.275808 waagent[2037]: 2025-11-05T23:42:08.275774Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.1.0' Nov 5 23:42:08.282121 waagent[2037]: 2025-11-05T23:42:08.282093Z INFO Daemon Daemon Using waagent for provisioning Nov 5 23:42:08.285913 waagent[2037]: 2025-11-05T23:42:08.285885Z INFO Daemon Daemon Activate resource disk Nov 5 23:42:08.287082 waagent[2037]: 2025-11-05T23:42:08.287057Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Nov 5 23:42:08.289272 waagent[2037]: 2025-11-05T23:42:08.289240Z INFO Daemon Daemon Found device: None Nov 5 23:42:08.289530 waagent[2037]: 2025-11-05T23:42:08.289506Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Nov 5 23:42:08.289748 waagent[2037]: 2025-11-05T23:42:08.289726Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Nov 5 23:42:08.290322 waagent[2037]: 2025-11-05T23:42:08.290289Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 5 23:42:08.290575 waagent[2037]: 2025-11-05T23:42:08.290550Z INFO Daemon Daemon Running default provisioning handler Nov 5 23:42:08.315047 login[2040]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:42:08.315400 waagent[2037]: 2025-11-05T23:42:08.315353Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Nov 5 23:42:08.326415 waagent[2037]: 2025-11-05T23:42:08.326252Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Nov 5 23:42:08.334088 waagent[2037]: 2025-11-05T23:42:08.333207Z INFO Daemon Daemon cloud-init is enabled: False Nov 5 23:42:08.337033 systemd-logind[1867]: New session 1 of user core. Nov 5 23:42:08.337422 waagent[2037]: 2025-11-05T23:42:08.337336Z INFO Daemon Daemon Copying ovf-env.xml Nov 5 23:42:08.342699 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 5 23:42:08.501058 waagent[2037]: 2025-11-05T23:42:08.500995Z INFO Daemon Daemon Successfully mounted dvd Nov 5 23:42:08.552330 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Nov 5 23:42:08.553079 waagent[2037]: 2025-11-05T23:42:08.553042Z INFO Daemon Daemon Detect protocol endpoint Nov 5 23:42:08.556616 waagent[2037]: 2025-11-05T23:42:08.556587Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 5 23:42:08.560669 waagent[2037]: 2025-11-05T23:42:08.560642Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Nov 5 23:42:08.565164 waagent[2037]: 2025-11-05T23:42:08.565142Z INFO Daemon Daemon Test for route to 168.63.129.16 Nov 5 23:42:08.568929 waagent[2037]: 2025-11-05T23:42:08.568903Z INFO Daemon Daemon Route to 168.63.129.16 exists Nov 5 23:42:08.572441 waagent[2037]: 2025-11-05T23:42:08.572418Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Nov 5 23:42:08.618473 waagent[2037]: 2025-11-05T23:42:08.618436Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Nov 5 23:42:08.623128 waagent[2037]: 2025-11-05T23:42:08.623108Z INFO Daemon Daemon Wire protocol version:2012-11-30 Nov 5 23:42:08.626911 waagent[2037]: 2025-11-05T23:42:08.626888Z INFO Daemon Daemon Server preferred version:2015-04-05 Nov 5 23:42:08.804671 waagent[2037]: 2025-11-05T23:42:08.803551Z INFO Daemon Daemon Initializing goal state during protocol detection Nov 5 23:42:08.808364 waagent[2037]: 2025-11-05T23:42:08.808328Z INFO Daemon Daemon Forcing an update of the goal state. Nov 5 23:42:08.814887 waagent[2037]: 2025-11-05T23:42:08.814853Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 5 23:42:08.829161 waagent[2037]: 2025-11-05T23:42:08.829131Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Nov 5 23:42:08.833206 waagent[2037]: 2025-11-05T23:42:08.833174Z INFO Daemon Nov 5 23:42:08.835195 waagent[2037]: 2025-11-05T23:42:08.835168Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 4b7d69e0-a006-46af-90fd-d2e335a2df44 eTag: 16131863231249901295 source: Fabric] Nov 5 23:42:08.843862 waagent[2037]: 2025-11-05T23:42:08.843832Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Nov 5 23:42:08.848825 waagent[2037]: 2025-11-05T23:42:08.848797Z INFO Daemon Nov 5 23:42:08.850822 waagent[2037]: 2025-11-05T23:42:08.850798Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Nov 5 23:42:08.858055 waagent[2037]: 2025-11-05T23:42:08.858027Z INFO Daemon Daemon Downloading artifacts profile blob Nov 5 23:42:08.913852 waagent[2037]: 2025-11-05T23:42:08.913800Z INFO Daemon Downloaded certificate {'thumbprint': '092D73DA31A7CDAF0CEA708C8296DC1DA73AF788', 'hasPrivateKey': True} Nov 5 23:42:08.920940 waagent[2037]: 2025-11-05T23:42:08.920904Z INFO Daemon Fetch goal state completed Nov 5 23:42:08.929220 waagent[2037]: 2025-11-05T23:42:08.929188Z INFO Daemon Daemon Starting provisioning Nov 5 23:42:08.932959 waagent[2037]: 2025-11-05T23:42:08.932924Z INFO Daemon Daemon Handle ovf-env.xml. Nov 5 23:42:08.936403 waagent[2037]: 2025-11-05T23:42:08.936379Z INFO Daemon Daemon Set hostname [ci-4459.1.0-n-7f88f0cba0] Nov 5 23:42:08.971481 waagent[2037]: 2025-11-05T23:42:08.971444Z INFO Daemon Daemon Publish hostname [ci-4459.1.0-n-7f88f0cba0] Nov 5 23:42:08.975909 waagent[2037]: 2025-11-05T23:42:08.975875Z INFO Daemon Daemon Examine /proc/net/route for primary interface Nov 5 23:42:08.980376 waagent[2037]: 2025-11-05T23:42:08.980344Z INFO Daemon Daemon Primary interface is [eth0] Nov 5 23:42:09.005629 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 5 23:42:09.005637 systemd-networkd[1479]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 5 23:42:09.005676 systemd-networkd[1479]: eth0: DHCP lease lost Nov 5 23:42:09.006151 waagent[2037]: 2025-11-05T23:42:09.006101Z INFO Daemon Daemon Create user account if not exists Nov 5 23:42:09.010432 waagent[2037]: 2025-11-05T23:42:09.010396Z INFO Daemon Daemon User core already exists, skip useradd Nov 5 23:42:09.014557 waagent[2037]: 2025-11-05T23:42:09.014514Z INFO Daemon Daemon Configure sudoer Nov 5 23:42:09.021884 waagent[2037]: 2025-11-05T23:42:09.021844Z INFO Daemon Daemon Configure sshd Nov 5 23:42:09.026612 systemd-networkd[1479]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Nov 5 23:42:09.027947 waagent[2037]: 2025-11-05T23:42:09.027828Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Nov 5 23:42:09.036514 waagent[2037]: 2025-11-05T23:42:09.036480Z INFO Daemon Daemon Deploy ssh public key. Nov 5 23:42:10.172193 waagent[2037]: 2025-11-05T23:42:10.172147Z INFO Daemon Daemon Provisioning complete Nov 5 23:42:10.184390 waagent[2037]: 2025-11-05T23:42:10.184357Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Nov 5 23:42:10.189300 waagent[2037]: 2025-11-05T23:42:10.189269Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Nov 5 23:42:10.196597 waagent[2037]: 2025-11-05T23:42:10.196564Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Nov 5 23:42:10.753542 waagent[2112]: 2025-11-05T23:42:10.753474Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Nov 5 23:42:10.754616 waagent[2112]: 2025-11-05T23:42:10.753995Z INFO ExtHandler ExtHandler OS: flatcar 4459.1.0 Nov 5 23:42:10.754616 waagent[2112]: 2025-11-05T23:42:10.754051Z INFO ExtHandler ExtHandler Python: 3.11.13 Nov 5 23:42:10.754616 waagent[2112]: 2025-11-05T23:42:10.754089Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Nov 5 23:42:15.153756 waagent[2112]: 2025-11-05T23:42:15.153674Z INFO ExtHandler ExtHandler Distro: flatcar-4459.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Nov 5 23:42:15.154081 waagent[2112]: 2025-11-05T23:42:15.153898Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 5 23:42:15.154081 waagent[2112]: 2025-11-05T23:42:15.153944Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 5 23:42:15.159182 waagent[2112]: 2025-11-05T23:42:15.159134Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 5 23:42:15.166502 waagent[2112]: 2025-11-05T23:42:15.166473Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Nov 5 23:42:15.166866 waagent[2112]: 2025-11-05T23:42:15.166835Z INFO ExtHandler Nov 5 23:42:15.166919 waagent[2112]: 2025-11-05T23:42:15.166901Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: aa02692a-f06f-4f50-a41d-0c25a54063ad eTag: 16131863231249901295 source: Fabric] Nov 5 23:42:15.167127 waagent[2112]: 2025-11-05T23:42:15.167102Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Nov 5 23:42:15.167508 waagent[2112]: 2025-11-05T23:42:15.167479Z INFO ExtHandler Nov 5 23:42:15.167547 waagent[2112]: 2025-11-05T23:42:15.167530Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Nov 5 23:42:15.170065 waagent[2112]: 2025-11-05T23:42:15.170038Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Nov 5 23:42:15.312743 waagent[2112]: 2025-11-05T23:42:15.312670Z INFO ExtHandler Downloaded certificate {'thumbprint': '092D73DA31A7CDAF0CEA708C8296DC1DA73AF788', 'hasPrivateKey': True} Nov 5 23:42:15.313130 waagent[2112]: 2025-11-05T23:42:15.313094Z INFO ExtHandler Fetch goal state completed Nov 5 23:42:15.323746 waagent[2112]: 2025-11-05T23:42:15.323703Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Nov 5 23:42:15.326844 waagent[2112]: 2025-11-05T23:42:15.326802Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2112 Nov 5 23:42:15.326942 waagent[2112]: 2025-11-05T23:42:15.326916Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Nov 5 23:42:15.327171 waagent[2112]: 2025-11-05T23:42:15.327145Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Nov 5 23:42:15.328224 waagent[2112]: 2025-11-05T23:42:15.328189Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.1.0', '', 'Flatcar Container Linux by Kinvolk'] Nov 5 23:42:15.328530 waagent[2112]: 2025-11-05T23:42:15.328500Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Nov 5 23:42:15.328669 waagent[2112]: 2025-11-05T23:42:15.328643Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Nov 5 23:42:15.329076 waagent[2112]: 2025-11-05T23:42:15.329047Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Nov 5 23:42:15.398918 waagent[2112]: 2025-11-05T23:42:15.398591Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Nov 5 23:42:15.398918 waagent[2112]: 2025-11-05T23:42:15.398743Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 5 23:42:15.403085 waagent[2112]: 2025-11-05T23:42:15.402936Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Nov 5 23:42:15.407286 systemd[1]: Reload requested from client PID 2129 ('systemctl') (unit waagent.service)... Nov 5 23:42:15.407547 systemd[1]: Reloading... Nov 5 23:42:15.480617 zram_generator::config[2174]: No configuration found. Nov 5 23:42:15.609229 systemd[1]: Reloading finished in 201 ms. Nov 5 23:42:15.631408 waagent[2112]: 2025-11-05T23:42:15.631343Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Nov 5 23:42:15.631487 waagent[2112]: 2025-11-05T23:42:15.631474Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Nov 5 23:42:16.008070 waagent[2112]: 2025-11-05T23:42:16.007990Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Nov 5 23:42:16.008337 waagent[2112]: 2025-11-05T23:42:16.008302Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. 
configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Nov 5 23:42:16.008972 waagent[2112]: 2025-11-05T23:42:16.008930Z INFO ExtHandler ExtHandler Starting env monitor service. Nov 5 23:42:16.009236 waagent[2112]: 2025-11-05T23:42:16.009201Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 5 23:42:16.009590 waagent[2112]: 2025-11-05T23:42:16.009412Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 5 23:42:16.009590 waagent[2112]: 2025-11-05T23:42:16.009482Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 5 23:42:16.009873 waagent[2112]: 2025-11-05T23:42:16.009780Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Nov 5 23:42:16.009971 waagent[2112]: 2025-11-05T23:42:16.009849Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 5 23:42:16.010131 waagent[2112]: 2025-11-05T23:42:16.009965Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Nov 5 23:42:16.010131 waagent[2112]: 2025-11-05T23:42:16.010074Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 5 23:42:16.010394 waagent[2112]: 2025-11-05T23:42:16.010364Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 5 23:42:16.010394 waagent[2112]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 5 23:42:16.010394 waagent[2112]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Nov 5 23:42:16.010394 waagent[2112]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 5 23:42:16.010394 waagent[2112]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 5 23:42:16.010394 waagent[2112]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 5 23:42:16.010394 waagent[2112]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 5 23:42:16.011150 waagent[2112]: 2025-11-05T23:42:16.010598Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 5 23:42:16.011150 waagent[2112]: 2025-11-05T23:42:16.010716Z INFO EnvHandler ExtHandler Configure routes Nov 5 23:42:16.011150 waagent[2112]: 2025-11-05T23:42:16.010760Z INFO EnvHandler ExtHandler Gateway:None Nov 5 23:42:16.011150 waagent[2112]: 2025-11-05T23:42:16.010784Z INFO EnvHandler ExtHandler Routes:None Nov 5 23:42:16.011366 waagent[2112]: 2025-11-05T23:42:16.011334Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 5 23:42:16.011497 waagent[2112]: 2025-11-05T23:42:16.011474Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 5 23:42:16.011630 waagent[2112]: 2025-11-05T23:42:16.011583Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Nov 5 23:42:16.017626 waagent[2112]: 2025-11-05T23:42:16.017590Z INFO ExtHandler ExtHandler Nov 5 23:42:16.017684 waagent[2112]: 2025-11-05T23:42:16.017652Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 5ddb0bf0-77b5-4a22-8963-9a11db36cfb6 correlation fbed2e77-dbcc-4df8-9c16-e98f0717a0b1 created: 2025-11-05T23:40:04.987984Z] Nov 5 23:42:16.017920 waagent[2112]: 2025-11-05T23:42:16.017888Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Nov 5 23:42:16.018299 waagent[2112]: 2025-11-05T23:42:16.018269Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Nov 5 23:42:16.086570 waagent[2112]: 2025-11-05T23:42:16.086527Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Nov 5 23:42:16.086570 waagent[2112]: Try `iptables -h' or 'iptables --help' for more information.) Nov 5 23:42:16.087225 waagent[2112]: 2025-11-05T23:42:16.086979Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 8D78537B-5771-4575-B09B-0EBDA31A1C8A;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Nov 5 23:42:16.087748 waagent[2112]: 2025-11-05T23:42:16.087710Z INFO MonitorHandler ExtHandler Network interfaces: Nov 5 23:42:16.087748 waagent[2112]: Executing ['ip', '-a', '-o', 'link']: Nov 5 23:42:16.087748 waagent[2112]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Nov 5 23:42:16.087748 waagent[2112]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:76:ca:d9 brd ff:ff:ff:ff:ff:ff Nov 5 23:42:16.087748 waagent[2112]: 3: enP32225s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:76:ca:d9 brd ff:ff:ff:ff:ff:ff\ altname enP32225p0s2 Nov 5 23:42:16.087748 waagent[2112]: Executing ['ip', '-4', '-a', '-o', 'address']: Nov 5 23:42:16.087748 waagent[2112]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Nov 5 23:42:16.087748 waagent[2112]: 2: eth0 inet 10.200.20.34/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Nov 5 23:42:16.087748 waagent[2112]: Executing ['ip', '-6', '-a', '-o', 'address']: Nov 5 23:42:16.087748 waagent[2112]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Nov 5 23:42:16.087748 waagent[2112]: 2: eth0 inet6 fe80::222:48ff:fe76:cad9/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Nov 5 23:42:16.296528 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 5 23:42:16.297844 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 5 23:42:16.538412 waagent[2112]: 2025-11-05T23:42:16.538339Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Nov 5 23:42:16.538412 waagent[2112]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 5 23:42:16.538412 waagent[2112]: pkts bytes target prot opt in out source destination Nov 5 23:42:16.538412 waagent[2112]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 5 23:42:16.538412 waagent[2112]: pkts bytes target prot opt in out source destination Nov 5 23:42:16.538412 waagent[2112]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 5 23:42:16.538412 waagent[2112]: pkts bytes target prot opt in out source destination Nov 5 23:42:16.538412 waagent[2112]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 5 23:42:16.538412 waagent[2112]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 5 23:42:16.538412 waagent[2112]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 5 23:42:16.540819 waagent[2112]: 2025-11-05T23:42:16.540771Z INFO EnvHandler ExtHandler Current Firewall rules: Nov 5 23:42:16.540819 waagent[2112]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 5 23:42:16.540819 waagent[2112]: pkts bytes target prot opt in out source destination Nov 5 23:42:16.540819 waagent[2112]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 5 23:42:16.540819 waagent[2112]: pkts bytes target prot opt in out source destination Nov 5 23:42:16.540819 waagent[2112]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 5 23:42:16.540819 waagent[2112]: pkts bytes target prot opt in out source destination Nov 5 23:42:16.540819 waagent[2112]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 5 23:42:16.540819 waagent[2112]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 5 23:42:16.540819 waagent[2112]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 5 23:42:16.541005 waagent[2112]: 2025-11-05T23:42:16.540979Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Nov 5 23:42:16.964085 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 23:42:16.970927 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 23:42:17.002367 kubelet[2263]: E1105 23:42:17.002338 2263 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 23:42:17.005073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 23:42:17.005185 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 23:42:17.005894 systemd[1]: kubelet.service: Consumed 109ms CPU time, 107.5M memory peak. Nov 5 23:42:22.467749 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 5 23:42:22.469757 systemd[1]: Started sshd@0-10.200.20.34:22-10.200.16.10:55148.service - OpenSSH per-connection server daemon (10.200.16.10:55148). Nov 5 23:42:23.090406 sshd[2272]: Accepted publickey for core from 10.200.16.10 port 55148 ssh2: RSA SHA256:DGuYmx06TGfTsp0LJydt5PQlzhSbWYw582/T7TjmpIk Nov 5 23:42:23.091532 sshd-session[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:42:23.095447 systemd-logind[1867]: New session 3 of user core. 
Nov 5 23:42:23.101681 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 5 23:42:23.509961 systemd[1]: Started sshd@1-10.200.20.34:22-10.200.16.10:55156.service - OpenSSH per-connection server daemon (10.200.16.10:55156). Nov 5 23:42:23.963811 sshd[2278]: Accepted publickey for core from 10.200.16.10 port 55156 ssh2: RSA SHA256:DGuYmx06TGfTsp0LJydt5PQlzhSbWYw582/T7TjmpIk Nov 5 23:42:23.964900 sshd-session[2278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:42:23.968291 systemd-logind[1867]: New session 4 of user core. Nov 5 23:42:23.978690 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 5 23:42:24.282688 sshd[2281]: Connection closed by 10.200.16.10 port 55156 Nov 5 23:42:24.283368 sshd-session[2278]: pam_unix(sshd:session): session closed for user core Nov 5 23:42:24.286321 systemd-logind[1867]: Session 4 logged out. Waiting for processes to exit. Nov 5 23:42:24.286855 systemd[1]: sshd@1-10.200.20.34:22-10.200.16.10:55156.service: Deactivated successfully. Nov 5 23:42:24.288730 systemd[1]: session-4.scope: Deactivated successfully. Nov 5 23:42:24.290391 systemd-logind[1867]: Removed session 4. Nov 5 23:42:24.375661 systemd[1]: Started sshd@2-10.200.20.34:22-10.200.16.10:55172.service - OpenSSH per-connection server daemon (10.200.16.10:55172). Nov 5 23:42:24.828766 sshd[2287]: Accepted publickey for core from 10.200.16.10 port 55172 ssh2: RSA SHA256:DGuYmx06TGfTsp0LJydt5PQlzhSbWYw582/T7TjmpIk Nov 5 23:42:24.829810 sshd-session[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:42:24.833293 systemd-logind[1867]: New session 5 of user core. Nov 5 23:42:24.859670 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 5 23:42:25.164857 sshd[2290]: Connection closed by 10.200.16.10 port 55172 Nov 5 23:42:25.165387 sshd-session[2287]: pam_unix(sshd:session): session closed for user core Nov 5 23:42:25.168747 systemd[1]: sshd@2-10.200.20.34:22-10.200.16.10:55172.service: Deactivated successfully. Nov 5 23:42:25.170310 systemd[1]: session-5.scope: Deactivated successfully. Nov 5 23:42:25.170931 systemd-logind[1867]: Session 5 logged out. Waiting for processes to exit. Nov 5 23:42:25.171925 systemd-logind[1867]: Removed session 5. Nov 5 23:42:25.245758 systemd[1]: Started sshd@3-10.200.20.34:22-10.200.16.10:55182.service - OpenSSH per-connection server daemon (10.200.16.10:55182). Nov 5 23:42:25.701652 sshd[2296]: Accepted publickey for core from 10.200.16.10 port 55182 ssh2: RSA SHA256:DGuYmx06TGfTsp0LJydt5PQlzhSbWYw582/T7TjmpIk Nov 5 23:42:25.702779 sshd-session[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:42:25.706055 systemd-logind[1867]: New session 6 of user core. Nov 5 23:42:25.714746 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 5 23:42:26.039207 sshd[2299]: Connection closed by 10.200.16.10 port 55182 Nov 5 23:42:26.038516 sshd-session[2296]: pam_unix(sshd:session): session closed for user core Nov 5 23:42:26.041364 systemd-logind[1867]: Session 6 logged out. Waiting for processes to exit. Nov 5 23:42:26.041644 systemd[1]: sshd@3-10.200.20.34:22-10.200.16.10:55182.service: Deactivated successfully. Nov 5 23:42:26.043218 systemd[1]: session-6.scope: Deactivated successfully. Nov 5 23:42:26.045021 systemd-logind[1867]: Removed session 6. Nov 5 23:42:26.126188 systemd[1]: Started sshd@4-10.200.20.34:22-10.200.16.10:55192.service - OpenSSH per-connection server daemon (10.200.16.10:55192). 
Nov 5 23:42:26.583224 sshd[2305]: Accepted publickey for core from 10.200.16.10 port 55192 ssh2: RSA SHA256:DGuYmx06TGfTsp0LJydt5PQlzhSbWYw582/T7TjmpIk Nov 5 23:42:26.584262 sshd-session[2305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:42:26.587803 systemd-logind[1867]: New session 7 of user core. Nov 5 23:42:26.594857 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 5 23:42:26.982874 sudo[2309]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 5 23:42:26.983097 sudo[2309]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 23:42:27.007865 sudo[2309]: pam_unix(sudo:session): session closed for user root Nov 5 23:42:27.046546 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 5 23:42:27.047972 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 23:42:27.076601 sshd[2308]: Connection closed by 10.200.16.10 port 55192 Nov 5 23:42:27.076623 sshd-session[2305]: pam_unix(sshd:session): session closed for user core Nov 5 23:42:27.080544 systemd[1]: sshd@4-10.200.20.34:22-10.200.16.10:55192.service: Deactivated successfully. Nov 5 23:42:27.081827 systemd[1]: session-7.scope: Deactivated successfully. Nov 5 23:42:27.082375 systemd-logind[1867]: Session 7 logged out. Waiting for processes to exit. Nov 5 23:42:27.083993 systemd-logind[1867]: Removed session 7. Nov 5 23:42:27.169056 systemd[1]: Started sshd@5-10.200.20.34:22-10.200.16.10:55196.service - OpenSSH per-connection server daemon (10.200.16.10:55196). Nov 5 23:42:27.400633 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 23:42:27.411757 (kubelet)[2326]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 23:42:27.441431 kubelet[2326]: E1105 23:42:27.441383 2326 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 23:42:27.443188 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 23:42:27.443284 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 23:42:27.443843 systemd[1]: kubelet.service: Consumed 104ms CPU time, 107.4M memory peak. Nov 5 23:42:27.629415 sshd[2318]: Accepted publickey for core from 10.200.16.10 port 55196 ssh2: RSA SHA256:DGuYmx06TGfTsp0LJydt5PQlzhSbWYw582/T7TjmpIk Nov 5 23:42:27.630420 sshd-session[2318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:42:27.634119 systemd-logind[1867]: New session 8 of user core. Nov 5 23:42:27.643872 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 5 23:42:27.885296 sudo[2335]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 5 23:42:27.885846 sudo[2335]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 23:42:27.891799 sudo[2335]: pam_unix(sudo:session): session closed for user root Nov 5 23:42:27.895276 sudo[2334]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 5 23:42:27.895465 sudo[2334]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 23:42:27.902860 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 23:42:27.934067 augenrules[2357]: No rules Nov 5 23:42:27.935092 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 23:42:27.936615 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 23:42:27.938022 sudo[2334]: pam_unix(sudo:session): session closed for user root Nov 5 23:42:28.020979 sshd[2333]: Connection closed by 10.200.16.10 port 55196 Nov 5 23:42:28.021247 sshd-session[2318]: pam_unix(sshd:session): session closed for user core Nov 5 23:42:28.025053 systemd-logind[1867]: Session 8 logged out. Waiting for processes to exit. Nov 5 23:42:28.025691 systemd[1]: sshd@5-10.200.20.34:22-10.200.16.10:55196.service: Deactivated successfully. Nov 5 23:42:28.026988 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 23:42:28.028296 systemd-logind[1867]: Removed session 8. Nov 5 23:42:28.100779 systemd[1]: Started sshd@6-10.200.20.34:22-10.200.16.10:55212.service - OpenSSH per-connection server daemon (10.200.16.10:55212). Nov 5 23:42:28.520343 sshd[2366]: Accepted publickey for core from 10.200.16.10 port 55212 ssh2: RSA SHA256:DGuYmx06TGfTsp0LJydt5PQlzhSbWYw582/T7TjmpIk Nov 5 23:42:28.521439 sshd-session[2366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:42:28.524831 systemd-logind[1867]: New session 9 of user core. Nov 5 23:42:28.535674 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 5 23:42:28.757708 sudo[2370]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 5 23:42:28.757910 sudo[2370]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 23:42:28.893998 chronyd[1850]: Selected source PHC0 Nov 5 23:42:30.629563 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 5 23:42:30.640817 (dockerd)[2388]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 5 23:42:31.815606 dockerd[2388]: time="2025-11-05T23:42:31.814715177Z" level=info msg="Starting up" Nov 5 23:42:31.816588 dockerd[2388]: time="2025-11-05T23:42:31.816297217Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 5 23:42:31.824198 dockerd[2388]: time="2025-11-05T23:42:31.824162585Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 5 23:42:33.293932 dockerd[2388]: time="2025-11-05T23:42:33.293890137Z" level=info msg="Loading containers: start." Nov 5 23:42:33.384593 kernel: Initializing XFRM netlink socket Nov 5 23:42:33.887267 systemd-networkd[1479]: docker0: Link UP Nov 5 23:42:33.928304 dockerd[2388]: time="2025-11-05T23:42:33.928179401Z" level=info msg="Loading containers: done." Nov 5 23:42:34.210231 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Nov 5 23:42:34.414637 dockerd[2388]: time="2025-11-05T23:42:34.414498777Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 5 23:42:34.415243 dockerd[2388]: time="2025-11-05T23:42:34.414996281Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 5 23:42:34.415243 dockerd[2388]: time="2025-11-05T23:42:34.415101769Z" level=info msg="Initializing buildkit" Nov 5 23:42:34.623434 dockerd[2388]: time="2025-11-05T23:42:34.623339553Z" level=info msg="Completed buildkit initialization" Nov 5 23:42:34.628844 dockerd[2388]: time="2025-11-05T23:42:34.628803857Z" level=info msg="Daemon has completed initialization" Nov 5 23:42:34.628844 dockerd[2388]: time="2025-11-05T23:42:34.628868041Z" level=info msg="API listen on /run/docker.sock" Nov 5 23:42:34.629113 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 5 23:42:35.625433 containerd[1909]: time="2025-11-05T23:42:35.625381561Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 5 23:42:37.392843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2215978590.mount: Deactivated successfully. Nov 5 23:42:37.546497 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 5 23:42:37.547922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 23:42:37.679641 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 23:42:37.682469 (kubelet)[2609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 23:42:37.830805 kubelet[2609]: E1105 23:42:37.830761 2609 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 23:42:37.832996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 23:42:37.833107 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 23:42:37.833625 systemd[1]: kubelet.service: Consumed 102ms CPU time, 107.1M memory peak. 
Nov 5 23:42:45.425322 containerd[1909]: time="2025-11-05T23:42:45.424744498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:42:45.427636 containerd[1909]: time="2025-11-05T23:42:45.427612778Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390228" Nov 5 23:42:45.432736 containerd[1909]: time="2025-11-05T23:42:45.432715134Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:42:45.448180 containerd[1909]: time="2025-11-05T23:42:45.448147933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:42:45.448852 containerd[1909]: time="2025-11-05T23:42:45.448829756Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 9.823411475s" Nov 5 23:42:45.448944 containerd[1909]: time="2025-11-05T23:42:45.448931431Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Nov 5 23:42:45.450116 containerd[1909]: time="2025-11-05T23:42:45.450087574Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 5 23:42:46.786226 containerd[1909]: time="2025-11-05T23:42:46.786176365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:42:46.790706 containerd[1909]: time="2025-11-05T23:42:46.790673044Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547917" Nov 5 23:42:46.794409 containerd[1909]: time="2025-11-05T23:42:46.794374744Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:42:46.798565 containerd[1909]: time="2025-11-05T23:42:46.798530220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:42:46.799691 containerd[1909]: time="2025-11-05T23:42:46.799590816Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.349362477s" Nov 5 23:42:46.799691 containerd[1909]: time="2025-11-05T23:42:46.799618537Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Nov 5 23:42:46.800073 containerd[1909]: 
time="2025-11-05T23:42:46.800036871Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 5 23:42:47.852648 containerd[1909]: time="2025-11-05T23:42:47.852546026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:42:47.854817 containerd[1909]: time="2025-11-05T23:42:47.854786245Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295977" Nov 5 23:42:47.857659 containerd[1909]: time="2025-11-05T23:42:47.857623420Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:42:47.862361 containerd[1909]: time="2025-11-05T23:42:47.862325682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:42:47.862993 containerd[1909]: time="2025-11-05T23:42:47.862878805Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.062716842s" Nov 5 23:42:47.862993 containerd[1909]: time="2025-11-05T23:42:47.862903894Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Nov 5 23:42:47.863476 containerd[1909]: time="2025-11-05T23:42:47.863444960Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 5 23:42:48.046720 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 5 23:42:48.048537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 23:42:48.132284 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 23:42:48.135409 (kubelet)[2684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 23:42:48.237658 kubelet[2684]: E1105 23:42:48.237610 2684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 23:42:48.239682 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 23:42:48.239790 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 23:42:48.240288 systemd[1]: kubelet.service: Consumed 174ms CPU time, 105.9M memory peak. Nov 5 23:42:49.972458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount128880828.mount: Deactivated successfully. Nov 5 23:42:50.541102 update_engine[1868]: I20251105 23:42:50.540608 1868 update_attempter.cc:509] Updating boot flags... 
Nov 5 23:42:55.964992 containerd[1909]: time="2025-11-05T23:42:55.964915246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:42:56.074727 containerd[1909]: time="2025-11-05T23:42:56.074663522Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240106" Nov 5 23:42:57.471327 containerd[1909]: time="2025-11-05T23:42:57.471251799Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:42:57.476465 containerd[1909]: time="2025-11-05T23:42:57.476107774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:42:57.476465 containerd[1909]: time="2025-11-05T23:42:57.476362256Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 9.612725689s" Nov 5 23:42:57.476465 containerd[1909]: time="2025-11-05T23:42:57.476385353Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Nov 5 23:42:57.477087 containerd[1909]: time="2025-11-05T23:42:57.477071620Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 5 23:42:58.296615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 5 23:42:58.298545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 23:42:58.395616 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 23:42:58.398523 (kubelet)[2876]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 23:42:58.423328 kubelet[2876]: E1105 23:42:58.423205 2876 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 23:42:58.425297 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 23:42:58.425407 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 23:42:58.425788 systemd[1]: kubelet.service: Consumed 102ms CPU time, 105.1M memory peak. Nov 5 23:43:07.374400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2094233344.mount: Deactivated successfully. Nov 5 23:43:08.546645 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Nov 5 23:43:08.547787 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 23:43:09.507226 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
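
The kubelet unit keeps being rescheduled by systemd after each failure; the "Scheduled restart job" entries for counters 4, 5 and 6 above land roughly ten seconds apart (23:42:48.05, 23:42:58.30, 23:43:08.55), which is consistent with a restart delay on the order of 10s, though the unit file itself is not shown in this log. A small sketch computing those gaps from the logged timestamps (values copied from the entries above):

from datetime import datetime

# "Scheduled restart job, restart counter is at N" times copied from the log above.
scheduled = ['23:42:48.046720', '23:42:58.296615', '23:43:08.546645']
times = [datetime.strptime(t, '%H:%M:%S.%f') for t in scheduled]

for earlier, later in zip(times, times[1:]):
    gap = (later - earlier).total_seconds()
    print(f'restart scheduled {gap:.3f}s after the previous one')  # ~10.25s each
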
Nov 5 23:43:09.509656 (kubelet)[2894]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 23:43:09.533954 kubelet[2894]: E1105 23:43:09.533915 2894 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 23:43:09.535915 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 23:43:09.536107 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 23:43:09.536700 systemd[1]: kubelet.service: Consumed 99ms CPU time, 104.7M memory peak. Nov 5 23:43:10.566859 containerd[1909]: time="2025-11-05T23:43:10.566808635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:10.575492 containerd[1909]: time="2025-11-05T23:43:10.575463113Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Nov 5 23:43:10.581539 containerd[1909]: time="2025-11-05T23:43:10.581509465Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:10.586178 containerd[1909]: time="2025-11-05T23:43:10.586147794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:10.586581 containerd[1909]: time="2025-11-05T23:43:10.586497158Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 13.109327238s" Nov 5 23:43:10.586581 containerd[1909]: time="2025-11-05T23:43:10.586523318Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Nov 5 23:43:10.586924 containerd[1909]: time="2025-11-05T23:43:10.586902043Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 5 23:43:11.110236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3797880440.mount: Deactivated successfully. 
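
Every one of the failed kubelet starts above exits for the same reason: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-managed node that file is written during kubeadm init/join, so the crash loop resolves itself once bootstrap runs; the restarts before that point are expected noise. As an illustration only, a sketch that performs the same existence check and shows the general shape of a minimal KubeletConfiguration document (the apiVersion and kind are the real identifiers, but the cgroupDriver value is an assumption, not read from this node):

import os
import textwrap

CONFIG_PATH = '/var/lib/kubelet/config.yaml'

# Illustrative only: cgroupDriver here is an assumed example value.
MINIMAL_CONFIG = textwrap.dedent('''\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
''')

if not os.path.exists(CONFIG_PATH):
    # Mirrors the condition behind the "no such file or directory" errors above;
    # on a kubeadm node the file is normally created by "kubeadm init"/"kubeadm join".
    print(f'{CONFIG_PATH} missing - kubelet will exit until bootstrap writes it')
    print(MINIMAL_CONFIG)
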
Nov 5 23:43:11.131161 containerd[1909]: time="2025-11-05T23:43:11.130715279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 23:43:11.135717 containerd[1909]: time="2025-11-05T23:43:11.135680699Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Nov 5 23:43:11.142530 containerd[1909]: time="2025-11-05T23:43:11.142483596Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 23:43:11.146650 containerd[1909]: time="2025-11-05T23:43:11.146604620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 23:43:11.147090 containerd[1909]: time="2025-11-05T23:43:11.146897294Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 559.969937ms" Nov 5 23:43:11.147090 containerd[1909]: time="2025-11-05T23:43:11.146924462Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 5 23:43:11.147302 containerd[1909]: time="2025-11-05T23:43:11.147283042Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 5 23:43:11.719500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4119214873.mount: Deactivated successfully. 
Nov 5 23:43:13.972621 containerd[1909]: time="2025-11-05T23:43:13.972017096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:13.976412 containerd[1909]: time="2025-11-05T23:43:13.976215315Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465857" Nov 5 23:43:13.980664 containerd[1909]: time="2025-11-05T23:43:13.980637621Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:13.987080 containerd[1909]: time="2025-11-05T23:43:13.987048913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:13.987805 containerd[1909]: time="2025-11-05T23:43:13.987778113Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.840469365s" Nov 5 23:43:13.987893 containerd[1909]: time="2025-11-05T23:43:13.987879260Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Nov 5 23:43:17.334642 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 23:43:17.335429 systemd[1]: kubelet.service: Consumed 99ms CPU time, 104.7M memory peak. Nov 5 23:43:17.343121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 23:43:17.357708 systemd[1]: Reload requested from client PID 3032 ('systemctl') (unit session-9.scope)... Nov 5 23:43:17.357821 systemd[1]: Reloading... Nov 5 23:43:17.447625 zram_generator::config[3095]: No configuration found. Nov 5 23:43:17.585569 systemd[1]: Reloading finished in 227 ms. Nov 5 23:43:18.064814 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 5 23:43:18.064894 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 5 23:43:18.065135 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 23:43:18.066634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 23:43:22.337875 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 23:43:22.346899 (kubelet)[3144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 23:43:22.374145 kubelet[3144]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 23:43:22.374145 kubelet[3144]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 23:43:22.374145 kubelet[3144]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 5 23:43:22.374470 kubelet[3144]: I1105 23:43:22.374183 3144 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 23:43:22.734865 kubelet[3144]: I1105 23:43:22.734752 3144 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 5 23:43:22.734865 kubelet[3144]: I1105 23:43:22.734786 3144 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 23:43:22.735014 kubelet[3144]: I1105 23:43:22.734982 3144 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 23:43:22.748200 kubelet[3144]: E1105 23:43:22.748165 3144 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 23:43:22.751538 kubelet[3144]: I1105 23:43:22.751011 3144 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 23:43:22.757652 kubelet[3144]: I1105 23:43:22.757630 3144 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 23:43:22.760096 kubelet[3144]: I1105 23:43:22.760075 3144 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 5 23:43:22.761287 kubelet[3144]: I1105 23:43:22.761254 3144 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 23:43:22.761408 kubelet[3144]: I1105 23:43:22.761288 3144 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.0-n-7f88f0cba0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 23:43:22.761481 kubelet[3144]: I1105 23:43:22.761414 3144 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 23:43:22.761481 kubelet[3144]: I1105 23:43:22.761421 
3144 container_manager_linux.go:303] "Creating device plugin manager" Nov 5 23:43:22.762141 kubelet[3144]: I1105 23:43:22.762121 3144 state_mem.go:36] "Initialized new in-memory state store" Nov 5 23:43:22.764577 kubelet[3144]: I1105 23:43:22.764549 3144 kubelet.go:480] "Attempting to sync node with API server" Nov 5 23:43:22.764676 kubelet[3144]: I1105 23:43:22.764661 3144 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 23:43:22.764696 kubelet[3144]: I1105 23:43:22.764690 3144 kubelet.go:386] "Adding apiserver pod source" Nov 5 23:43:22.764712 kubelet[3144]: I1105 23:43:22.764698 3144 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 23:43:22.765905 kubelet[3144]: E1105 23:43:22.765870 3144 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-n-7f88f0cba0&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 23:43:22.766692 kubelet[3144]: I1105 23:43:22.766666 3144 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 23:43:22.767047 kubelet[3144]: I1105 23:43:22.767029 3144 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 23:43:22.767092 kubelet[3144]: W1105 23:43:22.767080 3144 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 5 23:43:22.769937 kubelet[3144]: I1105 23:43:22.769768 3144 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 23:43:22.769937 kubelet[3144]: I1105 23:43:22.769801 3144 server.go:1289] "Started kubelet" Nov 5 23:43:22.772939 kubelet[3144]: E1105 23:43:22.772917 3144 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 23:43:22.773731 kubelet[3144]: E1105 23:43:22.772963 3144 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.34:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.1.0-n-7f88f0cba0.187540e9f5246e3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.1.0-n-7f88f0cba0,UID:ci-4459.1.0-n-7f88f0cba0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.1.0-n-7f88f0cba0,},FirstTimestamp:2025-11-05 23:43:22.769780287 +0000 UTC m=+0.417220880,LastTimestamp:2025-11-05 23:43:22.769780287 +0000 UTC m=+0.417220880,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.1.0-n-7f88f0cba0,}" Nov 5 23:43:22.774560 kubelet[3144]: I1105 23:43:22.774417 3144 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 23:43:22.775610 kubelet[3144]: E1105 23:43:22.775589 3144 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 23:43:22.776713 kubelet[3144]: I1105 23:43:22.776538 3144 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 23:43:22.777128 kubelet[3144]: I1105 23:43:22.777101 3144 server.go:317] "Adding debug handlers to kubelet server" Nov 5 23:43:22.779551 kubelet[3144]: I1105 23:43:22.779502 3144 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 23:43:22.779739 kubelet[3144]: I1105 23:43:22.779720 3144 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 23:43:22.779897 kubelet[3144]: I1105 23:43:22.779880 3144 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 23:43:22.781255 kubelet[3144]: I1105 23:43:22.781235 3144 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 23:43:22.781334 kubelet[3144]: I1105 23:43:22.781322 3144 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 23:43:22.781377 kubelet[3144]: I1105 23:43:22.781367 3144 reconciler.go:26] "Reconciler: start to sync state" Nov 5 23:43:22.781941 kubelet[3144]: E1105 23:43:22.781918 3144 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 23:43:22.782113 kubelet[3144]: I1105 23:43:22.782092 3144 factory.go:223] Registration of the systemd container factory successfully Nov 5 23:43:22.782238 kubelet[3144]: I1105 23:43:22.782162 3144 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 23:43:22.783207 kubelet[3144]: I1105 23:43:22.783188 3144 factory.go:223] Registration of the containerd container factory successfully Nov 5 23:43:22.784532 kubelet[3144]: E1105 23:43:22.784217 3144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" Nov 5 23:43:22.788768 kubelet[3144]: E1105 23:43:22.788727 3144 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-7f88f0cba0?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="200ms" Nov 5 23:43:22.797299 kubelet[3144]: I1105 23:43:22.797277 3144 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 23:43:22.797299 kubelet[3144]: I1105 23:43:22.797293 3144 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 23:43:22.797385 kubelet[3144]: I1105 23:43:22.797310 3144 state_mem.go:36] "Initialized new in-memory state store" Nov 5 23:43:22.884498 kubelet[3144]: E1105 23:43:22.884456 3144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" Nov 5 23:43:22.985628 kubelet[3144]: E1105 23:43:22.985471 3144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" Nov 5 23:43:22.989975 kubelet[3144]: E1105 23:43:22.989943 3144 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-7f88f0cba0?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="400ms" Nov 5 23:43:23.086372 kubelet[3144]: E1105 23:43:23.086337 3144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" Nov 5 23:43:23.186990 kubelet[3144]: E1105 23:43:23.186954 3144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" Nov 5 23:43:23.287711 kubelet[3144]: E1105 23:43:23.287593 3144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" Nov 5 23:43:23.388115 kubelet[3144]: E1105 23:43:23.388071 3144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" Nov 5 23:43:23.390570 kubelet[3144]: E1105 23:43:23.390536 3144 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-7f88f0cba0?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="800ms" Nov 5 23:43:23.489055 kubelet[3144]: E1105 23:43:23.489015 3144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" Nov 5 23:43:23.589610 kubelet[3144]: E1105 23:43:23.589566 3144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" Nov 5 23:43:23.642189 kubelet[3144]: E1105 23:43:23.642128 3144 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-n-7f88f0cba0&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 23:43:23.677355 kubelet[3144]: I1105 23:43:23.677328 3144 policy_none.go:49] "None policy: Start" Nov 5 23:43:23.677355 kubelet[3144]: I1105 23:43:23.677359 3144 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 23:43:23.677437 kubelet[3144]: I1105 23:43:23.677371 3144 state_mem.go:35] "Initializing new in-memory state store" Nov 5 23:43:23.689652 kubelet[3144]: E1105 23:43:23.689629 3144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" Nov 5 23:43:23.694668 kubelet[3144]: I1105 23:43:23.694622 3144 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 5 23:43:23.695673 kubelet[3144]: I1105 23:43:23.695655 3144 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 5 23:43:23.695818 kubelet[3144]: I1105 23:43:23.695682 3144 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 5 23:43:23.695818 kubelet[3144]: I1105 23:43:23.695701 3144 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
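
The container-manager NodeConfig blob logged a few entries back carries the kubelet's default hard-eviction thresholds in structured form. A short sketch that flattens that structure into the equivalent --eviction-hard notation (the list below is transcribed from the HardEvictionThresholds section of the NodeConfig entry above):

# Transcribed from the HardEvictionThresholds section of the NodeConfig entry above.
thresholds = [
    {'Signal': 'imagefs.inodesFree', 'Quantity': None,    'Percentage': 0.05},
    {'Signal': 'memory.available',   'Quantity': '100Mi', 'Percentage': 0},
    {'Signal': 'nodefs.available',   'Quantity': None,    'Percentage': 0.1},
    {'Signal': 'nodefs.inodesFree',  'Quantity': None,    'Percentage': 0.05},
    {'Signal': 'imagefs.available',  'Quantity': None,    'Percentage': 0.15},
]

def render(t):
    # A set Quantity takes precedence; otherwise the percentage form is used.
    value = t['Quantity'] if t['Quantity'] else f"{t['Percentage'] * 100:g}%"
    return f"{t['Signal']}<{value}"

print('--eviction-hard=' + ','.join(render(t) for t in thresholds))
# imagefs.inodesFree<5%,memory.available<100Mi,nodefs.available<10%,... (the kubelet defaults)
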
Nov 5 23:43:23.695818 kubelet[3144]: I1105 23:43:23.695707 3144 kubelet.go:2436] "Starting kubelet main sync loop" Nov 5 23:43:23.696382 kubelet[3144]: E1105 23:43:23.696358 3144 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 23:43:23.697844 kubelet[3144]: E1105 23:43:23.697822 3144 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 23:43:23.729386 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 5 23:43:23.739489 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 5 23:43:23.742124 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 5 23:43:23.751166 kubelet[3144]: E1105 23:43:23.751118 3144 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 23:43:23.751295 kubelet[3144]: I1105 23:43:23.751280 3144 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 23:43:23.751329 kubelet[3144]: I1105 23:43:23.751293 3144 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 23:43:23.751513 kubelet[3144]: I1105 23:43:23.751493 3144 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 23:43:23.753143 kubelet[3144]: E1105 23:43:23.753061 3144 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 5 23:43:23.753143 kubelet[3144]: E1105 23:43:23.753117 3144 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.1.0-n-7f88f0cba0\" not found" Nov 5 23:43:23.853808 kubelet[3144]: I1105 23:43:23.853709 3144 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:23.854290 kubelet[3144]: E1105 23:43:23.854267 3144 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:23.887747 kubelet[3144]: I1105 23:43:23.887718 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f0ea61be72df36c37f94d50cd7e753e-ca-certs\") pod \"kube-apiserver-ci-4459.1.0-n-7f88f0cba0\" (UID: \"8f0ea61be72df36c37f94d50cd7e753e\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:23.887747 kubelet[3144]: I1105 23:43:23.887747 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f0ea61be72df36c37f94d50cd7e753e-k8s-certs\") pod \"kube-apiserver-ci-4459.1.0-n-7f88f0cba0\" (UID: \"8f0ea61be72df36c37f94d50cd7e753e\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:24.069639 kubelet[3144]: I1105 23:43:23.887768 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f0ea61be72df36c37f94d50cd7e753e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.0-n-7f88f0cba0\" (UID: \"8f0ea61be72df36c37f94d50cd7e753e\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:24.069639 kubelet[3144]: E1105 23:43:23.907120 3144 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 23:43:24.069639 kubelet[3144]: E1105 23:43:23.937680 3144 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 23:43:24.069639 kubelet[3144]: I1105 23:43:24.055871 3144 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:24.069639 kubelet[3144]: E1105 23:43:24.056187 3144 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:24.170069 systemd[1]: Created slice kubepods-burstable-pod8f0ea61be72df36c37f94d50cd7e753e.slice - libcontainer container kubepods-burstable-pod8f0ea61be72df36c37f94d50cd7e753e.slice. 
Nov 5 23:43:24.177104 kubelet[3144]: E1105 23:43:24.177077 3144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:24.178060 containerd[1909]: time="2025-11-05T23:43:24.177864823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.0-n-7f88f0cba0,Uid:8f0ea61be72df36c37f94d50cd7e753e,Namespace:kube-system,Attempt:0,}" Nov 5 23:43:24.189329 kubelet[3144]: I1105 23:43:24.189218 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c3d0137cc1f59201739415849b95363-k8s-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-7f88f0cba0\" (UID: \"9c3d0137cc1f59201739415849b95363\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:24.189329 kubelet[3144]: I1105 23:43:24.189247 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3d0137cc1f59201739415849b95363-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.0-n-7f88f0cba0\" (UID: \"9c3d0137cc1f59201739415849b95363\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:24.189329 kubelet[3144]: I1105 23:43:24.189267 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c3d0137cc1f59201739415849b95363-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.0-n-7f88f0cba0\" (UID: \"9c3d0137cc1f59201739415849b95363\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:24.189329 kubelet[3144]: I1105 23:43:24.189280 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c3d0137cc1f59201739415849b95363-ca-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-7f88f0cba0\" (UID: \"9c3d0137cc1f59201739415849b95363\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:24.189329 kubelet[3144]: I1105 23:43:24.189292 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9c3d0137cc1f59201739415849b95363-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.0-n-7f88f0cba0\" (UID: \"9c3d0137cc1f59201739415849b95363\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:24.191947 kubelet[3144]: E1105 23:43:24.191915 3144 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-7f88f0cba0?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="1.6s" Nov 5 23:43:24.458738 kubelet[3144]: I1105 23:43:24.458638 3144 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:24.459279 kubelet[3144]: E1105 23:43:24.459251 3144 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:24.714069 kubelet[3144]: E1105 23:43:24.713948 3144 reflector.go:200] "Failed to watch" 
err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 23:43:24.764292 kubelet[3144]: E1105 23:43:24.764245 3144 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 23:43:25.261305 kubelet[3144]: I1105 23:43:25.261254 3144 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:25.261674 kubelet[3144]: E1105 23:43:25.261647 3144 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:25.705518 kubelet[3144]: E1105 23:43:25.705474 3144 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 23:43:25.793195 kubelet[3144]: E1105 23:43:25.793147 3144 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-7f88f0cba0?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="3.2s" Nov 5 23:43:29.464227 kubelet[3144]: E1105 23:43:26.320202 3144 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 23:43:29.464227 kubelet[3144]: E1105 23:43:26.334812 3144 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 23:43:29.464227 kubelet[3144]: E1105 23:43:26.520392 3144 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-n-7f88f0cba0&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 23:43:29.464227 kubelet[3144]: I1105 23:43:26.864499 3144 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:29.464227 kubelet[3144]: E1105 23:43:26.864840 3144 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:29.464227 
kubelet[3144]: E1105 23:43:28.994262 3144 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-7f88f0cba0?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="6.4s" Nov 5 23:43:29.464692 kubelet[3144]: E1105 23:43:29.047405 3144 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 23:43:30.219434 kubelet[3144]: E1105 23:43:29.775289 3144 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 23:43:30.219434 kubelet[3144]: I1105 23:43:30.067188 3144 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:30.219434 kubelet[3144]: E1105 23:43:30.067535 3144 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:30.394098 kubelet[3144]: E1105 23:43:30.394042 3144 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 23:43:30.523709 systemd[1]: Created slice kubepods-burstable-pod9c3d0137cc1f59201739415849b95363.slice - libcontainer container kubepods-burstable-pod9c3d0137cc1f59201739415849b95363.slice. 
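
The "Failed to ensure lease exists, will retry" entries double their retry interval on every failure: 200ms, 400ms, 800ms, 1.6s, 3.2s and now 6.4s. A sketch of that observed exponential backoff (the starting interval and factor are read off the log; no cap is shown in this excerpt, so none is assumed):

def backoff_intervals(start_s=0.2, factor=2, attempts=6):
    """Yield the retry intervals observed in the lease-controller errors above."""
    interval = start_s
    for _ in range(attempts):
        yield interval
        interval *= factor

print([f'{i:g}s' for i in backoff_intervals()])
# ['0.2s', '0.4s', '0.8s', '1.6s', '3.2s', '6.4s'] - matching the logged intervals
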
Nov 5 23:43:30.525901 kubelet[3144]: E1105 23:43:30.525871 3144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:30.526591 containerd[1909]: time="2025-11-05T23:43:30.526543866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.0-n-7f88f0cba0,Uid:9c3d0137cc1f59201739415849b95363,Namespace:kube-system,Attempt:0,}" Nov 5 23:43:30.623987 kubelet[3144]: I1105 23:43:30.623946 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/775e33607b844b9d5a82aea119a3686d-kubeconfig\") pod \"kube-scheduler-ci-4459.1.0-n-7f88f0cba0\" (UID: \"775e33607b844b9d5a82aea119a3686d\") " pod="kube-system/kube-scheduler-ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:31.321202 kubelet[3144]: E1105 23:43:31.321158 3144 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-n-7f88f0cba0&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 23:43:32.465304 kubelet[3144]: E1105 23:43:31.523606 3144 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.34:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.1.0-n-7f88f0cba0.187540e9f5246e3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.1.0-n-7f88f0cba0,UID:ci-4459.1.0-n-7f88f0cba0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.1.0-n-7f88f0cba0,},FirstTimestamp:2025-11-05 23:43:22.769780287 +0000 UTC m=+0.417220880,LastTimestamp:2025-11-05 23:43:22.769780287 +0000 UTC m=+0.417220880,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.1.0-n-7f88f0cba0,}" Nov 5 23:43:32.465304 kubelet[3144]: E1105 23:43:32.133687 3144 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 23:43:32.478510 systemd[1]: Created slice kubepods-burstable-pod775e33607b844b9d5a82aea119a3686d.slice - libcontainer container kubepods-burstable-pod775e33607b844b9d5a82aea119a3686d.slice. 
Nov 5 23:43:32.480662 kubelet[3144]: E1105 23:43:32.480632 3144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:32.481337 containerd[1909]: time="2025-11-05T23:43:32.481302735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.0-n-7f88f0cba0,Uid:775e33607b844b9d5a82aea119a3686d,Namespace:kube-system,Attempt:0,}" Nov 5 23:43:32.533623 containerd[1909]: time="2025-11-05T23:43:32.533208496Z" level=info msg="connecting to shim a640ee70f1e2cdacb571782fdc231b568db25a8888c46b4f1f456180bc4069dd" address="unix:///run/containerd/s/fb5383eb3b71c9089f5223f13333c58b3f0fbe7cf77403f57b5cc7c4fca0c87c" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:43:32.549699 containerd[1909]: time="2025-11-05T23:43:32.549661330Z" level=info msg="connecting to shim 7e4249ee45c5b305e877a300ccc4d635b3acfe5e5ef9734a3b21db9c1fd181f7" address="unix:///run/containerd/s/900f191a0b611363c323f0980a19f5227d8d8697ef75c541e8c0819e318f457d" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:43:32.563962 containerd[1909]: time="2025-11-05T23:43:32.563915688Z" level=info msg="connecting to shim eaffb8cf89086015b989d154f1d9ad6456f3551e364b1ce989576f8651db11be" address="unix:///run/containerd/s/51eb33ab570562e61470f491bbef625784c993ba652436d00ba6cb7ad017806d" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:43:32.568855 systemd[1]: Started cri-containerd-a640ee70f1e2cdacb571782fdc231b568db25a8888c46b4f1f456180bc4069dd.scope - libcontainer container a640ee70f1e2cdacb571782fdc231b568db25a8888c46b4f1f456180bc4069dd. Nov 5 23:43:32.582847 systemd[1]: Started cri-containerd-7e4249ee45c5b305e877a300ccc4d635b3acfe5e5ef9734a3b21db9c1fd181f7.scope - libcontainer container 7e4249ee45c5b305e877a300ccc4d635b3acfe5e5ef9734a3b21db9c1fd181f7. Nov 5 23:43:32.594724 systemd[1]: Started cri-containerd-eaffb8cf89086015b989d154f1d9ad6456f3551e364b1ce989576f8651db11be.scope - libcontainer container eaffb8cf89086015b989d154f1d9ad6456f3551e364b1ce989576f8651db11be. 
Nov 5 23:43:32.634145 containerd[1909]: time="2025-11-05T23:43:32.633949366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.0-n-7f88f0cba0,Uid:9c3d0137cc1f59201739415849b95363,Namespace:kube-system,Attempt:0,} returns sandbox id \"a640ee70f1e2cdacb571782fdc231b568db25a8888c46b4f1f456180bc4069dd\"" Nov 5 23:43:32.643042 containerd[1909]: time="2025-11-05T23:43:32.642979108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.0-n-7f88f0cba0,Uid:775e33607b844b9d5a82aea119a3686d,Namespace:kube-system,Attempt:0,} returns sandbox id \"eaffb8cf89086015b989d154f1d9ad6456f3551e364b1ce989576f8651db11be\"" Nov 5 23:43:32.647338 containerd[1909]: time="2025-11-05T23:43:32.647252930Z" level=info msg="CreateContainer within sandbox \"a640ee70f1e2cdacb571782fdc231b568db25a8888c46b4f1f456180bc4069dd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 23:43:32.647802 containerd[1909]: time="2025-11-05T23:43:32.647719595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.0-n-7f88f0cba0,Uid:8f0ea61be72df36c37f94d50cd7e753e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e4249ee45c5b305e877a300ccc4d635b3acfe5e5ef9734a3b21db9c1fd181f7\"" Nov 5 23:43:32.652761 containerd[1909]: time="2025-11-05T23:43:32.652416688Z" level=info msg="CreateContainer within sandbox \"eaffb8cf89086015b989d154f1d9ad6456f3551e364b1ce989576f8651db11be\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 23:43:32.658626 containerd[1909]: time="2025-11-05T23:43:32.658547823Z" level=info msg="CreateContainer within sandbox \"7e4249ee45c5b305e877a300ccc4d635b3acfe5e5ef9734a3b21db9c1fd181f7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 23:43:32.677643 containerd[1909]: time="2025-11-05T23:43:32.677566188Z" level=info msg="Container 7e919723093a443c4531e19710ca73b46a95d4778c11c9ded8aa812d5852bd3b: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:43:32.688610 containerd[1909]: time="2025-11-05T23:43:32.688509429Z" level=info msg="Container f1d9b537f767f379e6e26b7249d9fd313801e3e79aedcb04a7470e7670dd59e1: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:43:32.712288 containerd[1909]: time="2025-11-05T23:43:32.712238439Z" level=info msg="CreateContainer within sandbox \"a640ee70f1e2cdacb571782fdc231b568db25a8888c46b4f1f456180bc4069dd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7e919723093a443c4531e19710ca73b46a95d4778c11c9ded8aa812d5852bd3b\"" Nov 5 23:43:32.713635 containerd[1909]: time="2025-11-05T23:43:32.713185888Z" level=info msg="StartContainer for \"7e919723093a443c4531e19710ca73b46a95d4778c11c9ded8aa812d5852bd3b\"" Nov 5 23:43:32.713743 containerd[1909]: time="2025-11-05T23:43:32.713717395Z" level=info msg="Container 663e9b96633615b2a2a1103aa164197ab017f682ee35e5529c4f5a5eed78d05d: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:43:32.714450 containerd[1909]: time="2025-11-05T23:43:32.714325048Z" level=info msg="connecting to shim 7e919723093a443c4531e19710ca73b46a95d4778c11c9ded8aa812d5852bd3b" address="unix:///run/containerd/s/fb5383eb3b71c9089f5223f13333c58b3f0fbe7cf77403f57b5cc7c4fca0c87c" protocol=ttrpc version=3 Nov 5 23:43:32.730459 containerd[1909]: time="2025-11-05T23:43:32.729887315Z" level=info msg="CreateContainer within sandbox \"eaffb8cf89086015b989d154f1d9ad6456f3551e364b1ce989576f8651db11be\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container 
id \"f1d9b537f767f379e6e26b7249d9fd313801e3e79aedcb04a7470e7670dd59e1\"" Nov 5 23:43:32.730991 containerd[1909]: time="2025-11-05T23:43:32.730963905Z" level=info msg="StartContainer for \"f1d9b537f767f379e6e26b7249d9fd313801e3e79aedcb04a7470e7670dd59e1\"" Nov 5 23:43:32.732032 containerd[1909]: time="2025-11-05T23:43:32.731771094Z" level=info msg="connecting to shim f1d9b537f767f379e6e26b7249d9fd313801e3e79aedcb04a7470e7670dd59e1" address="unix:///run/containerd/s/51eb33ab570562e61470f491bbef625784c993ba652436d00ba6cb7ad017806d" protocol=ttrpc version=3 Nov 5 23:43:32.731870 systemd[1]: Started cri-containerd-7e919723093a443c4531e19710ca73b46a95d4778c11c9ded8aa812d5852bd3b.scope - libcontainer container 7e919723093a443c4531e19710ca73b46a95d4778c11c9ded8aa812d5852bd3b. Nov 5 23:43:32.742046 containerd[1909]: time="2025-11-05T23:43:32.741904122Z" level=info msg="CreateContainer within sandbox \"7e4249ee45c5b305e877a300ccc4d635b3acfe5e5ef9734a3b21db9c1fd181f7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"663e9b96633615b2a2a1103aa164197ab017f682ee35e5529c4f5a5eed78d05d\"" Nov 5 23:43:32.743049 containerd[1909]: time="2025-11-05T23:43:32.743004625Z" level=info msg="StartContainer for \"663e9b96633615b2a2a1103aa164197ab017f682ee35e5529c4f5a5eed78d05d\"" Nov 5 23:43:32.746409 containerd[1909]: time="2025-11-05T23:43:32.745810971Z" level=info msg="connecting to shim 663e9b96633615b2a2a1103aa164197ab017f682ee35e5529c4f5a5eed78d05d" address="unix:///run/containerd/s/900f191a0b611363c323f0980a19f5227d8d8697ef75c541e8c0819e318f457d" protocol=ttrpc version=3 Nov 5 23:43:32.749970 systemd[1]: Started cri-containerd-f1d9b537f767f379e6e26b7249d9fd313801e3e79aedcb04a7470e7670dd59e1.scope - libcontainer container f1d9b537f767f379e6e26b7249d9fd313801e3e79aedcb04a7470e7670dd59e1. Nov 5 23:43:32.763736 systemd[1]: Started cri-containerd-663e9b96633615b2a2a1103aa164197ab017f682ee35e5529c4f5a5eed78d05d.scope - libcontainer container 663e9b96633615b2a2a1103aa164197ab017f682ee35e5529c4f5a5eed78d05d. 
Nov 5 23:43:32.788509 containerd[1909]: time="2025-11-05T23:43:32.788387300Z" level=info msg="StartContainer for \"7e919723093a443c4531e19710ca73b46a95d4778c11c9ded8aa812d5852bd3b\" returns successfully" Nov 5 23:43:32.833315 containerd[1909]: time="2025-11-05T23:43:32.833221284Z" level=info msg="StartContainer for \"f1d9b537f767f379e6e26b7249d9fd313801e3e79aedcb04a7470e7670dd59e1\" returns successfully" Nov 5 23:43:32.834736 containerd[1909]: time="2025-11-05T23:43:32.834712752Z" level=info msg="StartContainer for \"663e9b96633615b2a2a1103aa164197ab017f682ee35e5529c4f5a5eed78d05d\" returns successfully" Nov 5 23:43:33.718154 kubelet[3144]: E1105 23:43:33.718115 3144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:33.723666 kubelet[3144]: E1105 23:43:33.722832 3144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:33.725031 kubelet[3144]: E1105 23:43:33.725010 3144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:33.753767 kubelet[3144]: E1105 23:43:33.753721 3144 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.1.0-n-7f88f0cba0\" not found" Nov 5 23:43:34.727656 kubelet[3144]: E1105 23:43:34.726666 3144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:34.727656 kubelet[3144]: E1105 23:43:34.726824 3144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:34.727656 kubelet[3144]: E1105 23:43:34.726958 3144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:34.732359 kubelet[3144]: E1105 23:43:34.732326 3144 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4459.1.0-n-7f88f0cba0" not found Nov 5 23:43:35.117367 kubelet[3144]: E1105 23:43:35.117268 3144 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4459.1.0-n-7f88f0cba0" not found Nov 5 23:43:35.398043 kubelet[3144]: E1105 23:43:35.397917 3144 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.1.0-n-7f88f0cba0\" not found" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:35.554568 kubelet[3144]: E1105 23:43:35.554533 3144 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4459.1.0-n-7f88f0cba0" not found Nov 5 23:43:35.729418 kubelet[3144]: E1105 23:43:35.729115 3144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:35.730434 kubelet[3144]: E1105 23:43:35.729943 3144 
kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:35.769382 kubelet[3144]: E1105 23:43:35.769324 3144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:36.457433 kubelet[3144]: E1105 23:43:36.457387 3144 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4459.1.0-n-7f88f0cba0" not found Nov 5 23:43:36.469932 kubelet[3144]: I1105 23:43:36.469896 3144 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:36.476875 kubelet[3144]: I1105 23:43:36.476833 3144 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:36.476875 kubelet[3144]: E1105 23:43:36.476874 3144 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.1.0-n-7f88f0cba0\": node \"ci-4459.1.0-n-7f88f0cba0\" not found" Nov 5 23:43:36.483709 kubelet[3144]: E1105 23:43:36.483668 3144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" Nov 5 23:43:36.584124 kubelet[3144]: E1105 23:43:36.584076 3144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" Nov 5 23:43:36.684523 kubelet[3144]: E1105 23:43:36.684489 3144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" Nov 5 23:43:36.784991 kubelet[3144]: E1105 23:43:36.784857 3144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" Nov 5 23:43:36.885435 kubelet[3144]: E1105 23:43:36.885388 3144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" Nov 5 23:43:36.986210 kubelet[3144]: E1105 23:43:36.986156 3144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" Nov 5 23:43:36.995507 systemd[1]: Reload requested from client PID 3428 ('systemctl') (unit session-9.scope)... Nov 5 23:43:36.995521 systemd[1]: Reloading... Nov 5 23:43:37.076064 zram_generator::config[3475]: No configuration found. Nov 5 23:43:37.089198 kubelet[3144]: E1105 23:43:37.088882 3144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" Nov 5 23:43:37.190001 kubelet[3144]: E1105 23:43:37.189946 3144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-7f88f0cba0\" not found" Nov 5 23:43:37.247796 systemd[1]: Reloading finished in 251 ms. Nov 5 23:43:37.267869 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 23:43:37.280618 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 23:43:37.281019 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 23:43:37.281083 systemd[1]: kubelet.service: Consumed 697ms CPU time, 126.1M memory peak. Nov 5 23:43:37.283081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
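The reload and restart above were driven by systemctl from the interactive session (session-9.scope): a systemd daemon-reload ("Reloading...", finished in 251 ms) followed by stopping and starting kubelet.service. As a hedged illustration of the same two operations performed over systemd's D-Bus API rather than via systemctl (an assumption for this sketch; the log only shows systemctl), using the go-systemd bindings:

package main

import (
	"context"
	"log"

	"github.com/coreos/go-systemd/v22/dbus" // systemd D-Bus bindings (illustrative choice; the node used systemctl)
)

func main() {
	ctx := context.Background()
	conn, err := dbus.NewWithContext(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Equivalent of "systemctl daemon-reload"; the log above reports this finishing in 251 ms.
	if err := conn.ReloadContext(ctx); err != nil {
		log.Fatal(err)
	}

	// Equivalent of "systemctl restart kubelet.service": the unit is stopped, then started again.
	done := make(chan string, 1)
	if _, err := conn.RestartUnitContext(ctx, "kubelet.service", "replace", done); err != nil {
		log.Fatal(err)
	}
	log.Printf("kubelet restart job result: %s", <-done)
}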
Nov 5 23:43:37.533744 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 23:43:37.537487 (kubelet)[3539]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 23:43:37.574029 kubelet[3539]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 23:43:37.574588 kubelet[3539]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 23:43:37.574588 kubelet[3539]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 23:43:37.574684 kubelet[3539]: I1105 23:43:37.574640 3539 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 23:43:37.582751 kubelet[3539]: I1105 23:43:37.582700 3539 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 5 23:43:37.582751 kubelet[3539]: I1105 23:43:37.582740 3539 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 23:43:37.582991 kubelet[3539]: I1105 23:43:37.582971 3539 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 23:43:37.584055 kubelet[3539]: I1105 23:43:37.584034 3539 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 5 23:43:37.586794 kubelet[3539]: I1105 23:43:37.586609 3539 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 23:43:37.592414 kubelet[3539]: I1105 23:43:37.592219 3539 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 23:43:37.595661 kubelet[3539]: I1105 23:43:37.595631 3539 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 5 23:43:37.595884 kubelet[3539]: I1105 23:43:37.595847 3539 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 23:43:37.596045 kubelet[3539]: I1105 23:43:37.595879 3539 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.0-n-7f88f0cba0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 23:43:37.596109 kubelet[3539]: I1105 23:43:37.596053 3539 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 23:43:37.596109 kubelet[3539]: I1105 23:43:37.596060 3539 container_manager_linux.go:303] "Creating device plugin manager" Nov 5 23:43:37.596109 kubelet[3539]: I1105 23:43:37.596103 3539 state_mem.go:36] "Initialized new in-memory state store" Nov 5 23:43:37.596330 kubelet[3539]: I1105 23:43:37.596250 3539 kubelet.go:480] "Attempting to sync node with API server" Nov 5 23:43:37.596330 kubelet[3539]: I1105 23:43:37.596264 3539 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 23:43:37.596330 kubelet[3539]: I1105 23:43:37.596290 3539 kubelet.go:386] "Adding apiserver pod source" Nov 5 23:43:37.596330 kubelet[3539]: I1105 23:43:37.596300 3539 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 23:43:37.607561 kubelet[3539]: I1105 23:43:37.607532 3539 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 23:43:37.608606 kubelet[3539]: I1105 23:43:37.608434 3539 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 23:43:37.613168 kubelet[3539]: I1105 23:43:37.613104 3539 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 23:43:37.613703 kubelet[3539]: I1105 23:43:37.613681 3539 server.go:1289] "Started kubelet" Nov 5 23:43:37.615754 kubelet[3539]: I1105 23:43:37.615730 3539 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 23:43:37.621368 kubelet[3539]: I1105 23:43:37.621250 
3539 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 5 23:43:37.623514 kubelet[3539]: I1105 23:43:37.623477 3539 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 23:43:37.624603 kubelet[3539]: I1105 23:43:37.624233 3539 server.go:317] "Adding debug handlers to kubelet server" Nov 5 23:43:37.625054 kubelet[3539]: I1105 23:43:37.625039 3539 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 23:43:37.628237 kubelet[3539]: I1105 23:43:37.628184 3539 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 23:43:37.628400 kubelet[3539]: I1105 23:43:37.628380 3539 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 23:43:37.628600 kubelet[3539]: I1105 23:43:37.628554 3539 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 23:43:37.630736 kubelet[3539]: I1105 23:43:37.630718 3539 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 23:43:37.630821 kubelet[3539]: I1105 23:43:37.630750 3539 factory.go:223] Registration of the systemd container factory successfully Nov 5 23:43:37.630971 kubelet[3539]: I1105 23:43:37.630945 3539 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 23:43:37.631650 kubelet[3539]: I1105 23:43:37.631338 3539 reconciler.go:26] "Reconciler: start to sync state" Nov 5 23:43:37.633667 kubelet[3539]: I1105 23:43:37.633639 3539 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 5 23:43:37.633759 kubelet[3539]: I1105 23:43:37.633750 3539 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 5 23:43:37.633811 kubelet[3539]: I1105 23:43:37.633802 3539 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 23:43:37.633855 kubelet[3539]: I1105 23:43:37.633848 3539 kubelet.go:2436] "Starting kubelet main sync loop" Nov 5 23:43:37.635194 kubelet[3539]: E1105 23:43:37.634260 3539 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 23:43:37.637615 kubelet[3539]: E1105 23:43:37.637537 3539 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 23:43:37.638890 kubelet[3539]: I1105 23:43:37.638875 3539 factory.go:223] Registration of the containerd container factory successfully Nov 5 23:43:37.686201 kubelet[3539]: I1105 23:43:37.686173 3539 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 23:43:37.686201 kubelet[3539]: I1105 23:43:37.686190 3539 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 23:43:37.686201 kubelet[3539]: I1105 23:43:37.686210 3539 state_mem.go:36] "Initialized new in-memory state store" Nov 5 23:43:37.686383 kubelet[3539]: I1105 23:43:37.686321 3539 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 5 23:43:37.686383 kubelet[3539]: I1105 23:43:37.686328 3539 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 5 23:43:37.686383 kubelet[3539]: I1105 23:43:37.686343 3539 policy_none.go:49] "None policy: Start" Nov 5 23:43:37.686383 kubelet[3539]: I1105 23:43:37.686350 3539 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 23:43:37.686383 kubelet[3539]: I1105 23:43:37.686358 3539 state_mem.go:35] "Initializing new in-memory state store" Nov 5 23:43:37.686449 kubelet[3539]: I1105 23:43:37.686422 3539 state_mem.go:75] "Updated machine memory state" Nov 5 23:43:37.689892 kubelet[3539]: E1105 23:43:37.689868 3539 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 23:43:37.690041 kubelet[3539]: I1105 23:43:37.690025 3539 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 23:43:37.690068 kubelet[3539]: I1105 23:43:37.690040 3539 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 23:43:37.690494 kubelet[3539]: I1105 23:43:37.690430 3539 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 23:43:37.693613 kubelet[3539]: E1105 23:43:37.693595 3539 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 5 23:43:37.735336 kubelet[3539]: I1105 23:43:37.735291 3539 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:37.736084 kubelet[3539]: I1105 23:43:37.735746 3539 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:37.736084 kubelet[3539]: I1105 23:43:37.736062 3539 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:37.747994 kubelet[3539]: I1105 23:43:37.747956 3539 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 5 23:43:37.752326 kubelet[3539]: I1105 23:43:37.752220 3539 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 5 23:43:37.752696 kubelet[3539]: I1105 23:43:37.752296 3539 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 5 23:43:37.801399 kubelet[3539]: I1105 23:43:37.801360 3539 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:37.818234 kubelet[3539]: I1105 23:43:37.817851 3539 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:37.818234 kubelet[3539]: I1105 23:43:37.817966 3539 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:37.832280 kubelet[3539]: I1105 23:43:37.832246 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c3d0137cc1f59201739415849b95363-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.0-n-7f88f0cba0\" (UID: \"9c3d0137cc1f59201739415849b95363\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:37.832280 kubelet[3539]: I1105 23:43:37.832278 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f0ea61be72df36c37f94d50cd7e753e-ca-certs\") pod \"kube-apiserver-ci-4459.1.0-n-7f88f0cba0\" (UID: \"8f0ea61be72df36c37f94d50cd7e753e\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:37.832280 kubelet[3539]: I1105 23:43:37.832291 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f0ea61be72df36c37f94d50cd7e753e-k8s-certs\") pod \"kube-apiserver-ci-4459.1.0-n-7f88f0cba0\" (UID: \"8f0ea61be72df36c37f94d50cd7e753e\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:37.835632 kubelet[3539]: I1105 23:43:37.832301 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f0ea61be72df36c37f94d50cd7e753e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.0-n-7f88f0cba0\" (UID: \"8f0ea61be72df36c37f94d50cd7e753e\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:37.835632 kubelet[3539]: I1105 23:43:37.832323 3539 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c3d0137cc1f59201739415849b95363-k8s-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-7f88f0cba0\" (UID: \"9c3d0137cc1f59201739415849b95363\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:37.835632 kubelet[3539]: I1105 23:43:37.832331 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3d0137cc1f59201739415849b95363-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.0-n-7f88f0cba0\" (UID: \"9c3d0137cc1f59201739415849b95363\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:37.835632 kubelet[3539]: I1105 23:43:37.832340 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/775e33607b844b9d5a82aea119a3686d-kubeconfig\") pod \"kube-scheduler-ci-4459.1.0-n-7f88f0cba0\" (UID: \"775e33607b844b9d5a82aea119a3686d\") " pod="kube-system/kube-scheduler-ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:37.835632 kubelet[3539]: I1105 23:43:37.832349 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c3d0137cc1f59201739415849b95363-ca-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-7f88f0cba0\" (UID: \"9c3d0137cc1f59201739415849b95363\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:37.835724 kubelet[3539]: I1105 23:43:37.832358 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9c3d0137cc1f59201739415849b95363-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.0-n-7f88f0cba0\" (UID: \"9c3d0137cc1f59201739415849b95363\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:38.605923 kubelet[3539]: I1105 23:43:38.605638 3539 apiserver.go:52] "Watching apiserver" Nov 5 23:43:38.631615 kubelet[3539]: I1105 23:43:38.631528 3539 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 23:43:38.666731 kubelet[3539]: I1105 23:43:38.664736 3539 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:38.683460 kubelet[3539]: I1105 23:43:38.683416 3539 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 5 23:43:38.683623 kubelet[3539]: E1105 23:43:38.683496 3539 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.1.0-n-7f88f0cba0\" already exists" pod="kube-system/kube-apiserver-ci-4459.1.0-n-7f88f0cba0" Nov 5 23:43:38.695153 kubelet[3539]: I1105 23:43:38.694836 3539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-7f88f0cba0" podStartSLOduration=1.694818229 podStartE2EDuration="1.694818229s" podCreationTimestamp="2025-11-05 23:43:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 23:43:38.684326946 +0000 UTC m=+1.142681701" watchObservedRunningTime="2025-11-05 23:43:38.694818229 +0000 UTC 
m=+1.153172984" Nov 5 23:43:38.705806 kubelet[3539]: I1105 23:43:38.705673 3539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.1.0-n-7f88f0cba0" podStartSLOduration=1.705654491 podStartE2EDuration="1.705654491s" podCreationTimestamp="2025-11-05 23:43:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 23:43:38.695024292 +0000 UTC m=+1.153379055" watchObservedRunningTime="2025-11-05 23:43:38.705654491 +0000 UTC m=+1.164009246" Nov 5 23:43:38.706147 kubelet[3539]: I1105 23:43:38.705911 3539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.1.0-n-7f88f0cba0" podStartSLOduration=1.705902636 podStartE2EDuration="1.705902636s" podCreationTimestamp="2025-11-05 23:43:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 23:43:38.70560237 +0000 UTC m=+1.163957165" watchObservedRunningTime="2025-11-05 23:43:38.705902636 +0000 UTC m=+1.164257423" Nov 5 23:43:43.488675 kubelet[3539]: I1105 23:43:43.488639 3539 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 5 23:43:43.489164 kubelet[3539]: I1105 23:43:43.489056 3539 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 23:43:43.489190 containerd[1909]: time="2025-11-05T23:43:43.488900095Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 5 23:43:44.440707 systemd[1]: Created slice kubepods-besteffort-podccf352f0_1c69_46cc_b228_6259f567808c.slice - libcontainer container kubepods-besteffort-podccf352f0_1c69_46cc_b228_6259f567808c.slice. 
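The pod_startup_latency_tracker entries above report durations that follow directly from the logged timestamps: for kube-controller-manager, podStartE2EDuration (1.694818229s) is observedRunningTime minus podCreationTimestamp, and since firstStartedPulling/lastFinishedPulling are the zero time (static pods, no image pull), podStartSLOduration comes out the same. A small Go snippet (illustrative only, not kubelet code) reproducing that arithmetic from the logged values:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matches Go's default time.Time formatting used in the log entries above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-11-05 23:43:37 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-11-05 23:43:38.694818229 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Prints 1.694818229s, matching podStartE2EDuration for kube-controller-manager above.
	fmt.Println(running.Sub(created))
}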
Nov 5 23:43:44.474146 kubelet[3539]: I1105 23:43:44.474094 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ccf352f0-1c69-46cc-b228-6259f567808c-xtables-lock\") pod \"kube-proxy-pzdp7\" (UID: \"ccf352f0-1c69-46cc-b228-6259f567808c\") " pod="kube-system/kube-proxy-pzdp7" Nov 5 23:43:44.474146 kubelet[3539]: I1105 23:43:44.474140 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ccf352f0-1c69-46cc-b228-6259f567808c-lib-modules\") pod \"kube-proxy-pzdp7\" (UID: \"ccf352f0-1c69-46cc-b228-6259f567808c\") " pod="kube-system/kube-proxy-pzdp7" Nov 5 23:43:44.474146 kubelet[3539]: I1105 23:43:44.474155 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j8r8\" (UniqueName: \"kubernetes.io/projected/ccf352f0-1c69-46cc-b228-6259f567808c-kube-api-access-2j8r8\") pod \"kube-proxy-pzdp7\" (UID: \"ccf352f0-1c69-46cc-b228-6259f567808c\") " pod="kube-system/kube-proxy-pzdp7" Nov 5 23:43:44.474333 kubelet[3539]: I1105 23:43:44.474170 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ccf352f0-1c69-46cc-b228-6259f567808c-kube-proxy\") pod \"kube-proxy-pzdp7\" (UID: \"ccf352f0-1c69-46cc-b228-6259f567808c\") " pod="kube-system/kube-proxy-pzdp7" Nov 5 23:43:44.755598 containerd[1909]: time="2025-11-05T23:43:44.755429046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pzdp7,Uid:ccf352f0-1c69-46cc-b228-6259f567808c,Namespace:kube-system,Attempt:0,}" Nov 5 23:43:44.775930 kubelet[3539]: I1105 23:43:44.775251 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwdgl\" (UniqueName: \"kubernetes.io/projected/9d216a38-d08f-4e8b-981e-db90d6a4931a-kube-api-access-wwdgl\") pod \"tigera-operator-7dcd859c48-p9gk4\" (UID: \"9d216a38-d08f-4e8b-981e-db90d6a4931a\") " pod="tigera-operator/tigera-operator-7dcd859c48-p9gk4" Nov 5 23:43:44.775930 kubelet[3539]: I1105 23:43:44.775292 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9d216a38-d08f-4e8b-981e-db90d6a4931a-var-lib-calico\") pod \"tigera-operator-7dcd859c48-p9gk4\" (UID: \"9d216a38-d08f-4e8b-981e-db90d6a4931a\") " pod="tigera-operator/tigera-operator-7dcd859c48-p9gk4" Nov 5 23:43:44.775601 systemd[1]: Created slice kubepods-besteffort-pod9d216a38_d08f_4e8b_981e_db90d6a4931a.slice - libcontainer container kubepods-besteffort-pod9d216a38_d08f_4e8b_981e_db90d6a4931a.slice. 
Nov 5 23:43:45.078049 containerd[1909]: time="2025-11-05T23:43:45.077991548Z" level=info msg="connecting to shim abef9a560a89ba4cffa815901d0d9d8914db30afd89d2a35577c3fe213b9e86d" address="unix:///run/containerd/s/ad1eec85ab7892504e4c76c28ec512d4268312fc00d5d8f1464be1333150456e" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:43:45.078841 containerd[1909]: time="2025-11-05T23:43:45.078817784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-p9gk4,Uid:9d216a38-d08f-4e8b-981e-db90d6a4931a,Namespace:tigera-operator,Attempt:0,}" Nov 5 23:43:45.103767 systemd[1]: Started cri-containerd-abef9a560a89ba4cffa815901d0d9d8914db30afd89d2a35577c3fe213b9e86d.scope - libcontainer container abef9a560a89ba4cffa815901d0d9d8914db30afd89d2a35577c3fe213b9e86d. Nov 5 23:43:45.177958 containerd[1909]: time="2025-11-05T23:43:45.177836150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pzdp7,Uid:ccf352f0-1c69-46cc-b228-6259f567808c,Namespace:kube-system,Attempt:0,} returns sandbox id \"abef9a560a89ba4cffa815901d0d9d8914db30afd89d2a35577c3fe213b9e86d\"" Nov 5 23:43:45.228753 containerd[1909]: time="2025-11-05T23:43:45.228348290Z" level=info msg="CreateContainer within sandbox \"abef9a560a89ba4cffa815901d0d9d8914db30afd89d2a35577c3fe213b9e86d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 23:43:45.627133 containerd[1909]: time="2025-11-05T23:43:45.627087086Z" level=info msg="Container 991bd2fe0949232ebe186970a9642b0c67b64136c4901bdfeac7b5844790f91e: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:43:45.789329 containerd[1909]: time="2025-11-05T23:43:45.789191532Z" level=info msg="connecting to shim 3b893e6f401cda4cac087cfbf21dc79450d235398bcc0f6cffe1012da685d6c0" address="unix:///run/containerd/s/74c371be6b95d6eacb51078f683791abdff0120d3578a9010a8a2a8a7472dfb7" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:43:45.806730 systemd[1]: Started cri-containerd-3b893e6f401cda4cac087cfbf21dc79450d235398bcc0f6cffe1012da685d6c0.scope - libcontainer container 3b893e6f401cda4cac087cfbf21dc79450d235398bcc0f6cffe1012da685d6c0. Nov 5 23:43:45.834062 containerd[1909]: time="2025-11-05T23:43:45.834015032Z" level=info msg="CreateContainer within sandbox \"abef9a560a89ba4cffa815901d0d9d8914db30afd89d2a35577c3fe213b9e86d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"991bd2fe0949232ebe186970a9642b0c67b64136c4901bdfeac7b5844790f91e\"" Nov 5 23:43:45.835183 containerd[1909]: time="2025-11-05T23:43:45.835150063Z" level=info msg="StartContainer for \"991bd2fe0949232ebe186970a9642b0c67b64136c4901bdfeac7b5844790f91e\"" Nov 5 23:43:45.837720 containerd[1909]: time="2025-11-05T23:43:45.837687141Z" level=info msg="connecting to shim 991bd2fe0949232ebe186970a9642b0c67b64136c4901bdfeac7b5844790f91e" address="unix:///run/containerd/s/ad1eec85ab7892504e4c76c28ec512d4268312fc00d5d8f1464be1333150456e" protocol=ttrpc version=3 Nov 5 23:43:45.857806 systemd[1]: Started cri-containerd-991bd2fe0949232ebe186970a9642b0c67b64136c4901bdfeac7b5844790f91e.scope - libcontainer container 991bd2fe0949232ebe186970a9642b0c67b64136c4901bdfeac7b5844790f91e. 
Nov 5 23:43:45.876455 containerd[1909]: time="2025-11-05T23:43:45.876297093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-p9gk4,Uid:9d216a38-d08f-4e8b-981e-db90d6a4931a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3b893e6f401cda4cac087cfbf21dc79450d235398bcc0f6cffe1012da685d6c0\"" Nov 5 23:43:45.878627 containerd[1909]: time="2025-11-05T23:43:45.878496336Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 5 23:43:45.924190 containerd[1909]: time="2025-11-05T23:43:45.923986883Z" level=info msg="StartContainer for \"991bd2fe0949232ebe186970a9642b0c67b64136c4901bdfeac7b5844790f91e\" returns successfully" Nov 5 23:43:49.403686 kubelet[3539]: I1105 23:43:49.403280 3539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pzdp7" podStartSLOduration=5.403266743 podStartE2EDuration="5.403266743s" podCreationTimestamp="2025-11-05 23:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 23:43:46.687330944 +0000 UTC m=+9.145685699" watchObservedRunningTime="2025-11-05 23:43:49.403266743 +0000 UTC m=+11.861621506" Nov 5 23:43:49.853982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount416715940.mount: Deactivated successfully. Nov 5 23:43:51.568445 containerd[1909]: time="2025-11-05T23:43:51.567950811Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:51.570771 containerd[1909]: time="2025-11-05T23:43:51.570737564Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 5 23:43:51.632598 containerd[1909]: time="2025-11-05T23:43:51.632191835Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:51.678383 containerd[1909]: time="2025-11-05T23:43:51.678337325Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:51.679260 containerd[1909]: time="2025-11-05T23:43:51.679226082Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 5.800699745s" Nov 5 23:43:51.679260 containerd[1909]: time="2025-11-05T23:43:51.679258691Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 5 23:43:51.726594 containerd[1909]: time="2025-11-05T23:43:51.726539563Z" level=info msg="CreateContainer within sandbox \"3b893e6f401cda4cac087cfbf21dc79450d235398bcc0f6cffe1012da685d6c0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 5 23:43:51.871646 containerd[1909]: time="2025-11-05T23:43:51.870468245Z" level=info msg="Container 60fc6720822aa4d30bda488aa5e8379086f03293f0164812566426228ddaa24e: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:43:52.028102 containerd[1909]: time="2025-11-05T23:43:52.028000140Z" level=info msg="CreateContainer within 
sandbox \"3b893e6f401cda4cac087cfbf21dc79450d235398bcc0f6cffe1012da685d6c0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"60fc6720822aa4d30bda488aa5e8379086f03293f0164812566426228ddaa24e\"" Nov 5 23:43:52.028594 containerd[1909]: time="2025-11-05T23:43:52.028493524Z" level=info msg="StartContainer for \"60fc6720822aa4d30bda488aa5e8379086f03293f0164812566426228ddaa24e\"" Nov 5 23:43:52.029457 containerd[1909]: time="2025-11-05T23:43:52.029423762Z" level=info msg="connecting to shim 60fc6720822aa4d30bda488aa5e8379086f03293f0164812566426228ddaa24e" address="unix:///run/containerd/s/74c371be6b95d6eacb51078f683791abdff0120d3578a9010a8a2a8a7472dfb7" protocol=ttrpc version=3 Nov 5 23:43:52.052764 systemd[1]: Started cri-containerd-60fc6720822aa4d30bda488aa5e8379086f03293f0164812566426228ddaa24e.scope - libcontainer container 60fc6720822aa4d30bda488aa5e8379086f03293f0164812566426228ddaa24e. Nov 5 23:43:52.079204 containerd[1909]: time="2025-11-05T23:43:52.079013532Z" level=info msg="StartContainer for \"60fc6720822aa4d30bda488aa5e8379086f03293f0164812566426228ddaa24e\" returns successfully" Nov 5 23:43:52.704598 kubelet[3539]: I1105 23:43:52.704423 3539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-p9gk4" podStartSLOduration=2.902237366 podStartE2EDuration="8.704343645s" podCreationTimestamp="2025-11-05 23:43:44 +0000 UTC" firstStartedPulling="2025-11-05 23:43:45.878011296 +0000 UTC m=+8.336366051" lastFinishedPulling="2025-11-05 23:43:51.680117567 +0000 UTC m=+14.138472330" observedRunningTime="2025-11-05 23:43:52.703652831 +0000 UTC m=+15.162007610" watchObservedRunningTime="2025-11-05 23:43:52.704343645 +0000 UTC m=+15.162698400" Nov 5 23:43:57.356553 sudo[2370]: pam_unix(sudo:session): session closed for user root Nov 5 23:43:57.421613 sshd[2369]: Connection closed by 10.200.16.10 port 55212 Nov 5 23:43:57.423439 sshd-session[2366]: pam_unix(sshd:session): session closed for user core Nov 5 23:43:57.429895 systemd[1]: sshd@6-10.200.20.34:22-10.200.16.10:55212.service: Deactivated successfully. Nov 5 23:43:57.433069 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 23:43:57.433319 systemd[1]: session-9.scope: Consumed 4.184s CPU time, 224.4M memory peak. Nov 5 23:43:57.435123 systemd-logind[1867]: Session 9 logged out. Waiting for processes to exit. Nov 5 23:43:57.437839 systemd-logind[1867]: Removed session 9. Nov 5 23:44:03.892950 systemd[1]: Created slice kubepods-besteffort-pod192195f2_c55c_48c7_a985_35dbc348d119.slice - libcontainer container kubepods-besteffort-pod192195f2_c55c_48c7_a985_35dbc348d119.slice. 
Nov 5 23:44:03.985999 kubelet[3539]: I1105 23:44:03.985950 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4kbl\" (UniqueName: \"kubernetes.io/projected/192195f2-c55c-48c7-a985-35dbc348d119-kube-api-access-f4kbl\") pod \"calico-typha-5c5b757c75-sb5cs\" (UID: \"192195f2-c55c-48c7-a985-35dbc348d119\") " pod="calico-system/calico-typha-5c5b757c75-sb5cs" Nov 5 23:44:03.986522 kubelet[3539]: I1105 23:44:03.986074 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/192195f2-c55c-48c7-a985-35dbc348d119-tigera-ca-bundle\") pod \"calico-typha-5c5b757c75-sb5cs\" (UID: \"192195f2-c55c-48c7-a985-35dbc348d119\") " pod="calico-system/calico-typha-5c5b757c75-sb5cs" Nov 5 23:44:03.986522 kubelet[3539]: I1105 23:44:03.986112 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/192195f2-c55c-48c7-a985-35dbc348d119-typha-certs\") pod \"calico-typha-5c5b757c75-sb5cs\" (UID: \"192195f2-c55c-48c7-a985-35dbc348d119\") " pod="calico-system/calico-typha-5c5b757c75-sb5cs" Nov 5 23:44:04.125424 systemd[1]: Created slice kubepods-besteffort-poda0c2d129_48a8_4018_b1c0_8b1b53f92e26.slice - libcontainer container kubepods-besteffort-poda0c2d129_48a8_4018_b1c0_8b1b53f92e26.slice. Nov 5 23:44:04.187086 kubelet[3539]: I1105 23:44:04.186853 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a0c2d129-48a8-4018-b1c0-8b1b53f92e26-lib-modules\") pod \"calico-node-z5pzf\" (UID: \"a0c2d129-48a8-4018-b1c0-8b1b53f92e26\") " pod="calico-system/calico-node-z5pzf" Nov 5 23:44:04.187086 kubelet[3539]: I1105 23:44:04.186893 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a0c2d129-48a8-4018-b1c0-8b1b53f92e26-xtables-lock\") pod \"calico-node-z5pzf\" (UID: \"a0c2d129-48a8-4018-b1c0-8b1b53f92e26\") " pod="calico-system/calico-node-z5pzf" Nov 5 23:44:04.187086 kubelet[3539]: I1105 23:44:04.186907 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a0c2d129-48a8-4018-b1c0-8b1b53f92e26-flexvol-driver-host\") pod \"calico-node-z5pzf\" (UID: \"a0c2d129-48a8-4018-b1c0-8b1b53f92e26\") " pod="calico-system/calico-node-z5pzf" Nov 5 23:44:04.187086 kubelet[3539]: I1105 23:44:04.186918 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a0c2d129-48a8-4018-b1c0-8b1b53f92e26-cni-net-dir\") pod \"calico-node-z5pzf\" (UID: \"a0c2d129-48a8-4018-b1c0-8b1b53f92e26\") " pod="calico-system/calico-node-z5pzf" Nov 5 23:44:04.187086 kubelet[3539]: I1105 23:44:04.186929 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a0c2d129-48a8-4018-b1c0-8b1b53f92e26-node-certs\") pod \"calico-node-z5pzf\" (UID: \"a0c2d129-48a8-4018-b1c0-8b1b53f92e26\") " pod="calico-system/calico-node-z5pzf" Nov 5 23:44:04.187305 kubelet[3539]: I1105 23:44:04.186938 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/a0c2d129-48a8-4018-b1c0-8b1b53f92e26-policysync\") pod \"calico-node-z5pzf\" (UID: \"a0c2d129-48a8-4018-b1c0-8b1b53f92e26\") " pod="calico-system/calico-node-z5pzf" Nov 5 23:44:04.187305 kubelet[3539]: I1105 23:44:04.186955 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a0c2d129-48a8-4018-b1c0-8b1b53f92e26-var-lib-calico\") pod \"calico-node-z5pzf\" (UID: \"a0c2d129-48a8-4018-b1c0-8b1b53f92e26\") " pod="calico-system/calico-node-z5pzf" Nov 5 23:44:04.187305 kubelet[3539]: I1105 23:44:04.186965 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flv8q\" (UniqueName: \"kubernetes.io/projected/a0c2d129-48a8-4018-b1c0-8b1b53f92e26-kube-api-access-flv8q\") pod \"calico-node-z5pzf\" (UID: \"a0c2d129-48a8-4018-b1c0-8b1b53f92e26\") " pod="calico-system/calico-node-z5pzf" Nov 5 23:44:04.187305 kubelet[3539]: I1105 23:44:04.186976 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a0c2d129-48a8-4018-b1c0-8b1b53f92e26-cni-bin-dir\") pod \"calico-node-z5pzf\" (UID: \"a0c2d129-48a8-4018-b1c0-8b1b53f92e26\") " pod="calico-system/calico-node-z5pzf" Nov 5 23:44:04.187305 kubelet[3539]: I1105 23:44:04.187002 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a0c2d129-48a8-4018-b1c0-8b1b53f92e26-cni-log-dir\") pod \"calico-node-z5pzf\" (UID: \"a0c2d129-48a8-4018-b1c0-8b1b53f92e26\") " pod="calico-system/calico-node-z5pzf" Nov 5 23:44:04.187420 kubelet[3539]: I1105 23:44:04.187037 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0c2d129-48a8-4018-b1c0-8b1b53f92e26-tigera-ca-bundle\") pod \"calico-node-z5pzf\" (UID: \"a0c2d129-48a8-4018-b1c0-8b1b53f92e26\") " pod="calico-system/calico-node-z5pzf" Nov 5 23:44:04.187420 kubelet[3539]: I1105 23:44:04.187050 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a0c2d129-48a8-4018-b1c0-8b1b53f92e26-var-run-calico\") pod \"calico-node-z5pzf\" (UID: \"a0c2d129-48a8-4018-b1c0-8b1b53f92e26\") " pod="calico-system/calico-node-z5pzf" Nov 5 23:44:04.196421 containerd[1909]: time="2025-11-05T23:44:04.196385268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c5b757c75-sb5cs,Uid:192195f2-c55c-48c7-a985-35dbc348d119,Namespace:calico-system,Attempt:0,}" Nov 5 23:44:04.230705 containerd[1909]: time="2025-11-05T23:44:04.230487855Z" level=info msg="connecting to shim c053ed9243bc4f14299da41d95ab1893d23000df96e01fb6bcce82b7ebc33093" address="unix:///run/containerd/s/bee2d6992e07450921242a88dba0189e8de52b5bd0b3ca09d1d30350872b05d8" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:44:04.252900 systemd[1]: Started cri-containerd-c053ed9243bc4f14299da41d95ab1893d23000df96e01fb6bcce82b7ebc33093.scope - libcontainer container c053ed9243bc4f14299da41d95ab1893d23000df96e01fb6bcce82b7ebc33093. 
Nov 5 23:44:04.290598 kubelet[3539]: E1105 23:44:04.290358 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.290598 kubelet[3539]: W1105 23:44:04.290383 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.290598 kubelet[3539]: E1105 23:44:04.290510 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.291457 kubelet[3539]: E1105 23:44:04.290884 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.291457 kubelet[3539]: W1105 23:44:04.291008 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.291457 kubelet[3539]: E1105 23:44:04.291030 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.291457 kubelet[3539]: E1105 23:44:04.291409 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.291457 kubelet[3539]: W1105 23:44:04.291421 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.291457 kubelet[3539]: E1105 23:44:04.291431 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.291935 kubelet[3539]: E1105 23:44:04.291914 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.291935 kubelet[3539]: W1105 23:44:04.291930 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.292004 kubelet[3539]: E1105 23:44:04.291961 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.294626 kubelet[3539]: E1105 23:44:04.293709 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.294626 kubelet[3539]: W1105 23:44:04.293726 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.295637 kubelet[3539]: E1105 23:44:04.295616 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:04.298272 kubelet[3539]: E1105 23:44:04.297622 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.298272 kubelet[3539]: W1105 23:44:04.298044 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.298272 kubelet[3539]: E1105 23:44:04.298059 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.298820 kubelet[3539]: E1105 23:44:04.298806 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.298926 kubelet[3539]: W1105 23:44:04.298915 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.299098 kubelet[3539]: E1105 23:44:04.299036 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.300364 kubelet[3539]: E1105 23:44:04.300350 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.300461 kubelet[3539]: W1105 23:44:04.300449 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.300533 kubelet[3539]: E1105 23:44:04.300506 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.301056 kubelet[3539]: E1105 23:44:04.300980 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.301493 kubelet[3539]: W1105 23:44:04.301412 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.301493 kubelet[3539]: E1105 23:44:04.301430 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.301917 kubelet[3539]: E1105 23:44:04.301828 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.302304 kubelet[3539]: W1105 23:44:04.302161 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.302304 kubelet[3539]: E1105 23:44:04.302180 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:04.303481 kubelet[3539]: E1105 23:44:04.303352 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.304594 containerd[1909]: time="2025-11-05T23:44:04.304093189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c5b757c75-sb5cs,Uid:192195f2-c55c-48c7-a985-35dbc348d119,Namespace:calico-system,Attempt:0,} returns sandbox id \"c053ed9243bc4f14299da41d95ab1893d23000df96e01fb6bcce82b7ebc33093\"" Nov 5 23:44:04.304719 kubelet[3539]: W1105 23:44:04.304699 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.305668 kubelet[3539]: E1105 23:44:04.305653 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.305902 kubelet[3539]: E1105 23:44:04.305890 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.306041 kubelet[3539]: W1105 23:44:04.305932 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.306041 kubelet[3539]: E1105 23:44:04.305944 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.306248 kubelet[3539]: E1105 23:44:04.306170 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.306248 kubelet[3539]: W1105 23:44:04.306181 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.306248 kubelet[3539]: E1105 23:44:04.306190 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.306510 kubelet[3539]: E1105 23:44:04.306441 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.306510 kubelet[3539]: W1105 23:44:04.306451 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.306510 kubelet[3539]: E1105 23:44:04.306459 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:04.307622 kubelet[3539]: E1105 23:44:04.307133 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.307622 kubelet[3539]: W1105 23:44:04.307146 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.307622 kubelet[3539]: E1105 23:44:04.307156 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.308913 kubelet[3539]: E1105 23:44:04.307874 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.308913 kubelet[3539]: W1105 23:44:04.308844 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.308913 kubelet[3539]: E1105 23:44:04.308860 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.310074 containerd[1909]: time="2025-11-05T23:44:04.310052807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 23:44:04.310830 kubelet[3539]: E1105 23:44:04.310140 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.310830 kubelet[3539]: W1105 23:44:04.310151 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.310830 kubelet[3539]: E1105 23:44:04.310161 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.310830 kubelet[3539]: E1105 23:44:04.310616 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.310830 kubelet[3539]: W1105 23:44:04.310627 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.310830 kubelet[3539]: E1105 23:44:04.310638 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.311817 kubelet[3539]: E1105 23:44:04.311795 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.312117 kubelet[3539]: W1105 23:44:04.311937 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.312117 kubelet[3539]: E1105 23:44:04.311954 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:04.313143 kubelet[3539]: E1105 23:44:04.312564 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fpftq" podUID="427c3b5f-d7ee-4425-8185-ed4318a97b1f" Nov 5 23:44:04.313912 kubelet[3539]: E1105 23:44:04.313849 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.313912 kubelet[3539]: W1105 23:44:04.313871 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.313912 kubelet[3539]: E1105 23:44:04.313882 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.314616 kubelet[3539]: E1105 23:44:04.314393 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.314616 kubelet[3539]: W1105 23:44:04.314408 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.314616 kubelet[3539]: E1105 23:44:04.314419 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.314709 kubelet[3539]: E1105 23:44:04.314683 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.314709 kubelet[3539]: W1105 23:44:04.314693 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.314709 kubelet[3539]: E1105 23:44:04.314704 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.316592 kubelet[3539]: E1105 23:44:04.314868 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.316592 kubelet[3539]: W1105 23:44:04.314882 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.316592 kubelet[3539]: E1105 23:44:04.314891 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:04.316592 kubelet[3539]: E1105 23:44:04.315246 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.316592 kubelet[3539]: W1105 23:44:04.315256 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.316592 kubelet[3539]: E1105 23:44:04.315267 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.316592 kubelet[3539]: E1105 23:44:04.316425 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.316592 kubelet[3539]: W1105 23:44:04.316434 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.316592 kubelet[3539]: E1105 23:44:04.316443 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.316776 kubelet[3539]: E1105 23:44:04.316615 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.316776 kubelet[3539]: W1105 23:44:04.316622 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.316776 kubelet[3539]: E1105 23:44:04.316629 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.322489 kubelet[3539]: E1105 23:44:04.322454 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.322489 kubelet[3539]: W1105 23:44:04.322466 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.322753 kubelet[3539]: E1105 23:44:04.322673 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.379910 kubelet[3539]: E1105 23:44:04.379869 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.379910 kubelet[3539]: W1105 23:44:04.379889 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.380089 kubelet[3539]: E1105 23:44:04.380021 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:04.380289 kubelet[3539]: E1105 23:44:04.380246 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.380426 kubelet[3539]: W1105 23:44:04.380257 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.380426 kubelet[3539]: E1105 23:44:04.380365 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.380608 kubelet[3539]: E1105 23:44:04.380597 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.380751 kubelet[3539]: W1105 23:44:04.380654 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.380751 kubelet[3539]: E1105 23:44:04.380667 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.381036 kubelet[3539]: E1105 23:44:04.380988 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.381036 kubelet[3539]: W1105 23:44:04.381001 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.381036 kubelet[3539]: E1105 23:44:04.381011 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.381931 kubelet[3539]: E1105 23:44:04.381824 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.381931 kubelet[3539]: W1105 23:44:04.381863 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.381931 kubelet[3539]: E1105 23:44:04.381873 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.382177 kubelet[3539]: E1105 23:44:04.382164 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.382318 kubelet[3539]: W1105 23:44:04.382202 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.382318 kubelet[3539]: E1105 23:44:04.382214 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:04.382450 kubelet[3539]: E1105 23:44:04.382439 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.382504 kubelet[3539]: W1105 23:44:04.382495 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.382725 kubelet[3539]: E1105 23:44:04.382604 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.383095 kubelet[3539]: E1105 23:44:04.383029 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.383095 kubelet[3539]: W1105 23:44:04.383041 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.383095 kubelet[3539]: E1105 23:44:04.383051 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.383762 kubelet[3539]: E1105 23:44:04.383735 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.384330 kubelet[3539]: W1105 23:44:04.384259 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.384330 kubelet[3539]: E1105 23:44:04.384284 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.384581 kubelet[3539]: E1105 23:44:04.384551 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.384581 kubelet[3539]: W1105 23:44:04.384562 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.384741 kubelet[3539]: E1105 23:44:04.384667 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.384942 kubelet[3539]: E1105 23:44:04.384878 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.384942 kubelet[3539]: W1105 23:44:04.384889 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.384942 kubelet[3539]: E1105 23:44:04.384897 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:04.385510 kubelet[3539]: E1105 23:44:04.385223 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.385510 kubelet[3539]: W1105 23:44:04.385287 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.385510 kubelet[3539]: E1105 23:44:04.385298 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.385903 kubelet[3539]: E1105 23:44:04.385772 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.385903 kubelet[3539]: W1105 23:44:04.385789 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.385903 kubelet[3539]: E1105 23:44:04.385799 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.386147 kubelet[3539]: E1105 23:44:04.386111 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.386299 kubelet[3539]: W1105 23:44:04.386285 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.386527 kubelet[3539]: E1105 23:44:04.386421 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.387338 kubelet[3539]: E1105 23:44:04.386838 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.387473 kubelet[3539]: W1105 23:44:04.387456 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.387543 kubelet[3539]: E1105 23:44:04.387534 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.388697 kubelet[3539]: E1105 23:44:04.388683 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.388779 kubelet[3539]: W1105 23:44:04.388768 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.388915 kubelet[3539]: E1105 23:44:04.388821 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:04.389015 kubelet[3539]: E1105 23:44:04.389003 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.389073 kubelet[3539]: W1105 23:44:04.389063 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.389187 kubelet[3539]: E1105 23:44:04.389115 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.389279 kubelet[3539]: E1105 23:44:04.389268 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.389351 kubelet[3539]: W1105 23:44:04.389317 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.389351 kubelet[3539]: E1105 23:44:04.389330 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.389663 kubelet[3539]: E1105 23:44:04.389543 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.389663 kubelet[3539]: W1105 23:44:04.389554 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.389663 kubelet[3539]: E1105 23:44:04.389562 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.390140 kubelet[3539]: E1105 23:44:04.390113 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.390359 kubelet[3539]: W1105 23:44:04.390185 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.390359 kubelet[3539]: E1105 23:44:04.390201 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.390489 kubelet[3539]: E1105 23:44:04.390477 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.390560 kubelet[3539]: W1105 23:44:04.390549 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.390640 kubelet[3539]: E1105 23:44:04.390630 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:04.390713 kubelet[3539]: I1105 23:44:04.390702 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/427c3b5f-d7ee-4425-8185-ed4318a97b1f-kubelet-dir\") pod \"csi-node-driver-fpftq\" (UID: \"427c3b5f-d7ee-4425-8185-ed4318a97b1f\") " pod="calico-system/csi-node-driver-fpftq" Nov 5 23:44:04.391565 kubelet[3539]: E1105 23:44:04.391543 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.391565 kubelet[3539]: W1105 23:44:04.391562 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.391651 kubelet[3539]: E1105 23:44:04.391588 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.391830 kubelet[3539]: E1105 23:44:04.391723 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.391830 kubelet[3539]: W1105 23:44:04.391735 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.391830 kubelet[3539]: E1105 23:44:04.391742 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.392036 kubelet[3539]: E1105 23:44:04.392023 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.392178 kubelet[3539]: W1105 23:44:04.392109 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.392178 kubelet[3539]: E1105 23:44:04.392125 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.392322 kubelet[3539]: I1105 23:44:04.392252 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/427c3b5f-d7ee-4425-8185-ed4318a97b1f-varrun\") pod \"csi-node-driver-fpftq\" (UID: \"427c3b5f-d7ee-4425-8185-ed4318a97b1f\") " pod="calico-system/csi-node-driver-fpftq" Nov 5 23:44:04.392376 kubelet[3539]: E1105 23:44:04.392354 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.392376 kubelet[3539]: W1105 23:44:04.392371 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.392428 kubelet[3539]: E1105 23:44:04.392384 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:04.392503 kubelet[3539]: E1105 23:44:04.392490 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.392503 kubelet[3539]: W1105 23:44:04.392499 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.392603 kubelet[3539]: E1105 23:44:04.392505 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.392754 kubelet[3539]: E1105 23:44:04.392737 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.392754 kubelet[3539]: W1105 23:44:04.392750 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.392808 kubelet[3539]: E1105 23:44:04.392762 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.393387 kubelet[3539]: I1105 23:44:04.393352 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/427c3b5f-d7ee-4425-8185-ed4318a97b1f-registration-dir\") pod \"csi-node-driver-fpftq\" (UID: \"427c3b5f-d7ee-4425-8185-ed4318a97b1f\") " pod="calico-system/csi-node-driver-fpftq" Nov 5 23:44:04.393606 kubelet[3539]: E1105 23:44:04.393563 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.393606 kubelet[3539]: W1105 23:44:04.393589 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.393606 kubelet[3539]: E1105 23:44:04.393600 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.393693 kubelet[3539]: I1105 23:44:04.393630 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/427c3b5f-d7ee-4425-8185-ed4318a97b1f-socket-dir\") pod \"csi-node-driver-fpftq\" (UID: \"427c3b5f-d7ee-4425-8185-ed4318a97b1f\") " pod="calico-system/csi-node-driver-fpftq" Nov 5 23:44:04.393801 kubelet[3539]: E1105 23:44:04.393776 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.393801 kubelet[3539]: W1105 23:44:04.393789 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.393801 kubelet[3539]: E1105 23:44:04.393796 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:04.393801 kubelet[3539]: I1105 23:44:04.393810 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqrt5\" (UniqueName: \"kubernetes.io/projected/427c3b5f-d7ee-4425-8185-ed4318a97b1f-kube-api-access-kqrt5\") pod \"csi-node-driver-fpftq\" (UID: \"427c3b5f-d7ee-4425-8185-ed4318a97b1f\") " pod="calico-system/csi-node-driver-fpftq" Nov 5 23:44:04.394115 kubelet[3539]: E1105 23:44:04.394092 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.394193 kubelet[3539]: W1105 23:44:04.394162 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.394271 kubelet[3539]: E1105 23:44:04.394178 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.394457 kubelet[3539]: E1105 23:44:04.394446 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.394611 kubelet[3539]: W1105 23:44:04.394512 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.394611 kubelet[3539]: E1105 23:44:04.394526 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.395099 kubelet[3539]: E1105 23:44:04.394989 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.395099 kubelet[3539]: W1105 23:44:04.395019 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.395099 kubelet[3539]: E1105 23:44:04.395032 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.395282 kubelet[3539]: E1105 23:44:04.395271 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.395392 kubelet[3539]: W1105 23:44:04.395336 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.395392 kubelet[3539]: E1105 23:44:04.395352 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:04.395832 kubelet[3539]: E1105 23:44:04.395565 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.395832 kubelet[3539]: W1105 23:44:04.395612 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.396033 kubelet[3539]: E1105 23:44:04.395920 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.396167 kubelet[3539]: E1105 23:44:04.396157 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.396260 kubelet[3539]: W1105 23:44:04.396228 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.396260 kubelet[3539]: E1105 23:44:04.396242 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.428882 containerd[1909]: time="2025-11-05T23:44:04.428664601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z5pzf,Uid:a0c2d129-48a8-4018-b1c0-8b1b53f92e26,Namespace:calico-system,Attempt:0,}" Nov 5 23:44:04.469654 containerd[1909]: time="2025-11-05T23:44:04.468702614Z" level=info msg="connecting to shim 92501e672b941a92944c29c6c44795a79e93295dac7107471a1bcc3da10bd727" address="unix:///run/containerd/s/9db034db6372c80fdf55d9d1005371b9b1d72ecbb92ccb58af61a56a82637ec1" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:44:04.487702 systemd[1]: Started cri-containerd-92501e672b941a92944c29c6c44795a79e93295dac7107471a1bcc3da10bd727.scope - libcontainer container 92501e672b941a92944c29c6c44795a79e93295dac7107471a1bcc3da10bd727. Nov 5 23:44:04.495560 kubelet[3539]: E1105 23:44:04.495536 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.495560 kubelet[3539]: W1105 23:44:04.495556 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.496029 kubelet[3539]: E1105 23:44:04.495593 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.496104 kubelet[3539]: E1105 23:44:04.496093 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.496124 kubelet[3539]: W1105 23:44:04.496105 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.496124 kubelet[3539]: E1105 23:44:04.496117 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:04.496614 kubelet[3539]: E1105 23:44:04.496497 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.496614 kubelet[3539]: W1105 23:44:04.496609 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.496752 kubelet[3539]: E1105 23:44:04.496623 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.496933 kubelet[3539]: E1105 23:44:04.496918 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.496933 kubelet[3539]: W1105 23:44:04.496930 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.496992 kubelet[3539]: E1105 23:44:04.496940 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.497416 kubelet[3539]: E1105 23:44:04.497398 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.497416 kubelet[3539]: W1105 23:44:04.497414 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.497467 kubelet[3539]: E1105 23:44:04.497424 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.497758 kubelet[3539]: E1105 23:44:04.497739 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.497758 kubelet[3539]: W1105 23:44:04.497755 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.497873 kubelet[3539]: E1105 23:44:04.497765 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.498956 kubelet[3539]: E1105 23:44:04.498937 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.498956 kubelet[3539]: W1105 23:44:04.498953 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.499054 kubelet[3539]: E1105 23:44:04.499035 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:04.499456 kubelet[3539]: E1105 23:44:04.499438 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.499456 kubelet[3539]: W1105 23:44:04.499453 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.499514 kubelet[3539]: E1105 23:44:04.499463 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.499618 kubelet[3539]: E1105 23:44:04.499605 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.499618 kubelet[3539]: W1105 23:44:04.499614 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.499710 kubelet[3539]: E1105 23:44:04.499622 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.500147 kubelet[3539]: E1105 23:44:04.500109 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.500147 kubelet[3539]: W1105 23:44:04.500122 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.500147 kubelet[3539]: E1105 23:44:04.500133 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.500340 kubelet[3539]: E1105 23:44:04.500277 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.500340 kubelet[3539]: W1105 23:44:04.500285 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.500340 kubelet[3539]: E1105 23:44:04.500292 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.500509 kubelet[3539]: E1105 23:44:04.500416 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.500509 kubelet[3539]: W1105 23:44:04.500423 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.500509 kubelet[3539]: E1105 23:44:04.500430 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:04.500766 kubelet[3539]: E1105 23:44:04.500557 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.500766 kubelet[3539]: W1105 23:44:04.500564 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.500766 kubelet[3539]: E1105 23:44:04.500590 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.500766 kubelet[3539]: E1105 23:44:04.500704 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.500766 kubelet[3539]: W1105 23:44:04.500710 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.500766 kubelet[3539]: E1105 23:44:04.500716 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.501019 kubelet[3539]: E1105 23:44:04.500857 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.501019 kubelet[3539]: W1105 23:44:04.500873 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.501019 kubelet[3539]: E1105 23:44:04.500879 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.501194 kubelet[3539]: E1105 23:44:04.501173 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.501725 kubelet[3539]: W1105 23:44:04.501188 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.501763 kubelet[3539]: E1105 23:44:04.501730 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.502864 kubelet[3539]: E1105 23:44:04.502765 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.502864 kubelet[3539]: W1105 23:44:04.502863 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.502944 kubelet[3539]: E1105 23:44:04.502877 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:04.503246 kubelet[3539]: E1105 23:44:04.503227 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.503246 kubelet[3539]: W1105 23:44:04.503243 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.503313 kubelet[3539]: E1105 23:44:04.503256 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.503835 kubelet[3539]: E1105 23:44:04.503810 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.503835 kubelet[3539]: W1105 23:44:04.503828 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.503835 kubelet[3539]: E1105 23:44:04.503838 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.505918 kubelet[3539]: E1105 23:44:04.505897 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.505918 kubelet[3539]: W1105 23:44:04.505912 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.506001 kubelet[3539]: E1105 23:44:04.505923 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.506460 kubelet[3539]: E1105 23:44:04.506442 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.506460 kubelet[3539]: W1105 23:44:04.506456 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.506536 kubelet[3539]: E1105 23:44:04.506466 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.506760 kubelet[3539]: E1105 23:44:04.506739 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.506760 kubelet[3539]: W1105 23:44:04.506754 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.506870 kubelet[3539]: E1105 23:44:04.506764 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:04.507498 kubelet[3539]: E1105 23:44:04.507454 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.507672 kubelet[3539]: W1105 23:44:04.507652 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.507672 kubelet[3539]: E1105 23:44:04.507672 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.508950 kubelet[3539]: E1105 23:44:04.508692 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.508950 kubelet[3539]: W1105 23:44:04.508707 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.508950 kubelet[3539]: E1105 23:44:04.508730 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.508950 kubelet[3539]: E1105 23:44:04.508888 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.508950 kubelet[3539]: W1105 23:44:04.508895 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.508950 kubelet[3539]: E1105 23:44:04.508903 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:04.511227 kubelet[3539]: E1105 23:44:04.511205 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:04.511227 kubelet[3539]: W1105 23:44:04.511220 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:04.511227 kubelet[3539]: E1105 23:44:04.511230 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:04.517476 containerd[1909]: time="2025-11-05T23:44:04.517449761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z5pzf,Uid:a0c2d129-48a8-4018-b1c0-8b1b53f92e26,Namespace:calico-system,Attempt:0,} returns sandbox id \"92501e672b941a92944c29c6c44795a79e93295dac7107471a1bcc3da10bd727\"" Nov 5 23:44:06.634629 kubelet[3539]: E1105 23:44:06.634454 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fpftq" podUID="427c3b5f-d7ee-4425-8185-ed4318a97b1f" Nov 5 23:44:07.549376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3067049894.mount: Deactivated successfully. Nov 5 23:44:08.470996 containerd[1909]: time="2025-11-05T23:44:08.470934137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:08.474518 containerd[1909]: time="2025-11-05T23:44:08.474245325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Nov 5 23:44:08.519195 containerd[1909]: time="2025-11-05T23:44:08.519136841Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:08.582411 containerd[1909]: time="2025-11-05T23:44:08.582338102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:08.583066 containerd[1909]: time="2025-11-05T23:44:08.582769521Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 4.272417056s" Nov 5 23:44:08.583066 containerd[1909]: time="2025-11-05T23:44:08.582798905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 5 23:44:08.583699 containerd[1909]: time="2025-11-05T23:44:08.583675888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 5 23:44:08.633406 containerd[1909]: time="2025-11-05T23:44:08.632413342Z" level=info msg="CreateContainer within sandbox \"c053ed9243bc4f14299da41d95ab1893d23000df96e01fb6bcce82b7ebc33093\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 5 23:44:08.634940 kubelet[3539]: E1105 23:44:08.634907 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fpftq" podUID="427c3b5f-d7ee-4425-8185-ed4318a97b1f" Nov 5 23:44:08.823612 containerd[1909]: time="2025-11-05T23:44:08.822177801Z" level=info msg="Container 5b2a4b58dee128baf9b22b59e16d4cae2c9033d649fa007e9b9de97325934ef3: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:44:08.982971 containerd[1909]: 
time="2025-11-05T23:44:08.982913515Z" level=info msg="CreateContainer within sandbox \"c053ed9243bc4f14299da41d95ab1893d23000df96e01fb6bcce82b7ebc33093\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5b2a4b58dee128baf9b22b59e16d4cae2c9033d649fa007e9b9de97325934ef3\"" Nov 5 23:44:08.983721 containerd[1909]: time="2025-11-05T23:44:08.983504202Z" level=info msg="StartContainer for \"5b2a4b58dee128baf9b22b59e16d4cae2c9033d649fa007e9b9de97325934ef3\"" Nov 5 23:44:08.984639 containerd[1909]: time="2025-11-05T23:44:08.984605534Z" level=info msg="connecting to shim 5b2a4b58dee128baf9b22b59e16d4cae2c9033d649fa007e9b9de97325934ef3" address="unix:///run/containerd/s/bee2d6992e07450921242a88dba0189e8de52b5bd0b3ca09d1d30350872b05d8" protocol=ttrpc version=3 Nov 5 23:44:09.007714 systemd[1]: Started cri-containerd-5b2a4b58dee128baf9b22b59e16d4cae2c9033d649fa007e9b9de97325934ef3.scope - libcontainer container 5b2a4b58dee128baf9b22b59e16d4cae2c9033d649fa007e9b9de97325934ef3. Nov 5 23:44:09.121737 containerd[1909]: time="2025-11-05T23:44:09.121316382Z" level=info msg="StartContainer for \"5b2a4b58dee128baf9b22b59e16d4cae2c9033d649fa007e9b9de97325934ef3\" returns successfully" Nov 5 23:44:09.823842 kubelet[3539]: E1105 23:44:09.823715 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:09.823842 kubelet[3539]: W1105 23:44:09.823745 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:09.823842 kubelet[3539]: E1105 23:44:09.823769 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:09.824493 kubelet[3539]: E1105 23:44:09.824349 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:09.824493 kubelet[3539]: W1105 23:44:09.824363 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:09.824493 kubelet[3539]: E1105 23:44:09.824406 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:09.824790 kubelet[3539]: E1105 23:44:09.824681 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:09.824790 kubelet[3539]: W1105 23:44:09.824693 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:09.824790 kubelet[3539]: E1105 23:44:09.824706 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:09.824942 kubelet[3539]: E1105 23:44:09.824929 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:09.824995 kubelet[3539]: W1105 23:44:09.824985 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:09.825047 kubelet[3539]: E1105 23:44:09.825035 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The same three-message FlexVolume probe failure (driver-call.go:262, driver-call.go:149, plugins.go:703) repeats continuously from 23:44:09.825 through 23:44:09.836; identical repetitions elided.]
Nov 5 23:44:10.879017 kubelet[3539]: E1105 23:44:10.634890 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fpftq" podUID="427c3b5f-d7ee-4425-8185-ed4318a97b1f" Nov 5 23:44:10.879017 kubelet[3539]: I1105 23:44:10.728027 3539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
[The same FlexVolume probe failure triplet then repeats from 23:44:10.733 onward; identical repetitions elided, keeping only the final triplet below.]
Nov 5 23:44:10.880736 kubelet[3539]: E1105 23:44:10.740747 3539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:10.880736 kubelet[3539]: W1105 23:44:10.740754 3539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:10.880870 kubelet[3539]: E1105 23:44:10.740762 3539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:12.635073 kubelet[3539]: E1105 23:44:12.634865 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fpftq" podUID="427c3b5f-d7ee-4425-8185-ed4318a97b1f" Nov 5 23:44:12.729175 containerd[1909]: time="2025-11-05T23:44:12.729120175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:12.774231 containerd[1909]: time="2025-11-05T23:44:12.774030924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Nov 5 23:44:12.777547 containerd[1909]: time="2025-11-05T23:44:12.777517733Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:12.824327 containerd[1909]: time="2025-11-05T23:44:12.823881980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:12.824441 containerd[1909]: time="2025-11-05T23:44:12.824348749Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 4.240648188s" Nov 5 23:44:12.824441 containerd[1909]: time="2025-11-05T23:44:12.824373782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 5 23:44:12.867315 containerd[1909]: time="2025-11-05T23:44:12.867223659Z" level=info msg="CreateContainer within sandbox \"92501e672b941a92944c29c6c44795a79e93295dac7107471a1bcc3da10bd727\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 5 23:44:13.030835 containerd[1909]: time="2025-11-05T23:44:13.030671943Z" level=info msg="Container 9f9f87b4d7f56b431011bf11b44e456b3217ed1176367500aa43444d6d91986c: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:44:13.175879 containerd[1909]: time="2025-11-05T23:44:13.175835119Z" level=info msg="CreateContainer within sandbox \"92501e672b941a92944c29c6c44795a79e93295dac7107471a1bcc3da10bd727\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9f9f87b4d7f56b431011bf11b44e456b3217ed1176367500aa43444d6d91986c\"" Nov 5 23:44:13.176711 containerd[1909]: time="2025-11-05T23:44:13.176682530Z" level=info msg="StartContainer for \"9f9f87b4d7f56b431011bf11b44e456b3217ed1176367500aa43444d6d91986c\"" Nov 5 23:44:13.177815 containerd[1909]: time="2025-11-05T23:44:13.177693761Z" level=info msg="connecting to shim 9f9f87b4d7f56b431011bf11b44e456b3217ed1176367500aa43444d6d91986c" address="unix:///run/containerd/s/9db034db6372c80fdf55d9d1005371b9b1d72ecbb92ccb58af61a56a82637ec1" protocol=ttrpc version=3 Nov 5 23:44:13.199713 systemd[1]: Started 
cri-containerd-9f9f87b4d7f56b431011bf11b44e456b3217ed1176367500aa43444d6d91986c.scope - libcontainer container 9f9f87b4d7f56b431011bf11b44e456b3217ed1176367500aa43444d6d91986c. Nov 5 23:44:13.230691 containerd[1909]: time="2025-11-05T23:44:13.230648127Z" level=info msg="StartContainer for \"9f9f87b4d7f56b431011bf11b44e456b3217ed1176367500aa43444d6d91986c\" returns successfully" Nov 5 23:44:13.238605 systemd[1]: cri-containerd-9f9f87b4d7f56b431011bf11b44e456b3217ed1176367500aa43444d6d91986c.scope: Deactivated successfully. Nov 5 23:44:13.241761 containerd[1909]: time="2025-11-05T23:44:13.241720539Z" level=info msg="received exit event container_id:\"9f9f87b4d7f56b431011bf11b44e456b3217ed1176367500aa43444d6d91986c\" id:\"9f9f87b4d7f56b431011bf11b44e456b3217ed1176367500aa43444d6d91986c\" pid:4270 exited_at:{seconds:1762386253 nanos:241300294}" Nov 5 23:44:13.241949 containerd[1909]: time="2025-11-05T23:44:13.241919953Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f9f87b4d7f56b431011bf11b44e456b3217ed1176367500aa43444d6d91986c\" id:\"9f9f87b4d7f56b431011bf11b44e456b3217ed1176367500aa43444d6d91986c\" pid:4270 exited_at:{seconds:1762386253 nanos:241300294}" Nov 5 23:44:13.261470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f9f87b4d7f56b431011bf11b44e456b3217ed1176367500aa43444d6d91986c-rootfs.mount: Deactivated successfully. Nov 5 23:44:13.748654 kubelet[3539]: I1105 23:44:13.748549 3539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c5b757c75-sb5cs" podStartSLOduration=6.474820935 podStartE2EDuration="10.748531588s" podCreationTimestamp="2025-11-05 23:44:03 +0000 UTC" firstStartedPulling="2025-11-05 23:44:04.309796134 +0000 UTC m=+26.768150889" lastFinishedPulling="2025-11-05 23:44:08.583506683 +0000 UTC m=+31.041861542" observedRunningTime="2025-11-05 23:44:09.739213756 +0000 UTC m=+32.197568519" watchObservedRunningTime="2025-11-05 23:44:13.748531588 +0000 UTC m=+36.206886343" Nov 5 23:44:14.634993 kubelet[3539]: E1105 23:44:14.634937 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fpftq" podUID="427c3b5f-d7ee-4425-8185-ed4318a97b1f" Nov 5 23:44:16.634789 kubelet[3539]: E1105 23:44:16.634687 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fpftq" podUID="427c3b5f-d7ee-4425-8185-ed4318a97b1f" Nov 5 23:44:18.635095 kubelet[3539]: E1105 23:44:18.635040 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fpftq" podUID="427c3b5f-d7ee-4425-8185-ed4318a97b1f" Nov 5 23:44:20.635086 kubelet[3539]: E1105 23:44:20.634725 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fpftq" podUID="427c3b5f-d7ee-4425-8185-ed4318a97b1f" Nov 5 23:44:20.749367 
containerd[1909]: time="2025-11-05T23:44:20.748214209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 5 23:44:22.634653 kubelet[3539]: E1105 23:44:22.634580 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fpftq" podUID="427c3b5f-d7ee-4425-8185-ed4318a97b1f" Nov 5 23:44:22.888748 containerd[1909]: time="2025-11-05T23:44:22.888480368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:22.894909 containerd[1909]: time="2025-11-05T23:44:22.894869758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 5 23:44:22.899285 containerd[1909]: time="2025-11-05T23:44:22.899235499Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:22.903379 containerd[1909]: time="2025-11-05T23:44:22.903338775Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:22.904289 containerd[1909]: time="2025-11-05T23:44:22.904205387Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.155954696s" Nov 5 23:44:22.904289 containerd[1909]: time="2025-11-05T23:44:22.904229971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 5 23:44:22.911061 containerd[1909]: time="2025-11-05T23:44:22.910986845Z" level=info msg="CreateContainer within sandbox \"92501e672b941a92944c29c6c44795a79e93295dac7107471a1bcc3da10bd727\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 5 23:44:22.935215 containerd[1909]: time="2025-11-05T23:44:22.934561580Z" level=info msg="Container 2d6c21bbab9212e83b780f1dfa7333a8ea97de7ef10734e0518194ce9587f711: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:44:22.956407 containerd[1909]: time="2025-11-05T23:44:22.956361898Z" level=info msg="CreateContainer within sandbox \"92501e672b941a92944c29c6c44795a79e93295dac7107471a1bcc3da10bd727\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2d6c21bbab9212e83b780f1dfa7333a8ea97de7ef10734e0518194ce9587f711\"" Nov 5 23:44:22.957135 containerd[1909]: time="2025-11-05T23:44:22.957106418Z" level=info msg="StartContainer for \"2d6c21bbab9212e83b780f1dfa7333a8ea97de7ef10734e0518194ce9587f711\"" Nov 5 23:44:22.958316 containerd[1909]: time="2025-11-05T23:44:22.958276983Z" level=info msg="connecting to shim 2d6c21bbab9212e83b780f1dfa7333a8ea97de7ef10734e0518194ce9587f711" address="unix:///run/containerd/s/9db034db6372c80fdf55d9d1005371b9b1d72ecbb92ccb58af61a56a82637ec1" protocol=ttrpc version=3 Nov 5 23:44:22.978711 systemd[1]: Started cri-containerd-2d6c21bbab9212e83b780f1dfa7333a8ea97de7ef10734e0518194ce9587f711.scope - 
libcontainer container 2d6c21bbab9212e83b780f1dfa7333a8ea97de7ef10734e0518194ce9587f711. Nov 5 23:44:23.009627 containerd[1909]: time="2025-11-05T23:44:23.009555586Z" level=info msg="StartContainer for \"2d6c21bbab9212e83b780f1dfa7333a8ea97de7ef10734e0518194ce9587f711\" returns successfully" Nov 5 23:44:24.635076 kubelet[3539]: E1105 23:44:24.635019 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fpftq" podUID="427c3b5f-d7ee-4425-8185-ed4318a97b1f" Nov 5 23:44:26.634580 kubelet[3539]: E1105 23:44:26.634527 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fpftq" podUID="427c3b5f-d7ee-4425-8185-ed4318a97b1f" Nov 5 23:44:27.736587 containerd[1909]: time="2025-11-05T23:44:27.736534339Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 23:44:27.738546 systemd[1]: cri-containerd-2d6c21bbab9212e83b780f1dfa7333a8ea97de7ef10734e0518194ce9587f711.scope: Deactivated successfully. Nov 5 23:44:27.739479 systemd[1]: cri-containerd-2d6c21bbab9212e83b780f1dfa7333a8ea97de7ef10734e0518194ce9587f711.scope: Consumed 320ms CPU time, 186.2M memory peak, 165.9M written to disk. Nov 5 23:44:27.742316 containerd[1909]: time="2025-11-05T23:44:27.742281004Z" level=info msg="received exit event container_id:\"2d6c21bbab9212e83b780f1dfa7333a8ea97de7ef10734e0518194ce9587f711\" id:\"2d6c21bbab9212e83b780f1dfa7333a8ea97de7ef10734e0518194ce9587f711\" pid:4335 exited_at:{seconds:1762386267 nanos:742047085}" Nov 5 23:44:27.742558 containerd[1909]: time="2025-11-05T23:44:27.742432713Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d6c21bbab9212e83b780f1dfa7333a8ea97de7ef10734e0518194ce9587f711\" id:\"2d6c21bbab9212e83b780f1dfa7333a8ea97de7ef10734e0518194ce9587f711\" pid:4335 exited_at:{seconds:1762386267 nanos:742047085}" Nov 5 23:44:27.757129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d6c21bbab9212e83b780f1dfa7333a8ea97de7ef10734e0518194ce9587f711-rootfs.mount: Deactivated successfully. Nov 5 23:44:27.822957 kubelet[3539]: I1105 23:44:27.822898 3539 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 5 23:44:32.329709 systemd[1]: Created slice kubepods-burstable-pod4d92b57a_8924_49e1_8363_5659eabb3319.slice - libcontainer container kubepods-burstable-pod4d92b57a_8924_49e1_8363_5659eabb3319.slice. 
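A note on the kubelet noise at 23:44:09 and 23:44:10 above: it is a single failure repeated. The FlexVolume prober execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary does not exist yet, so the call returns empty output and decoding that output as JSON fails with "unexpected end of JSON input". The flexvol-driver container started at 23:44:13 (from the pod2daemon-flexvol image pulled just before it) is the Calico component that installs that uds driver. For reference, a FlexVolume driver is expected to answer init with a small JSON status object on stdout; the sketch below is a hypothetical minimal driver written for illustration, not the nodeagent~uds binary itself.

```go
// flexvol-init-sketch: a minimal FlexVolume driver handling the "init" call.
// Illustrative only. The kubelet execs `<driver> init` and expects a JSON
// status object on stdout; an empty reply is what produces the
// "unexpected end of JSON input" errors in this log.
package main

import (
	"encoding/json"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`                 // "Success", "Failure" or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"` // e.g. {"attach": false}
}

func main() {
	out := json.NewEncoder(os.Stdout)
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Report a usable driver that does not implement attach/detach.
		out.Encode(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		return
	}
	// Every other call must still answer with valid JSON.
	out.Encode(driverStatus{Status: "Not supported"})
}
```

An empty reply, or a missing binary as in this log, is exactly what trips the unmarshal step and produces the repeated probe errors.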
Nov 5 23:44:32.365228 kubelet[3539]: I1105 23:44:32.365193 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d92b57a-8924-49e1-8363-5659eabb3319-config-volume\") pod \"coredns-674b8bbfcf-4f9tn\" (UID: \"4d92b57a-8924-49e1-8363-5659eabb3319\") " pod="kube-system/coredns-674b8bbfcf-4f9tn" Nov 5 23:44:32.365668 kubelet[3539]: I1105 23:44:32.365617 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xvkr\" (UniqueName: \"kubernetes.io/projected/4d92b57a-8924-49e1-8363-5659eabb3319-kube-api-access-5xvkr\") pod \"coredns-674b8bbfcf-4f9tn\" (UID: \"4d92b57a-8924-49e1-8363-5659eabb3319\") " pod="kube-system/coredns-674b8bbfcf-4f9tn" Nov 5 23:44:32.532096 systemd[1]: Created slice kubepods-besteffort-pod427c3b5f_d7ee_4425_8185_ed4318a97b1f.slice - libcontainer container kubepods-besteffort-pod427c3b5f_d7ee_4425_8185_ed4318a97b1f.slice. Nov 5 23:44:32.534159 containerd[1909]: time="2025-11-05T23:44:32.534123217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fpftq,Uid:427c3b5f-d7ee-4425-8185-ed4318a97b1f,Namespace:calico-system,Attempt:0,}" Nov 5 23:44:32.536893 systemd[1]: Created slice kubepods-besteffort-podd5bad4f5_f9c2_41af_9172_91bd2034e4ca.slice - libcontainer container kubepods-besteffort-podd5bad4f5_f9c2_41af_9172_91bd2034e4ca.slice. Nov 5 23:44:32.566915 kubelet[3539]: I1105 23:44:32.566813 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d5bad4f5-f9c2-41af-9172-91bd2034e4ca-whisker-backend-key-pair\") pod \"whisker-57bf584d79-5fnzf\" (UID: \"d5bad4f5-f9c2-41af-9172-91bd2034e4ca\") " pod="calico-system/whisker-57bf584d79-5fnzf" Nov 5 23:44:32.566915 kubelet[3539]: I1105 23:44:32.566865 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzf6k\" (UniqueName: \"kubernetes.io/projected/d5bad4f5-f9c2-41af-9172-91bd2034e4ca-kube-api-access-lzf6k\") pod \"whisker-57bf584d79-5fnzf\" (UID: \"d5bad4f5-f9c2-41af-9172-91bd2034e4ca\") " pod="calico-system/whisker-57bf584d79-5fnzf" Nov 5 23:44:32.566915 kubelet[3539]: I1105 23:44:32.566883 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5bad4f5-f9c2-41af-9172-91bd2034e4ca-whisker-ca-bundle\") pod \"whisker-57bf584d79-5fnzf\" (UID: \"d5bad4f5-f9c2-41af-9172-91bd2034e4ca\") " pod="calico-system/whisker-57bf584d79-5fnzf" Nov 5 23:44:32.583441 systemd[1]: Created slice kubepods-besteffort-pod9f589f34_97ee_4d82_b7d6_bdd22dcbc743.slice - libcontainer container kubepods-besteffort-pod9f589f34_97ee_4d82_b7d6_bdd22dcbc743.slice. Nov 5 23:44:32.632493 containerd[1909]: time="2025-11-05T23:44:32.632441448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4f9tn,Uid:4d92b57a-8924-49e1-8363-5659eabb3319,Namespace:kube-system,Attempt:0,}" Nov 5 23:44:32.690287 systemd[1]: Created slice kubepods-besteffort-pod1177c853_8b74_4ffe_9eed_6c7edaf39ab6.slice - libcontainer container kubepods-besteffort-pod1177c853_8b74_4ffe_9eed_6c7edaf39ab6.slice. 
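A quick consistency check on the pod_startup_latency_tracker entry at 23:44:13.748 above: its two figures line up if podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that value minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The sketch below only re-derives the numbers from the timestamps printed in that entry; it is an illustration of the arithmetic, not kubelet code.

```go
// startup-latency-sketch: re-derive the two durations reported by
// pod_startup_latency_tracker at 23:44:13.748 for calico-typha-5c5b757c75-sb5cs.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05 -0700 MST" // matches the timestamps in the log entry

func mustParse(v string) time.Time {
	t, err := time.Parse(layout, v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-11-05 23:44:03 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2025-11-05 23:44:04.309796134 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2025-11-05 23:44:08.583506683 +0000 UTC")  // lastFinishedPulling
	observed := mustParse("2025-11-05 23:44:13.748531588 +0000 UTC")  // watchObservedRunningTime

	e2e := observed.Sub(created)         // full creation-to-observed-running time
	slo := e2e - lastPull.Sub(firstPull) // the same, minus time spent pulling images

	fmt.Println("podStartE2EDuration:", e2e) // 10.748531588s
	fmt.Println("podStartSLOduration:", slo) // 6.474820935s
}
```

Both printed values match the log entry exactly, which is what makes the interpretation above plausible.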
Nov 5 23:44:32.721664 containerd[1909]: time="2025-11-05T23:44:32.721624865Z" level=error msg="Failed to destroy network for sandbox \"1e248785f3a759b3e5d4d6c179fa0f59ea736809896a2c8672ee6d08a593de9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:32.722927 systemd[1]: run-netns-cni\x2d9653b65a\x2d76d9\x2dac66\x2d3b00\x2d2281e6a6a7fa.mount: Deactivated successfully. Nov 5 23:44:32.732170 containerd[1909]: time="2025-11-05T23:44:32.732129098Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fpftq,Uid:427c3b5f-d7ee-4425-8185-ed4318a97b1f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e248785f3a759b3e5d4d6c179fa0f59ea736809896a2c8672ee6d08a593de9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:32.776163 kubelet[3539]: E1105 23:44:32.732403 3539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e248785f3a759b3e5d4d6c179fa0f59ea736809896a2c8672ee6d08a593de9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:32.776163 kubelet[3539]: E1105 23:44:32.732445 3539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e248785f3a759b3e5d4d6c179fa0f59ea736809896a2c8672ee6d08a593de9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fpftq" Nov 5 23:44:32.776163 kubelet[3539]: E1105 23:44:32.732459 3539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e248785f3a759b3e5d4d6c179fa0f59ea736809896a2c8672ee6d08a593de9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fpftq" Nov 5 23:44:32.737890 systemd[1]: Created slice kubepods-besteffort-pod486c3bf3_5c4f_4ba5_b692_994994d35c51.slice - libcontainer container kubepods-besteffort-pod486c3bf3_5c4f_4ba5_b692_994994d35c51.slice. 
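The RunPodSandbox failure above, and the identical ones that follow, all bottom out in one missing file: the Calico CNI plugin reads the node name from /var/lib/calico/nodename, and that file only exists once the calico/node container is running and has mounted /var/lib/calico/ from the host. A stripped-down sketch of that check, for illustration only (this is not Calico's actual implementation):

```go
// nodename-check-sketch: the shape of the check behind the
// "stat /var/lib/calico/nodename: no such file or directory" sandbox errors.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

// calicoNodeName returns the node name written by calico/node, or an error
// that mirrors the hint printed in this log.
func calicoNodeName() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := calicoNodeName()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("calico node name:", name)
}
```

Until calico/node is up, every sandbox that needs pod networking fails this way and the kubelet keeps retrying, which is what the CreatePodSandboxError entries below record.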
Nov 5 23:44:32.776317 kubelet[3539]: E1105 23:44:32.732493 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fpftq_calico-system(427c3b5f-d7ee-4425-8185-ed4318a97b1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fpftq_calico-system(427c3b5f-d7ee-4425-8185-ed4318a97b1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e248785f3a759b3e5d4d6c179fa0f59ea736809896a2c8672ee6d08a593de9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fpftq" podUID="427c3b5f-d7ee-4425-8185-ed4318a97b1f" Nov 5 23:44:32.776317 kubelet[3539]: I1105 23:44:32.767883 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44jk5\" (UniqueName: \"kubernetes.io/projected/486c3bf3-5c4f-4ba5-b692-994994d35c51-kube-api-access-44jk5\") pod \"calico-apiserver-c6d9d55f-625lj\" (UID: \"486c3bf3-5c4f-4ba5-b692-994994d35c51\") " pod="calico-apiserver/calico-apiserver-c6d9d55f-625lj" Nov 5 23:44:32.776317 kubelet[3539]: I1105 23:44:32.767909 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1177c853-8b74-4ffe-9eed-6c7edaf39ab6-tigera-ca-bundle\") pod \"calico-kube-controllers-7769f64cbc-fbmnx\" (UID: \"1177c853-8b74-4ffe-9eed-6c7edaf39ab6\") " pod="calico-system/calico-kube-controllers-7769f64cbc-fbmnx" Nov 5 23:44:32.776317 kubelet[3539]: I1105 23:44:32.767927 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9f589f34-97ee-4d82-b7d6-bdd22dcbc743-calico-apiserver-certs\") pod \"calico-apiserver-66b68578ff-w27fl\" (UID: \"9f589f34-97ee-4d82-b7d6-bdd22dcbc743\") " pod="calico-apiserver/calico-apiserver-66b68578ff-w27fl" Nov 5 23:44:32.776404 kubelet[3539]: I1105 23:44:32.767938 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hngt8\" (UniqueName: \"kubernetes.io/projected/1177c853-8b74-4ffe-9eed-6c7edaf39ab6-kube-api-access-hngt8\") pod \"calico-kube-controllers-7769f64cbc-fbmnx\" (UID: \"1177c853-8b74-4ffe-9eed-6c7edaf39ab6\") " pod="calico-system/calico-kube-controllers-7769f64cbc-fbmnx" Nov 5 23:44:32.776404 kubelet[3539]: I1105 23:44:32.767965 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hzvk\" (UniqueName: \"kubernetes.io/projected/9f589f34-97ee-4d82-b7d6-bdd22dcbc743-kube-api-access-7hzvk\") pod \"calico-apiserver-66b68578ff-w27fl\" (UID: \"9f589f34-97ee-4d82-b7d6-bdd22dcbc743\") " pod="calico-apiserver/calico-apiserver-66b68578ff-w27fl" Nov 5 23:44:32.776404 kubelet[3539]: I1105 23:44:32.767991 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/486c3bf3-5c4f-4ba5-b692-994994d35c51-calico-apiserver-certs\") pod \"calico-apiserver-c6d9d55f-625lj\" (UID: \"486c3bf3-5c4f-4ba5-b692-994994d35c51\") " pod="calico-apiserver/calico-apiserver-c6d9d55f-625lj" Nov 5 23:44:32.808322 containerd[1909]: time="2025-11-05T23:44:32.808243537Z" level=error msg="Failed to destroy network for sandbox 
\"b9f562d0f73caa281813b905fbc3eeafe1a1985ecd8c3b9ff62fd314fb26430a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:32.836330 systemd[1]: Created slice kubepods-besteffort-podffd109d6_81d2_474d_9a1e_5493102832d2.slice - libcontainer container kubepods-besteffort-podffd109d6_81d2_474d_9a1e_5493102832d2.slice. Nov 5 23:44:32.841821 containerd[1909]: time="2025-11-05T23:44:32.841785670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57bf584d79-5fnzf,Uid:d5bad4f5-f9c2-41af-9172-91bd2034e4ca,Namespace:calico-system,Attempt:0,}" Nov 5 23:44:32.869104 kubelet[3539]: I1105 23:44:32.869066 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffd109d6-81d2-474d-9a1e-5493102832d2-goldmane-ca-bundle\") pod \"goldmane-666569f655-g8mz5\" (UID: \"ffd109d6-81d2-474d-9a1e-5493102832d2\") " pod="calico-system/goldmane-666569f655-g8mz5" Nov 5 23:44:32.869413 kubelet[3539]: I1105 23:44:32.869389 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg9nk\" (UniqueName: \"kubernetes.io/projected/ffd109d6-81d2-474d-9a1e-5493102832d2-kube-api-access-dg9nk\") pod \"goldmane-666569f655-g8mz5\" (UID: \"ffd109d6-81d2-474d-9a1e-5493102832d2\") " pod="calico-system/goldmane-666569f655-g8mz5" Nov 5 23:44:32.869465 kubelet[3539]: I1105 23:44:32.869447 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffd109d6-81d2-474d-9a1e-5493102832d2-config\") pod \"goldmane-666569f655-g8mz5\" (UID: \"ffd109d6-81d2-474d-9a1e-5493102832d2\") " pod="calico-system/goldmane-666569f655-g8mz5" Nov 5 23:44:32.869836 kubelet[3539]: I1105 23:44:32.869587 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a6cbd4a-e5b2-4ca3-8269-d7430053336d-config-volume\") pod \"coredns-674b8bbfcf-vf7rk\" (UID: \"1a6cbd4a-e5b2-4ca3-8269-d7430053336d\") " pod="kube-system/coredns-674b8bbfcf-vf7rk" Nov 5 23:44:32.869836 kubelet[3539]: I1105 23:44:32.869612 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fg98\" (UniqueName: \"kubernetes.io/projected/1a6cbd4a-e5b2-4ca3-8269-d7430053336d-kube-api-access-4fg98\") pod \"coredns-674b8bbfcf-vf7rk\" (UID: \"1a6cbd4a-e5b2-4ca3-8269-d7430053336d\") " pod="kube-system/coredns-674b8bbfcf-vf7rk" Nov 5 23:44:32.869836 kubelet[3539]: I1105 23:44:32.869626 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ffd109d6-81d2-474d-9a1e-5493102832d2-goldmane-key-pair\") pod \"goldmane-666569f655-g8mz5\" (UID: \"ffd109d6-81d2-474d-9a1e-5493102832d2\") " pod="calico-system/goldmane-666569f655-g8mz5" Nov 5 23:44:32.870428 containerd[1909]: time="2025-11-05T23:44:32.870339303Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4f9tn,Uid:4d92b57a-8924-49e1-8363-5659eabb3319,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f562d0f73caa281813b905fbc3eeafe1a1985ecd8c3b9ff62fd314fb26430a\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:32.871477 kubelet[3539]: E1105 23:44:32.871451 3539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f562d0f73caa281813b905fbc3eeafe1a1985ecd8c3b9ff62fd314fb26430a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:32.919241 kubelet[3539]: E1105 23:44:32.871601 3539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f562d0f73caa281813b905fbc3eeafe1a1985ecd8c3b9ff62fd314fb26430a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4f9tn" Nov 5 23:44:32.919241 kubelet[3539]: E1105 23:44:32.871623 3539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f562d0f73caa281813b905fbc3eeafe1a1985ecd8c3b9ff62fd314fb26430a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4f9tn" Nov 5 23:44:32.919241 kubelet[3539]: E1105 23:44:32.871965 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-4f9tn_kube-system(4d92b57a-8924-49e1-8363-5659eabb3319)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-4f9tn_kube-system(4d92b57a-8924-49e1-8363-5659eabb3319)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9f562d0f73caa281813b905fbc3eeafe1a1985ecd8c3b9ff62fd314fb26430a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4f9tn" podUID="4d92b57a-8924-49e1-8363-5659eabb3319" Nov 5 23:44:32.929965 containerd[1909]: time="2025-11-05T23:44:32.929555226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 5 23:44:32.933659 systemd[1]: Created slice kubepods-burstable-pod1a6cbd4a_e5b2_4ca3_8269_d7430053336d.slice - libcontainer container kubepods-burstable-pod1a6cbd4a_e5b2_4ca3_8269_d7430053336d.slice. Nov 5 23:44:32.938147 systemd[1]: Created slice kubepods-besteffort-poda68e46b0_801c_4548_82e3_d2eb8a4bb9ed.slice - libcontainer container kubepods-besteffort-poda68e46b0_801c_4548_82e3_d2eb8a4bb9ed.slice. 
Nov 5 23:44:32.971613 kubelet[3539]: I1105 23:44:32.970557 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a68e46b0-801c-4548-82e3-d2eb8a4bb9ed-calico-apiserver-certs\") pod \"calico-apiserver-c6d9d55f-kpv2j\" (UID: \"a68e46b0-801c-4548-82e3-d2eb8a4bb9ed\") " pod="calico-apiserver/calico-apiserver-c6d9d55f-kpv2j" Nov 5 23:44:32.971613 kubelet[3539]: I1105 23:44:32.970601 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cjq2\" (UniqueName: \"kubernetes.io/projected/a68e46b0-801c-4548-82e3-d2eb8a4bb9ed-kube-api-access-7cjq2\") pod \"calico-apiserver-c6d9d55f-kpv2j\" (UID: \"a68e46b0-801c-4548-82e3-d2eb8a4bb9ed\") " pod="calico-apiserver/calico-apiserver-c6d9d55f-kpv2j" Nov 5 23:44:32.993183 containerd[1909]: time="2025-11-05T23:44:32.993140991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7769f64cbc-fbmnx,Uid:1177c853-8b74-4ffe-9eed-6c7edaf39ab6,Namespace:calico-system,Attempt:0,}" Nov 5 23:44:33.011823 containerd[1909]: time="2025-11-05T23:44:33.011764720Z" level=error msg="Failed to destroy network for sandbox \"8c42754008052c3e3d72c9504b584f49425d2417ac1469a75bb642de75d05f48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:33.032988 containerd[1909]: time="2025-11-05T23:44:33.032944689Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57bf584d79-5fnzf,Uid:d5bad4f5-f9c2-41af-9172-91bd2034e4ca,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c42754008052c3e3d72c9504b584f49425d2417ac1469a75bb642de75d05f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:33.033197 kubelet[3539]: E1105 23:44:33.033158 3539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c42754008052c3e3d72c9504b584f49425d2417ac1469a75bb642de75d05f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:33.033257 kubelet[3539]: E1105 23:44:33.033212 3539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c42754008052c3e3d72c9504b584f49425d2417ac1469a75bb642de75d05f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57bf584d79-5fnzf" Nov 5 23:44:33.033257 kubelet[3539]: E1105 23:44:33.033233 3539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c42754008052c3e3d72c9504b584f49425d2417ac1469a75bb642de75d05f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57bf584d79-5fnzf" Nov 5 23:44:33.033398 kubelet[3539]: E1105 
23:44:33.033277 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57bf584d79-5fnzf_calico-system(d5bad4f5-f9c2-41af-9172-91bd2034e4ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57bf584d79-5fnzf_calico-system(d5bad4f5-f9c2-41af-9172-91bd2034e4ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c42754008052c3e3d72c9504b584f49425d2417ac1469a75bb642de75d05f48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57bf584d79-5fnzf" podUID="d5bad4f5-f9c2-41af-9172-91bd2034e4ca" Nov 5 23:44:33.047496 kubelet[3539]: I1105 23:44:33.047097 3539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 23:44:33.077451 containerd[1909]: time="2025-11-05T23:44:33.077420701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6d9d55f-625lj,Uid:486c3bf3-5c4f-4ba5-b692-994994d35c51,Namespace:calico-apiserver,Attempt:0,}" Nov 5 23:44:33.123091 containerd[1909]: time="2025-11-05T23:44:33.122936051Z" level=error msg="Failed to destroy network for sandbox \"262e40830e805464e925bd76a6825935fcb9ff33c521167c827726c5873070ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:33.139868 containerd[1909]: time="2025-11-05T23:44:33.139610414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g8mz5,Uid:ffd109d6-81d2-474d-9a1e-5493102832d2,Namespace:calico-system,Attempt:0,}" Nov 5 23:44:33.172793 containerd[1909]: time="2025-11-05T23:44:33.172755655Z" level=error msg="Failed to destroy network for sandbox \"6297dcbb69fe456725363aff78e30cb6cb572d2c41fbc02a11bea9fe5fcc5b02\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:33.172999 containerd[1909]: time="2025-11-05T23:44:33.172961182Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7769f64cbc-fbmnx,Uid:1177c853-8b74-4ffe-9eed-6c7edaf39ab6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"262e40830e805464e925bd76a6825935fcb9ff33c521167c827726c5873070ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:33.173595 kubelet[3539]: E1105 23:44:33.173449 3539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"262e40830e805464e925bd76a6825935fcb9ff33c521167c827726c5873070ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:33.173595 kubelet[3539]: E1105 23:44:33.173507 3539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"262e40830e805464e925bd76a6825935fcb9ff33c521167c827726c5873070ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7769f64cbc-fbmnx" Nov 5 23:44:33.173595 kubelet[3539]: E1105 23:44:33.173532 3539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"262e40830e805464e925bd76a6825935fcb9ff33c521167c827726c5873070ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7769f64cbc-fbmnx" Nov 5 23:44:33.173749 kubelet[3539]: E1105 23:44:33.173727 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7769f64cbc-fbmnx_calico-system(1177c853-8b74-4ffe-9eed-6c7edaf39ab6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7769f64cbc-fbmnx_calico-system(1177c853-8b74-4ffe-9eed-6c7edaf39ab6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"262e40830e805464e925bd76a6825935fcb9ff33c521167c827726c5873070ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7769f64cbc-fbmnx" podUID="1177c853-8b74-4ffe-9eed-6c7edaf39ab6" Nov 5 23:44:33.188980 containerd[1909]: time="2025-11-05T23:44:33.188949444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b68578ff-w27fl,Uid:9f589f34-97ee-4d82-b7d6-bdd22dcbc743,Namespace:calico-apiserver,Attempt:0,}" Nov 5 23:44:33.224424 containerd[1909]: time="2025-11-05T23:44:33.224380764Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6d9d55f-625lj,Uid:486c3bf3-5c4f-4ba5-b692-994994d35c51,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6297dcbb69fe456725363aff78e30cb6cb572d2c41fbc02a11bea9fe5fcc5b02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:33.225135 kubelet[3539]: E1105 23:44:33.224821 3539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6297dcbb69fe456725363aff78e30cb6cb572d2c41fbc02a11bea9fe5fcc5b02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:33.225135 kubelet[3539]: E1105 23:44:33.224870 3539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6297dcbb69fe456725363aff78e30cb6cb572d2c41fbc02a11bea9fe5fcc5b02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6d9d55f-625lj" Nov 5 23:44:33.225135 kubelet[3539]: E1105 23:44:33.224885 3539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6297dcbb69fe456725363aff78e30cb6cb572d2c41fbc02a11bea9fe5fcc5b02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6d9d55f-625lj" Nov 5 23:44:33.225261 kubelet[3539]: E1105 23:44:33.224919 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c6d9d55f-625lj_calico-apiserver(486c3bf3-5c4f-4ba5-b692-994994d35c51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c6d9d55f-625lj_calico-apiserver(486c3bf3-5c4f-4ba5-b692-994994d35c51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6297dcbb69fe456725363aff78e30cb6cb572d2c41fbc02a11bea9fe5fcc5b02\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-625lj" podUID="486c3bf3-5c4f-4ba5-b692-994994d35c51" Nov 5 23:44:33.237609 containerd[1909]: time="2025-11-05T23:44:33.237544354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vf7rk,Uid:1a6cbd4a-e5b2-4ca3-8269-d7430053336d,Namespace:kube-system,Attempt:0,}" Nov 5 23:44:33.241317 containerd[1909]: time="2025-11-05T23:44:33.241290039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6d9d55f-kpv2j,Uid:a68e46b0-801c-4548-82e3-d2eb8a4bb9ed,Namespace:calico-apiserver,Attempt:0,}" Nov 5 23:44:33.310091 containerd[1909]: time="2025-11-05T23:44:33.310048726Z" level=error msg="Failed to destroy network for sandbox \"0456223d467fd1f5de02cfe8b323e65eaa711a894a48855367dc72c8c5100430\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:33.416307 containerd[1909]: time="2025-11-05T23:44:33.415891778Z" level=error msg="Failed to destroy network for sandbox \"68db32b1375d7d91aa639f1160a4e0de0378fbd9d2168e6a5d419a80ecf61b58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:33.566408 containerd[1909]: time="2025-11-05T23:44:33.566354967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g8mz5,Uid:ffd109d6-81d2-474d-9a1e-5493102832d2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0456223d467fd1f5de02cfe8b323e65eaa711a894a48855367dc72c8c5100430\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:33.566859 kubelet[3539]: E1105 23:44:33.566583 3539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0456223d467fd1f5de02cfe8b323e65eaa711a894a48855367dc72c8c5100430\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:33.566859 kubelet[3539]: E1105 23:44:33.566638 3539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0456223d467fd1f5de02cfe8b323e65eaa711a894a48855367dc72c8c5100430\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-g8mz5" Nov 5 23:44:33.566859 kubelet[3539]: E1105 23:44:33.566660 3539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0456223d467fd1f5de02cfe8b323e65eaa711a894a48855367dc72c8c5100430\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-g8mz5" Nov 5 23:44:33.568288 kubelet[3539]: E1105 23:44:33.566698 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-g8mz5_calico-system(ffd109d6-81d2-474d-9a1e-5493102832d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-g8mz5_calico-system(ffd109d6-81d2-474d-9a1e-5493102832d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0456223d467fd1f5de02cfe8b323e65eaa711a894a48855367dc72c8c5100430\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-g8mz5" podUID="ffd109d6-81d2-474d-9a1e-5493102832d2" Nov 5 23:44:33.599747 containerd[1909]: time="2025-11-05T23:44:33.599643693Z" level=error msg="Failed to destroy network for sandbox \"32d450aca44b424d0d0c6620a2a9497998bacdd5a3d4c7df031eb4e9c427d12e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:33.601941 systemd[1]: run-netns-cni\x2df6b20daa\x2d74cc\x2ddf36\x2d5fda\x2d80cc68803a8b.mount: Deactivated successfully. Nov 5 23:44:33.663033 containerd[1909]: time="2025-11-05T23:44:33.662971913Z" level=error msg="Failed to destroy network for sandbox \"66ccb51a9c3b71c5f95ddca6196a5766b1bcbd6cab45b20dcdd23830e66dbfc8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:33.664477 systemd[1]: run-netns-cni\x2d142174ef\x2db8fd\x2da12d\x2d7333\x2dc5ebbee20723.mount: Deactivated successfully. 
Nov 5 23:44:33.668984 containerd[1909]: time="2025-11-05T23:44:33.668870875Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b68578ff-w27fl,Uid:9f589f34-97ee-4d82-b7d6-bdd22dcbc743,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"68db32b1375d7d91aa639f1160a4e0de0378fbd9d2168e6a5d419a80ecf61b58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:33.669235 kubelet[3539]: E1105 23:44:33.669195 3539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68db32b1375d7d91aa639f1160a4e0de0378fbd9d2168e6a5d419a80ecf61b58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:33.669387 kubelet[3539]: E1105 23:44:33.669338 3539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68db32b1375d7d91aa639f1160a4e0de0378fbd9d2168e6a5d419a80ecf61b58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66b68578ff-w27fl" Nov 5 23:44:33.669387 kubelet[3539]: E1105 23:44:33.669361 3539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68db32b1375d7d91aa639f1160a4e0de0378fbd9d2168e6a5d419a80ecf61b58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66b68578ff-w27fl" Nov 5 23:44:33.669516 kubelet[3539]: E1105 23:44:33.669496 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66b68578ff-w27fl_calico-apiserver(9f589f34-97ee-4d82-b7d6-bdd22dcbc743)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66b68578ff-w27fl_calico-apiserver(9f589f34-97ee-4d82-b7d6-bdd22dcbc743)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68db32b1375d7d91aa639f1160a4e0de0378fbd9d2168e6a5d419a80ecf61b58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66b68578ff-w27fl" podUID="9f589f34-97ee-4d82-b7d6-bdd22dcbc743" Nov 5 23:44:33.727946 containerd[1909]: time="2025-11-05T23:44:33.727904032Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vf7rk,Uid:1a6cbd4a-e5b2-4ca3-8269-d7430053336d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"32d450aca44b424d0d0c6620a2a9497998bacdd5a3d4c7df031eb4e9c427d12e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:33.728212 kubelet[3539]: E1105 23:44:33.728165 3539 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32d450aca44b424d0d0c6620a2a9497998bacdd5a3d4c7df031eb4e9c427d12e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:33.728212 kubelet[3539]: E1105 23:44:33.728208 3539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32d450aca44b424d0d0c6620a2a9497998bacdd5a3d4c7df031eb4e9c427d12e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vf7rk" Nov 5 23:44:33.728396 kubelet[3539]: E1105 23:44:33.728223 3539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32d450aca44b424d0d0c6620a2a9497998bacdd5a3d4c7df031eb4e9c427d12e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vf7rk" Nov 5 23:44:33.728396 kubelet[3539]: E1105 23:44:33.728263 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vf7rk_kube-system(1a6cbd4a-e5b2-4ca3-8269-d7430053336d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vf7rk_kube-system(1a6cbd4a-e5b2-4ca3-8269-d7430053336d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32d450aca44b424d0d0c6620a2a9497998bacdd5a3d4c7df031eb4e9c427d12e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vf7rk" podUID="1a6cbd4a-e5b2-4ca3-8269-d7430053336d" Nov 5 23:44:33.731684 containerd[1909]: time="2025-11-05T23:44:33.731646550Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6d9d55f-kpv2j,Uid:a68e46b0-801c-4548-82e3-d2eb8a4bb9ed,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"66ccb51a9c3b71c5f95ddca6196a5766b1bcbd6cab45b20dcdd23830e66dbfc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:33.731915 kubelet[3539]: E1105 23:44:33.731875 3539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66ccb51a9c3b71c5f95ddca6196a5766b1bcbd6cab45b20dcdd23830e66dbfc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:33.732043 kubelet[3539]: E1105 23:44:33.731986 3539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66ccb51a9c3b71c5f95ddca6196a5766b1bcbd6cab45b20dcdd23830e66dbfc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6d9d55f-kpv2j" Nov 5 23:44:33.732043 kubelet[3539]: E1105 23:44:33.732004 3539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66ccb51a9c3b71c5f95ddca6196a5766b1bcbd6cab45b20dcdd23830e66dbfc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6d9d55f-kpv2j" Nov 5 23:44:33.732238 kubelet[3539]: E1105 23:44:33.732217 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c6d9d55f-kpv2j_calico-apiserver(a68e46b0-801c-4548-82e3-d2eb8a4bb9ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c6d9d55f-kpv2j_calico-apiserver(a68e46b0-801c-4548-82e3-d2eb8a4bb9ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66ccb51a9c3b71c5f95ddca6196a5766b1bcbd6cab45b20dcdd23830e66dbfc8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-kpv2j" podUID="a68e46b0-801c-4548-82e3-d2eb8a4bb9ed" Nov 5 23:44:43.635857 containerd[1909]: time="2025-11-05T23:44:43.635519392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7769f64cbc-fbmnx,Uid:1177c853-8b74-4ffe-9eed-6c7edaf39ab6,Namespace:calico-system,Attempt:0,}" Nov 5 23:44:44.126306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount37734377.mount: Deactivated successfully. Nov 5 23:44:44.635972 containerd[1909]: time="2025-11-05T23:44:44.635748337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57bf584d79-5fnzf,Uid:d5bad4f5-f9c2-41af-9172-91bd2034e4ca,Namespace:calico-system,Attempt:0,}" Nov 5 23:44:44.635972 containerd[1909]: time="2025-11-05T23:44:44.635915823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6d9d55f-kpv2j,Uid:a68e46b0-801c-4548-82e3-d2eb8a4bb9ed,Namespace:calico-apiserver,Attempt:0,}" Nov 5 23:44:44.636556 containerd[1909]: time="2025-11-05T23:44:44.636400254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b68578ff-w27fl,Uid:9f589f34-97ee-4d82-b7d6-bdd22dcbc743,Namespace:calico-apiserver,Attempt:0,}" Nov 5 23:44:45.311616 containerd[1909]: time="2025-11-05T23:44:45.311402229Z" level=error msg="Failed to destroy network for sandbox \"9baa98438e5106d7801b9c9d10a924b67d9da671f2ccddb3cf4221f8375c99dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:45.314253 systemd[1]: run-netns-cni\x2d9056e08c\x2d28ed\x2d7c60\x2de21b\x2d7c88c366ca73.mount: Deactivated successfully. 
Nov 5 23:44:45.637751 containerd[1909]: time="2025-11-05T23:44:45.637272101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vf7rk,Uid:1a6cbd4a-e5b2-4ca3-8269-d7430053336d,Namespace:kube-system,Attempt:0,}" Nov 5 23:44:45.637751 containerd[1909]: time="2025-11-05T23:44:45.637531697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fpftq,Uid:427c3b5f-d7ee-4425-8185-ed4318a97b1f,Namespace:calico-system,Attempt:0,}" Nov 5 23:44:46.635379 containerd[1909]: time="2025-11-05T23:44:46.635337179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6d9d55f-625lj,Uid:486c3bf3-5c4f-4ba5-b692-994994d35c51,Namespace:calico-apiserver,Attempt:0,}" Nov 5 23:44:46.635522 containerd[1909]: time="2025-11-05T23:44:46.635337171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4f9tn,Uid:4d92b57a-8924-49e1-8363-5659eabb3319,Namespace:kube-system,Attempt:0,}" Nov 5 23:44:48.635413 containerd[1909]: time="2025-11-05T23:44:48.635346251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g8mz5,Uid:ffd109d6-81d2-474d-9a1e-5493102832d2,Namespace:calico-system,Attempt:0,}" Nov 5 23:44:50.128798 containerd[1909]: time="2025-11-05T23:44:50.128683760Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7769f64cbc-fbmnx,Uid:1177c853-8b74-4ffe-9eed-6c7edaf39ab6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9baa98438e5106d7801b9c9d10a924b67d9da671f2ccddb3cf4221f8375c99dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:50.129557 kubelet[3539]: E1105 23:44:50.128978 3539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9baa98438e5106d7801b9c9d10a924b67d9da671f2ccddb3cf4221f8375c99dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:50.129557 kubelet[3539]: E1105 23:44:50.129043 3539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9baa98438e5106d7801b9c9d10a924b67d9da671f2ccddb3cf4221f8375c99dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7769f64cbc-fbmnx" Nov 5 23:44:50.129557 kubelet[3539]: E1105 23:44:50.129057 3539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9baa98438e5106d7801b9c9d10a924b67d9da671f2ccddb3cf4221f8375c99dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7769f64cbc-fbmnx" Nov 5 23:44:50.129893 kubelet[3539]: E1105 23:44:50.129401 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7769f64cbc-fbmnx_calico-system(1177c853-8b74-4ffe-9eed-6c7edaf39ab6)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-kube-controllers-7769f64cbc-fbmnx_calico-system(1177c853-8b74-4ffe-9eed-6c7edaf39ab6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9baa98438e5106d7801b9c9d10a924b67d9da671f2ccddb3cf4221f8375c99dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7769f64cbc-fbmnx" podUID="1177c853-8b74-4ffe-9eed-6c7edaf39ab6" Nov 5 23:44:50.448771 containerd[1909]: time="2025-11-05T23:44:50.448596128Z" level=error msg="Failed to destroy network for sandbox \"c8faafd9e329bb5aedb971de9d9549ba6ae3eddb051b75eed2e6d65abfd8b479\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:50.508491 containerd[1909]: time="2025-11-05T23:44:50.508383648Z" level=error msg="Failed to destroy network for sandbox \"330a383a98729dcac4c5d4a9038a6e7cbc78e79192e1a528d845dafdc264ad14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:50.565006 containerd[1909]: time="2025-11-05T23:44:50.564956652Z" level=error msg="Failed to destroy network for sandbox \"e4d4d78767b58df15a3e89ecb84c739860599f4eeab351987993157d38de7e80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:50.612595 containerd[1909]: time="2025-11-05T23:44:50.612460497Z" level=error msg="Failed to destroy network for sandbox \"8e968bd48462e12729af2a36c3e6d02aa6d068bd8c840875b6e8d7681215072b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:50.705918 containerd[1909]: time="2025-11-05T23:44:50.705799862Z" level=error msg="Failed to destroy network for sandbox \"b04bbb9aa9308e8ce8c53edb75b4ffa96a0ba8fcaf9cc7c41dbb54f60d86876f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:50.751321 containerd[1909]: time="2025-11-05T23:44:50.751202185Z" level=error msg="Failed to destroy network for sandbox \"0ea433c43809b59b472fc45f8c8ebf19b85c82bc1a958f4d144032d5ea20b148\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:50.816936 containerd[1909]: time="2025-11-05T23:44:50.816886047Z" level=error msg="Failed to destroy network for sandbox \"b6fba0fc465845e833efaf6c0f226565845dee35e3e9e953979591340b5046d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:50.902846 containerd[1909]: time="2025-11-05T23:44:50.902784607Z" level=error msg="Failed to destroy network for sandbox \"ba63232d2a3c8cd2baec78a9b6970788ffa82b87c52cd0157cf67fad87163284\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:50.918517 containerd[1909]: time="2025-11-05T23:44:50.918397033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:50.921600 containerd[1909]: time="2025-11-05T23:44:50.921492681Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57bf584d79-5fnzf,Uid:d5bad4f5-f9c2-41af-9172-91bd2034e4ca,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8faafd9e329bb5aedb971de9d9549ba6ae3eddb051b75eed2e6d65abfd8b479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:50.922013 kubelet[3539]: E1105 23:44:50.921901 3539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8faafd9e329bb5aedb971de9d9549ba6ae3eddb051b75eed2e6d65abfd8b479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:50.922013 kubelet[3539]: E1105 23:44:50.921981 3539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8faafd9e329bb5aedb971de9d9549ba6ae3eddb051b75eed2e6d65abfd8b479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57bf584d79-5fnzf" Nov 5 23:44:50.922182 kubelet[3539]: E1105 23:44:50.922110 3539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8faafd9e329bb5aedb971de9d9549ba6ae3eddb051b75eed2e6d65abfd8b479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57bf584d79-5fnzf" Nov 5 23:44:50.922293 kubelet[3539]: E1105 23:44:50.922254 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57bf584d79-5fnzf_calico-system(d5bad4f5-f9c2-41af-9172-91bd2034e4ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57bf584d79-5fnzf_calico-system(d5bad4f5-f9c2-41af-9172-91bd2034e4ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8faafd9e329bb5aedb971de9d9549ba6ae3eddb051b75eed2e6d65abfd8b479\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57bf584d79-5fnzf" podUID="d5bad4f5-f9c2-41af-9172-91bd2034e4ca" Nov 5 23:44:50.978606 systemd[1]: run-netns-cni\x2df62facc0\x2d4a82\x2d02d3\x2dd7dd\x2d059737421a5d.mount: Deactivated successfully. Nov 5 23:44:50.978681 systemd[1]: run-netns-cni\x2d3f7612bc\x2dad44\x2d1e33\x2d6468\x2dbc4549ce6721.mount: Deactivated successfully. 
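The error text also points at the other half of the check: whether calico/node itself is running. Assuming it is deployed as a DaemonSet named calico-node in the calico-system namespace (the log only names the namespace and the ghcr.io/flatcar/calico/node image; the DaemonSet name and kubeconfig handling here are assumptions), a client-go sketch for inspecting its readiness could look like this.

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed: kubeconfig via $KUBECONFIG (falls back to in-cluster config when empty),
	// DaemonSet "calico-node" in namespace "calico-system".
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, "load config:", err)
		os.Exit(1)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, "build client:", err)
		os.Exit(1)
	}
	ds, err := cs.AppsV1().DaemonSets("calico-system").Get(context.Background(), "calico-node", metav1.GetOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, "get daemonset:", err)
		os.Exit(1)
	}
	fmt.Printf("calico-node: %d/%d pods ready\n", ds.Status.NumberReady, ds.Status.DesiredNumberScheduled)
}

Zero ready pods while the node:v3.30.4 image is still being pulled (as in the PullImage/ImageCreate entries in this log) would be consistent with the sandbox errors; once the DaemonSet reports ready and /var/lib/calico/nodename exists, the retried RunPodSandbox calls would be expected to succeed.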
Nov 5 23:44:50.978961 systemd[1]: run-netns-cni\x2d42f44501\x2d4908\x2d70e8\x2df812\x2daa5dd7a2e5e5.mount: Deactivated successfully. Nov 5 23:44:50.978998 systemd[1]: run-netns-cni\x2d6f6a8eea\x2d0f06\x2d5444\x2d1146\x2d3da9d4be80cf.mount: Deactivated successfully. Nov 5 23:44:50.979030 systemd[1]: run-netns-cni\x2daa02a68b\x2d8699\x2da2f4\x2d9791\x2d8b32bc55420f.mount: Deactivated successfully. Nov 5 23:44:50.982370 containerd[1909]: time="2025-11-05T23:44:50.982268543Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6d9d55f-kpv2j,Uid:a68e46b0-801c-4548-82e3-d2eb8a4bb9ed,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"330a383a98729dcac4c5d4a9038a6e7cbc78e79192e1a528d845dafdc264ad14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:50.982737 kubelet[3539]: E1105 23:44:50.982687 3539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"330a383a98729dcac4c5d4a9038a6e7cbc78e79192e1a528d845dafdc264ad14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:50.982841 kubelet[3539]: E1105 23:44:50.982751 3539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"330a383a98729dcac4c5d4a9038a6e7cbc78e79192e1a528d845dafdc264ad14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6d9d55f-kpv2j" Nov 5 23:44:50.982841 kubelet[3539]: E1105 23:44:50.982778 3539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"330a383a98729dcac4c5d4a9038a6e7cbc78e79192e1a528d845dafdc264ad14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6d9d55f-kpv2j" Nov 5 23:44:50.982841 kubelet[3539]: E1105 23:44:50.982824 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c6d9d55f-kpv2j_calico-apiserver(a68e46b0-801c-4548-82e3-d2eb8a4bb9ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c6d9d55f-kpv2j_calico-apiserver(a68e46b0-801c-4548-82e3-d2eb8a4bb9ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"330a383a98729dcac4c5d4a9038a6e7cbc78e79192e1a528d845dafdc264ad14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-kpv2j" podUID="a68e46b0-801c-4548-82e3-d2eb8a4bb9ed" Nov 5 23:44:51.027745 containerd[1909]: time="2025-11-05T23:44:51.027680107Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b68578ff-w27fl,Uid:9f589f34-97ee-4d82-b7d6-bdd22dcbc743,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"e4d4d78767b58df15a3e89ecb84c739860599f4eeab351987993157d38de7e80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:51.028130 kubelet[3539]: E1105 23:44:51.028078 3539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4d4d78767b58df15a3e89ecb84c739860599f4eeab351987993157d38de7e80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:51.028207 kubelet[3539]: E1105 23:44:51.028142 3539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4d4d78767b58df15a3e89ecb84c739860599f4eeab351987993157d38de7e80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66b68578ff-w27fl" Nov 5 23:44:51.028207 kubelet[3539]: E1105 23:44:51.028159 3539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4d4d78767b58df15a3e89ecb84c739860599f4eeab351987993157d38de7e80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66b68578ff-w27fl" Nov 5 23:44:51.028273 kubelet[3539]: E1105 23:44:51.028205 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66b68578ff-w27fl_calico-apiserver(9f589f34-97ee-4d82-b7d6-bdd22dcbc743)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66b68578ff-w27fl_calico-apiserver(9f589f34-97ee-4d82-b7d6-bdd22dcbc743)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4d4d78767b58df15a3e89ecb84c739860599f4eeab351987993157d38de7e80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66b68578ff-w27fl" podUID="9f589f34-97ee-4d82-b7d6-bdd22dcbc743" Nov 5 23:44:51.074506 containerd[1909]: time="2025-11-05T23:44:51.074446248Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vf7rk,Uid:1a6cbd4a-e5b2-4ca3-8269-d7430053336d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e968bd48462e12729af2a36c3e6d02aa6d068bd8c840875b6e8d7681215072b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:51.074819 kubelet[3539]: E1105 23:44:51.074756 3539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e968bd48462e12729af2a36c3e6d02aa6d068bd8c840875b6e8d7681215072b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 5 23:44:51.074877 kubelet[3539]: E1105 23:44:51.074844 3539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e968bd48462e12729af2a36c3e6d02aa6d068bd8c840875b6e8d7681215072b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vf7rk" Nov 5 23:44:51.074877 kubelet[3539]: E1105 23:44:51.074861 3539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e968bd48462e12729af2a36c3e6d02aa6d068bd8c840875b6e8d7681215072b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vf7rk" Nov 5 23:44:51.074938 kubelet[3539]: E1105 23:44:51.074916 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vf7rk_kube-system(1a6cbd4a-e5b2-4ca3-8269-d7430053336d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vf7rk_kube-system(1a6cbd4a-e5b2-4ca3-8269-d7430053336d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e968bd48462e12729af2a36c3e6d02aa6d068bd8c840875b6e8d7681215072b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vf7rk" podUID="1a6cbd4a-e5b2-4ca3-8269-d7430053336d" Nov 5 23:44:51.120249 containerd[1909]: time="2025-11-05T23:44:51.120161885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fpftq,Uid:427c3b5f-d7ee-4425-8185-ed4318a97b1f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b04bbb9aa9308e8ce8c53edb75b4ffa96a0ba8fcaf9cc7c41dbb54f60d86876f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:51.120454 kubelet[3539]: E1105 23:44:51.120419 3539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b04bbb9aa9308e8ce8c53edb75b4ffa96a0ba8fcaf9cc7c41dbb54f60d86876f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:51.120499 kubelet[3539]: E1105 23:44:51.120479 3539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b04bbb9aa9308e8ce8c53edb75b4ffa96a0ba8fcaf9cc7c41dbb54f60d86876f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fpftq" Nov 5 23:44:51.120499 kubelet[3539]: E1105 23:44:51.120499 3539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b04bbb9aa9308e8ce8c53edb75b4ffa96a0ba8fcaf9cc7c41dbb54f60d86876f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fpftq" Nov 5 23:44:51.120822 kubelet[3539]: E1105 23:44:51.120551 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fpftq_calico-system(427c3b5f-d7ee-4425-8185-ed4318a97b1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fpftq_calico-system(427c3b5f-d7ee-4425-8185-ed4318a97b1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b04bbb9aa9308e8ce8c53edb75b4ffa96a0ba8fcaf9cc7c41dbb54f60d86876f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fpftq" podUID="427c3b5f-d7ee-4425-8185-ed4318a97b1f" Nov 5 23:44:51.122942 containerd[1909]: time="2025-11-05T23:44:51.122880801Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6d9d55f-625lj,Uid:486c3bf3-5c4f-4ba5-b692-994994d35c51,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ea433c43809b59b472fc45f8c8ebf19b85c82bc1a958f4d144032d5ea20b148\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:51.123220 kubelet[3539]: E1105 23:44:51.123156 3539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ea433c43809b59b472fc45f8c8ebf19b85c82bc1a958f4d144032d5ea20b148\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:51.123220 kubelet[3539]: E1105 23:44:51.123195 3539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ea433c43809b59b472fc45f8c8ebf19b85c82bc1a958f4d144032d5ea20b148\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6d9d55f-625lj" Nov 5 23:44:51.123381 kubelet[3539]: E1105 23:44:51.123208 3539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ea433c43809b59b472fc45f8c8ebf19b85c82bc1a958f4d144032d5ea20b148\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6d9d55f-625lj" Nov 5 23:44:51.123381 kubelet[3539]: E1105 23:44:51.123349 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c6d9d55f-625lj_calico-apiserver(486c3bf3-5c4f-4ba5-b692-994994d35c51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c6d9d55f-625lj_calico-apiserver(486c3bf3-5c4f-4ba5-b692-994994d35c51)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"0ea433c43809b59b472fc45f8c8ebf19b85c82bc1a958f4d144032d5ea20b148\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-625lj" podUID="486c3bf3-5c4f-4ba5-b692-994994d35c51" Nov 5 23:44:51.168668 containerd[1909]: time="2025-11-05T23:44:51.168600118Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4f9tn,Uid:4d92b57a-8924-49e1-8363-5659eabb3319,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6fba0fc465845e833efaf6c0f226565845dee35e3e9e953979591340b5046d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:51.169231 kubelet[3539]: E1105 23:44:51.169168 3539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6fba0fc465845e833efaf6c0f226565845dee35e3e9e953979591340b5046d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:51.169512 kubelet[3539]: E1105 23:44:51.169253 3539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6fba0fc465845e833efaf6c0f226565845dee35e3e9e953979591340b5046d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4f9tn" Nov 5 23:44:51.169512 kubelet[3539]: E1105 23:44:51.169279 3539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6fba0fc465845e833efaf6c0f226565845dee35e3e9e953979591340b5046d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4f9tn" Nov 5 23:44:51.169512 kubelet[3539]: E1105 23:44:51.169323 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-4f9tn_kube-system(4d92b57a-8924-49e1-8363-5659eabb3319)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-4f9tn_kube-system(4d92b57a-8924-49e1-8363-5659eabb3319)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6fba0fc465845e833efaf6c0f226565845dee35e3e9e953979591340b5046d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4f9tn" podUID="4d92b57a-8924-49e1-8363-5659eabb3319" Nov 5 23:44:51.229907 containerd[1909]: time="2025-11-05T23:44:51.229764409Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g8mz5,Uid:ffd109d6-81d2-474d-9a1e-5493102832d2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ba63232d2a3c8cd2baec78a9b6970788ffa82b87c52cd0157cf67fad87163284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:51.230714 kubelet[3539]: E1105 23:44:51.230217 3539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba63232d2a3c8cd2baec78a9b6970788ffa82b87c52cd0157cf67fad87163284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:51.230819 kubelet[3539]: E1105 23:44:51.230724 3539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba63232d2a3c8cd2baec78a9b6970788ffa82b87c52cd0157cf67fad87163284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-g8mz5" Nov 5 23:44:51.230819 kubelet[3539]: E1105 23:44:51.230741 3539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba63232d2a3c8cd2baec78a9b6970788ffa82b87c52cd0157cf67fad87163284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-g8mz5" Nov 5 23:44:51.230819 kubelet[3539]: E1105 23:44:51.230784 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-g8mz5_calico-system(ffd109d6-81d2-474d-9a1e-5493102832d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-g8mz5_calico-system(ffd109d6-81d2-474d-9a1e-5493102832d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba63232d2a3c8cd2baec78a9b6970788ffa82b87c52cd0157cf67fad87163284\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-g8mz5" podUID="ffd109d6-81d2-474d-9a1e-5493102832d2" Nov 5 23:44:51.278488 containerd[1909]: time="2025-11-05T23:44:51.278275108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 5 23:44:51.281609 containerd[1909]: time="2025-11-05T23:44:51.281451894Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:51.323223 containerd[1909]: time="2025-11-05T23:44:51.323153479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:51.323594 containerd[1909]: time="2025-11-05T23:44:51.323518355Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 18.393896646s" Nov 5 23:44:51.323594 containerd[1909]: time="2025-11-05T23:44:51.323548100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 5 23:44:51.371946 containerd[1909]: time="2025-11-05T23:44:51.371898194Z" level=info msg="CreateContainer within sandbox \"92501e672b941a92944c29c6c44795a79e93295dac7107471a1bcc3da10bd727\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 23:44:51.633797 containerd[1909]: time="2025-11-05T23:44:51.633654037Z" level=info msg="Container 0c5635b94672f6f4fddb359c96d8cdceb0dbf0f883f7e730e1dad340e61fe124: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:44:51.636080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1948717564.mount: Deactivated successfully. Nov 5 23:44:51.725685 containerd[1909]: time="2025-11-05T23:44:51.725638249Z" level=info msg="CreateContainer within sandbox \"92501e672b941a92944c29c6c44795a79e93295dac7107471a1bcc3da10bd727\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0c5635b94672f6f4fddb359c96d8cdceb0dbf0f883f7e730e1dad340e61fe124\"" Nov 5 23:44:51.726487 containerd[1909]: time="2025-11-05T23:44:51.726209971Z" level=info msg="StartContainer for \"0c5635b94672f6f4fddb359c96d8cdceb0dbf0f883f7e730e1dad340e61fe124\"" Nov 5 23:44:51.727906 containerd[1909]: time="2025-11-05T23:44:51.727880998Z" level=info msg="connecting to shim 0c5635b94672f6f4fddb359c96d8cdceb0dbf0f883f7e730e1dad340e61fe124" address="unix:///run/containerd/s/9db034db6372c80fdf55d9d1005371b9b1d72ecbb92ccb58af61a56a82637ec1" protocol=ttrpc version=3 Nov 5 23:44:51.749726 systemd[1]: Started cri-containerd-0c5635b94672f6f4fddb359c96d8cdceb0dbf0f883f7e730e1dad340e61fe124.scope - libcontainer container 0c5635b94672f6f4fddb359c96d8cdceb0dbf0f883f7e730e1dad340e61fe124. Nov 5 23:44:51.786106 containerd[1909]: time="2025-11-05T23:44:51.786053043Z" level=info msg="StartContainer for \"0c5635b94672f6f4fddb359c96d8cdceb0dbf0f883f7e730e1dad340e61fe124\" returns successfully" Nov 5 23:44:51.835235 kubelet[3539]: I1105 23:44:51.835177 3539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-z5pzf" podStartSLOduration=1.029039457 podStartE2EDuration="47.835162081s" podCreationTimestamp="2025-11-05 23:44:04 +0000 UTC" firstStartedPulling="2025-11-05 23:44:04.518480828 +0000 UTC m=+26.976835583" lastFinishedPulling="2025-11-05 23:44:51.32460346 +0000 UTC m=+73.782958207" observedRunningTime="2025-11-05 23:44:51.834171106 +0000 UTC m=+74.292525869" watchObservedRunningTime="2025-11-05 23:44:51.835162081 +0000 UTC m=+74.293516836" Nov 5 23:44:52.326271 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 23:44:52.326726 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 5 23:44:52.494301 kubelet[3539]: I1105 23:44:52.494068 3539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d5bad4f5-f9c2-41af-9172-91bd2034e4ca-whisker-backend-key-pair\") pod \"d5bad4f5-f9c2-41af-9172-91bd2034e4ca\" (UID: \"d5bad4f5-f9c2-41af-9172-91bd2034e4ca\") " Nov 5 23:44:52.495964 kubelet[3539]: I1105 23:44:52.494410 3539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf6k\" (UniqueName: \"kubernetes.io/projected/d5bad4f5-f9c2-41af-9172-91bd2034e4ca-kube-api-access-lzf6k\") pod \"d5bad4f5-f9c2-41af-9172-91bd2034e4ca\" (UID: \"d5bad4f5-f9c2-41af-9172-91bd2034e4ca\") " Nov 5 23:44:52.496362 kubelet[3539]: I1105 23:44:52.496199 3539 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5bad4f5-f9c2-41af-9172-91bd2034e4ca-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d5bad4f5-f9c2-41af-9172-91bd2034e4ca" (UID: "d5bad4f5-f9c2-41af-9172-91bd2034e4ca"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 23:44:52.496362 kubelet[3539]: I1105 23:44:52.496245 3539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5bad4f5-f9c2-41af-9172-91bd2034e4ca-whisker-ca-bundle\") pod \"d5bad4f5-f9c2-41af-9172-91bd2034e4ca\" (UID: \"d5bad4f5-f9c2-41af-9172-91bd2034e4ca\") " Nov 5 23:44:52.496362 kubelet[3539]: I1105 23:44:52.496340 3539 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5bad4f5-f9c2-41af-9172-91bd2034e4ca-whisker-ca-bundle\") on node \"ci-4459.1.0-n-7f88f0cba0\" DevicePath \"\"" Nov 5 23:44:52.498474 systemd[1]: var-lib-kubelet-pods-d5bad4f5\x2df9c2\x2d41af\x2d9172\x2d91bd2034e4ca-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 5 23:44:52.503207 kubelet[3539]: I1105 23:44:52.502925 3539 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5bad4f5-f9c2-41af-9172-91bd2034e4ca-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d5bad4f5-f9c2-41af-9172-91bd2034e4ca" (UID: "d5bad4f5-f9c2-41af-9172-91bd2034e4ca"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 23:44:52.510680 kubelet[3539]: I1105 23:44:52.510635 3539 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5bad4f5-f9c2-41af-9172-91bd2034e4ca-kube-api-access-lzf6k" (OuterVolumeSpecName: "kube-api-access-lzf6k") pod "d5bad4f5-f9c2-41af-9172-91bd2034e4ca" (UID: "d5bad4f5-f9c2-41af-9172-91bd2034e4ca"). InnerVolumeSpecName "kube-api-access-lzf6k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 23:44:52.510721 systemd[1]: var-lib-kubelet-pods-d5bad4f5\x2df9c2\x2d41af\x2d9172\x2d91bd2034e4ca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlzf6k.mount: Deactivated successfully. 
Nov 5 23:44:52.597630 kubelet[3539]: I1105 23:44:52.597496 3539 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d5bad4f5-f9c2-41af-9172-91bd2034e4ca-whisker-backend-key-pair\") on node \"ci-4459.1.0-n-7f88f0cba0\" DevicePath \"\"" Nov 5 23:44:52.597630 kubelet[3539]: I1105 23:44:52.597530 3539 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lzf6k\" (UniqueName: \"kubernetes.io/projected/d5bad4f5-f9c2-41af-9172-91bd2034e4ca-kube-api-access-lzf6k\") on node \"ci-4459.1.0-n-7f88f0cba0\" DevicePath \"\"" Nov 5 23:44:52.819890 systemd[1]: Removed slice kubepods-besteffort-podd5bad4f5_f9c2_41af_9172_91bd2034e4ca.slice - libcontainer container kubepods-besteffort-podd5bad4f5_f9c2_41af_9172_91bd2034e4ca.slice. Nov 5 23:44:55.830676 kubelet[3539]: I1105 23:44:55.830643 3539 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5bad4f5-f9c2-41af-9172-91bd2034e4ca" path="/var/lib/kubelet/pods/d5bad4f5-f9c2-41af-9172-91bd2034e4ca/volumes" Nov 5 23:44:55.837496 systemd[1]: Created slice kubepods-besteffort-pod4aba500c_946b_4268_bb47_e30c6e97daba.slice - libcontainer container kubepods-besteffort-pod4aba500c_946b_4268_bb47_e30c6e97daba.slice. Nov 5 23:44:55.917999 kubelet[3539]: I1105 23:44:55.917508 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4aba500c-946b-4268-bb47-e30c6e97daba-whisker-ca-bundle\") pod \"whisker-6856bc974c-ffszr\" (UID: \"4aba500c-946b-4268-bb47-e30c6e97daba\") " pod="calico-system/whisker-6856bc974c-ffszr" Nov 5 23:44:55.917999 kubelet[3539]: I1105 23:44:55.917555 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4aba500c-946b-4268-bb47-e30c6e97daba-whisker-backend-key-pair\") pod \"whisker-6856bc974c-ffszr\" (UID: \"4aba500c-946b-4268-bb47-e30c6e97daba\") " pod="calico-system/whisker-6856bc974c-ffszr" Nov 5 23:44:55.918262 kubelet[3539]: I1105 23:44:55.917567 3539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgq6m\" (UniqueName: \"kubernetes.io/projected/4aba500c-946b-4268-bb47-e30c6e97daba-kube-api-access-pgq6m\") pod \"whisker-6856bc974c-ffszr\" (UID: \"4aba500c-946b-4268-bb47-e30c6e97daba\") " pod="calico-system/whisker-6856bc974c-ffszr" Nov 5 23:44:56.141128 containerd[1909]: time="2025-11-05T23:44:56.141011051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6856bc974c-ffszr,Uid:4aba500c-946b-4268-bb47-e30c6e97daba,Namespace:calico-system,Attempt:0,}" Nov 5 23:44:57.336765 systemd-networkd[1479]: vxlan.calico: Link UP Nov 5 23:44:57.337167 systemd-networkd[1479]: vxlan.calico: Gained carrier Nov 5 23:44:57.846271 systemd-networkd[1479]: calib5ef81da323: Link UP Nov 5 23:44:57.846380 systemd-networkd[1479]: calib5ef81da323: Gained carrier Nov 5 23:44:57.865600 containerd[1909]: 2025-11-05 23:44:57.783 [INFO][5092] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--7f88f0cba0-k8s-whisker--6856bc974c--ffszr-eth0 whisker-6856bc974c- calico-system 4aba500c-946b-4268-bb47-e30c6e97daba 992 0 2025-11-05 23:44:52 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6856bc974c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.1.0-n-7f88f0cba0 whisker-6856bc974c-ffszr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib5ef81da323 [] [] }} ContainerID="cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b" Namespace="calico-system" Pod="whisker-6856bc974c-ffszr" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-whisker--6856bc974c--ffszr-" Nov 5 23:44:57.865600 containerd[1909]: 2025-11-05 23:44:57.784 [INFO][5092] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b" Namespace="calico-system" Pod="whisker-6856bc974c-ffszr" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-whisker--6856bc974c--ffszr-eth0" Nov 5 23:44:57.865600 containerd[1909]: 2025-11-05 23:44:57.805 [INFO][5105] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b" HandleID="k8s-pod-network.cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-whisker--6856bc974c--ffszr-eth0" Nov 5 23:44:57.866487 containerd[1909]: 2025-11-05 23:44:57.805 [INFO][5105] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b" HandleID="k8s-pod-network.cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-whisker--6856bc974c--ffszr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b5d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-7f88f0cba0", "pod":"whisker-6856bc974c-ffszr", "timestamp":"2025-11-05 23:44:57.805702283 +0000 UTC"}, Hostname:"ci-4459.1.0-n-7f88f0cba0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:44:57.866487 containerd[1909]: 2025-11-05 23:44:57.805 [INFO][5105] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:44:57.866487 containerd[1909]: 2025-11-05 23:44:57.805 [INFO][5105] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 23:44:57.866487 containerd[1909]: 2025-11-05 23:44:57.805 [INFO][5105] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-7f88f0cba0' Nov 5 23:44:57.866487 containerd[1909]: 2025-11-05 23:44:57.813 [INFO][5105] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:44:57.866487 containerd[1909]: 2025-11-05 23:44:57.816 [INFO][5105] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:44:57.866487 containerd[1909]: 2025-11-05 23:44:57.820 [INFO][5105] ipam/ipam.go 511: Trying affinity for 192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:44:57.866487 containerd[1909]: 2025-11-05 23:44:57.822 [INFO][5105] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:44:57.866487 containerd[1909]: 2025-11-05 23:44:57.824 [INFO][5105] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:44:57.866909 containerd[1909]: 2025-11-05 23:44:57.824 [INFO][5105] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:44:57.866909 containerd[1909]: 2025-11-05 23:44:57.825 [INFO][5105] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b Nov 5 23:44:57.866909 containerd[1909]: 2025-11-05 23:44:57.829 [INFO][5105] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:44:57.866909 containerd[1909]: 2025-11-05 23:44:57.838 [INFO][5105] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.92.129/26] block=192.168.92.128/26 handle="k8s-pod-network.cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:44:57.866909 containerd[1909]: 2025-11-05 23:44:57.838 [INFO][5105] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.129/26] handle="k8s-pod-network.cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:44:57.866909 containerd[1909]: 2025-11-05 23:44:57.838 [INFO][5105] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 23:44:57.866909 containerd[1909]: 2025-11-05 23:44:57.838 [INFO][5105] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.92.129/26] IPv6=[] ContainerID="cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b" HandleID="k8s-pod-network.cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-whisker--6856bc974c--ffszr-eth0" Nov 5 23:44:57.867742 containerd[1909]: 2025-11-05 23:44:57.842 [INFO][5092] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b" Namespace="calico-system" Pod="whisker-6856bc974c-ffszr" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-whisker--6856bc974c--ffszr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--7f88f0cba0-k8s-whisker--6856bc974c--ffszr-eth0", GenerateName:"whisker-6856bc974c-", Namespace:"calico-system", SelfLink:"", UID:"4aba500c-946b-4268-bb47-e30c6e97daba", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 44, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6856bc974c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-7f88f0cba0", ContainerID:"", Pod:"whisker-6856bc974c-ffszr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.92.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib5ef81da323", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:44:57.867742 containerd[1909]: 2025-11-05 23:44:57.842 [INFO][5092] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.129/32] ContainerID="cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b" Namespace="calico-system" Pod="whisker-6856bc974c-ffszr" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-whisker--6856bc974c--ffszr-eth0" Nov 5 23:44:57.867805 containerd[1909]: 2025-11-05 23:44:57.842 [INFO][5092] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5ef81da323 ContainerID="cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b" Namespace="calico-system" Pod="whisker-6856bc974c-ffszr" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-whisker--6856bc974c--ffszr-eth0" Nov 5 23:44:57.867805 containerd[1909]: 2025-11-05 23:44:57.845 [INFO][5092] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b" Namespace="calico-system" Pod="whisker-6856bc974c-ffszr" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-whisker--6856bc974c--ffszr-eth0" Nov 5 23:44:57.867835 containerd[1909]: 2025-11-05 23:44:57.847 [INFO][5092] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b" Namespace="calico-system" 
Pod="whisker-6856bc974c-ffszr" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-whisker--6856bc974c--ffszr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--7f88f0cba0-k8s-whisker--6856bc974c--ffszr-eth0", GenerateName:"whisker-6856bc974c-", Namespace:"calico-system", SelfLink:"", UID:"4aba500c-946b-4268-bb47-e30c6e97daba", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 44, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6856bc974c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-7f88f0cba0", ContainerID:"cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b", Pod:"whisker-6856bc974c-ffszr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.92.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib5ef81da323", MAC:"56:8b:90:8e:05:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:44:57.867867 containerd[1909]: 2025-11-05 23:44:57.862 [INFO][5092] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b" Namespace="calico-system" Pod="whisker-6856bc974c-ffszr" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-whisker--6856bc974c--ffszr-eth0" Nov 5 23:44:59.244799 systemd-networkd[1479]: vxlan.calico: Gained IPv6LL Nov 5 23:44:59.308795 systemd-networkd[1479]: calib5ef81da323: Gained IPv6LL Nov 5 23:45:01.635833 containerd[1909]: time="2025-11-05T23:45:01.635793982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g8mz5,Uid:ffd109d6-81d2-474d-9a1e-5493102832d2,Namespace:calico-system,Attempt:0,}" Nov 5 23:45:02.635410 containerd[1909]: time="2025-11-05T23:45:02.635367069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vf7rk,Uid:1a6cbd4a-e5b2-4ca3-8269-d7430053336d,Namespace:kube-system,Attempt:0,}" Nov 5 23:45:02.743610 containerd[1909]: time="2025-11-05T23:45:02.742075175Z" level=info msg="connecting to shim cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b" address="unix:///run/containerd/s/d1e15f138d0424de62879db37e7c9b5cfc15783f40b88884f16676d4b2b7db1f" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:45:02.794742 systemd[1]: Started cri-containerd-cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b.scope - libcontainer container cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b. 
Nov 5 23:45:02.874289 containerd[1909]: time="2025-11-05T23:45:02.874240893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6856bc974c-ffszr,Uid:4aba500c-946b-4268-bb47-e30c6e97daba,Namespace:calico-system,Attempt:0,} returns sandbox id \"cf33c03941cca0034d91d3c97bbb6d49e3c2a060db350355cce7297b9625072b\"" Nov 5 23:45:02.881261 containerd[1909]: time="2025-11-05T23:45:02.881202199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 23:45:02.930352 systemd-networkd[1479]: calia6481553ef6: Link UP Nov 5 23:45:02.932001 systemd-networkd[1479]: calia6481553ef6: Gained carrier Nov 5 23:45:02.952522 containerd[1909]: 2025-11-05 23:45:02.844 [INFO][5176] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--vf7rk-eth0 coredns-674b8bbfcf- kube-system 1a6cbd4a-e5b2-4ca3-8269-d7430053336d 886 0 2025-11-05 23:43:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.1.0-n-7f88f0cba0 coredns-674b8bbfcf-vf7rk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia6481553ef6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab" Namespace="kube-system" Pod="coredns-674b8bbfcf-vf7rk" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--vf7rk-" Nov 5 23:45:02.952522 containerd[1909]: 2025-11-05 23:45:02.845 [INFO][5176] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab" Namespace="kube-system" Pod="coredns-674b8bbfcf-vf7rk" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--vf7rk-eth0" Nov 5 23:45:02.952522 containerd[1909]: 2025-11-05 23:45:02.888 [INFO][5217] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab" HandleID="k8s-pod-network.d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--vf7rk-eth0" Nov 5 23:45:02.952779 containerd[1909]: 2025-11-05 23:45:02.888 [INFO][5217] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab" HandleID="k8s-pod-network.d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--vf7rk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab3a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.1.0-n-7f88f0cba0", "pod":"coredns-674b8bbfcf-vf7rk", "timestamp":"2025-11-05 23:45:02.888503075 +0000 UTC"}, Hostname:"ci-4459.1.0-n-7f88f0cba0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:45:02.952779 containerd[1909]: 2025-11-05 23:45:02.888 [INFO][5217] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:45:02.952779 containerd[1909]: 2025-11-05 23:45:02.889 [INFO][5217] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 23:45:02.952779 containerd[1909]: 2025-11-05 23:45:02.889 [INFO][5217] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-7f88f0cba0' Nov 5 23:45:02.952779 containerd[1909]: 2025-11-05 23:45:02.897 [INFO][5217] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:02.952779 containerd[1909]: 2025-11-05 23:45:02.900 [INFO][5217] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:02.952779 containerd[1909]: 2025-11-05 23:45:02.903 [INFO][5217] ipam/ipam.go 511: Trying affinity for 192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:02.952779 containerd[1909]: 2025-11-05 23:45:02.905 [INFO][5217] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:02.952779 containerd[1909]: 2025-11-05 23:45:02.906 [INFO][5217] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:02.952923 containerd[1909]: 2025-11-05 23:45:02.906 [INFO][5217] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:02.952923 containerd[1909]: 2025-11-05 23:45:02.907 [INFO][5217] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab Nov 5 23:45:02.952923 containerd[1909]: 2025-11-05 23:45:02.912 [INFO][5217] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:02.952923 containerd[1909]: 2025-11-05 23:45:02.922 [INFO][5217] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.92.130/26] block=192.168.92.128/26 handle="k8s-pod-network.d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:02.952923 containerd[1909]: 2025-11-05 23:45:02.922 [INFO][5217] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.130/26] handle="k8s-pod-network.d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:02.952923 containerd[1909]: 2025-11-05 23:45:02.922 [INFO][5217] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 23:45:02.952923 containerd[1909]: 2025-11-05 23:45:02.922 [INFO][5217] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.92.130/26] IPv6=[] ContainerID="d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab" HandleID="k8s-pod-network.d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--vf7rk-eth0" Nov 5 23:45:02.953030 containerd[1909]: 2025-11-05 23:45:02.925 [INFO][5176] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab" Namespace="kube-system" Pod="coredns-674b8bbfcf-vf7rk" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--vf7rk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--vf7rk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1a6cbd4a-e5b2-4ca3-8269-d7430053336d", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 43, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-7f88f0cba0", ContainerID:"", Pod:"coredns-674b8bbfcf-vf7rk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6481553ef6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:02.953030 containerd[1909]: 2025-11-05 23:45:02.925 [INFO][5176] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.130/32] ContainerID="d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab" Namespace="kube-system" Pod="coredns-674b8bbfcf-vf7rk" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--vf7rk-eth0" Nov 5 23:45:02.953030 containerd[1909]: 2025-11-05 23:45:02.926 [INFO][5176] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6481553ef6 ContainerID="d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab" Namespace="kube-system" Pod="coredns-674b8bbfcf-vf7rk" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--vf7rk-eth0" Nov 5 23:45:02.953030 containerd[1909]: 2025-11-05 23:45:02.932 [INFO][5176] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-vf7rk" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--vf7rk-eth0" Nov 5 23:45:02.953030 containerd[1909]: 2025-11-05 23:45:02.932 [INFO][5176] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab" Namespace="kube-system" Pod="coredns-674b8bbfcf-vf7rk" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--vf7rk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--vf7rk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1a6cbd4a-e5b2-4ca3-8269-d7430053336d", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 43, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-7f88f0cba0", ContainerID:"d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab", Pod:"coredns-674b8bbfcf-vf7rk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6481553ef6", MAC:"82:93:20:1b:e5:64", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:02.953030 containerd[1909]: 2025-11-05 23:45:02.950 [INFO][5176] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab" Namespace="kube-system" Pod="coredns-674b8bbfcf-vf7rk" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--vf7rk-eth0" Nov 5 23:45:02.997600 containerd[1909]: time="2025-11-05T23:45:02.997376372Z" level=info msg="connecting to shim d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab" address="unix:///run/containerd/s/89baf636ad43da0dbb3b122ff6b0f01610fcef7d2baf7d59456ff15c8c3e626c" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:45:03.031635 systemd[1]: Started cri-containerd-d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab.scope - libcontainer container d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab. 
Nov 5 23:45:03.054091 systemd-networkd[1479]: cali5a19c75dafa: Link UP Nov 5 23:45:03.055796 systemd-networkd[1479]: cali5a19c75dafa: Gained carrier Nov 5 23:45:03.078481 containerd[1909]: 2025-11-05 23:45:02.847 [INFO][5162] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--7f88f0cba0-k8s-goldmane--666569f655--g8mz5-eth0 goldmane-666569f655- calico-system ffd109d6-81d2-474d-9a1e-5493102832d2 881 0 2025-11-05 23:44:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.1.0-n-7f88f0cba0 goldmane-666569f655-g8mz5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5a19c75dafa [] [] }} ContainerID="a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6" Namespace="calico-system" Pod="goldmane-666569f655-g8mz5" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-goldmane--666569f655--g8mz5-" Nov 5 23:45:03.078481 containerd[1909]: 2025-11-05 23:45:02.848 [INFO][5162] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6" Namespace="calico-system" Pod="goldmane-666569f655-g8mz5" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-goldmane--666569f655--g8mz5-eth0" Nov 5 23:45:03.078481 containerd[1909]: 2025-11-05 23:45:02.897 [INFO][5215] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6" HandleID="k8s-pod-network.a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-goldmane--666569f655--g8mz5-eth0" Nov 5 23:45:03.078481 containerd[1909]: 2025-11-05 23:45:02.898 [INFO][5215] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6" HandleID="k8s-pod-network.a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-goldmane--666569f655--g8mz5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-7f88f0cba0", "pod":"goldmane-666569f655-g8mz5", "timestamp":"2025-11-05 23:45:02.897257265 +0000 UTC"}, Hostname:"ci-4459.1.0-n-7f88f0cba0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:45:03.078481 containerd[1909]: 2025-11-05 23:45:02.898 [INFO][5215] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:45:03.078481 containerd[1909]: 2025-11-05 23:45:02.923 [INFO][5215] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 23:45:03.078481 containerd[1909]: 2025-11-05 23:45:02.923 [INFO][5215] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-7f88f0cba0' Nov 5 23:45:03.078481 containerd[1909]: 2025-11-05 23:45:02.999 [INFO][5215] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:03.078481 containerd[1909]: 2025-11-05 23:45:03.009 [INFO][5215] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:03.078481 containerd[1909]: 2025-11-05 23:45:03.014 [INFO][5215] ipam/ipam.go 511: Trying affinity for 192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:03.078481 containerd[1909]: 2025-11-05 23:45:03.016 [INFO][5215] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:03.078481 containerd[1909]: 2025-11-05 23:45:03.019 [INFO][5215] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:03.078481 containerd[1909]: 2025-11-05 23:45:03.019 [INFO][5215] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:03.078481 containerd[1909]: 2025-11-05 23:45:03.021 [INFO][5215] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6 Nov 5 23:45:03.078481 containerd[1909]: 2025-11-05 23:45:03.029 [INFO][5215] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:03.078481 containerd[1909]: 2025-11-05 23:45:03.047 [INFO][5215] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.92.131/26] block=192.168.92.128/26 handle="k8s-pod-network.a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:03.078481 containerd[1909]: 2025-11-05 23:45:03.047 [INFO][5215] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.131/26] handle="k8s-pod-network.a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:03.078481 containerd[1909]: 2025-11-05 23:45:03.047 [INFO][5215] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 23:45:03.078481 containerd[1909]: 2025-11-05 23:45:03.047 [INFO][5215] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.92.131/26] IPv6=[] ContainerID="a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6" HandleID="k8s-pod-network.a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-goldmane--666569f655--g8mz5-eth0" Nov 5 23:45:03.079665 containerd[1909]: 2025-11-05 23:45:03.050 [INFO][5162] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6" Namespace="calico-system" Pod="goldmane-666569f655-g8mz5" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-goldmane--666569f655--g8mz5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--7f88f0cba0-k8s-goldmane--666569f655--g8mz5-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ffd109d6-81d2-474d-9a1e-5493102832d2", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 44, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-7f88f0cba0", ContainerID:"", Pod:"goldmane-666569f655-g8mz5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.92.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5a19c75dafa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:03.079665 containerd[1909]: 2025-11-05 23:45:03.051 [INFO][5162] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.131/32] ContainerID="a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6" Namespace="calico-system" Pod="goldmane-666569f655-g8mz5" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-goldmane--666569f655--g8mz5-eth0" Nov 5 23:45:03.079665 containerd[1909]: 2025-11-05 23:45:03.051 [INFO][5162] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a19c75dafa ContainerID="a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6" Namespace="calico-system" Pod="goldmane-666569f655-g8mz5" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-goldmane--666569f655--g8mz5-eth0" Nov 5 23:45:03.079665 containerd[1909]: 2025-11-05 23:45:03.056 [INFO][5162] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6" Namespace="calico-system" Pod="goldmane-666569f655-g8mz5" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-goldmane--666569f655--g8mz5-eth0" Nov 5 23:45:03.079665 containerd[1909]: 2025-11-05 23:45:03.056 [INFO][5162] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6" 
Namespace="calico-system" Pod="goldmane-666569f655-g8mz5" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-goldmane--666569f655--g8mz5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--7f88f0cba0-k8s-goldmane--666569f655--g8mz5-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ffd109d6-81d2-474d-9a1e-5493102832d2", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 44, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-7f88f0cba0", ContainerID:"a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6", Pod:"goldmane-666569f655-g8mz5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.92.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5a19c75dafa", MAC:"c6:9f:e4:c0:26:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:03.079665 containerd[1909]: 2025-11-05 23:45:03.071 [INFO][5162] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6" Namespace="calico-system" Pod="goldmane-666569f655-g8mz5" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-goldmane--666569f655--g8mz5-eth0" Nov 5 23:45:03.089974 containerd[1909]: time="2025-11-05T23:45:03.089907883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vf7rk,Uid:1a6cbd4a-e5b2-4ca3-8269-d7430053336d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab\"" Nov 5 23:45:03.099827 containerd[1909]: time="2025-11-05T23:45:03.099788890Z" level=info msg="CreateContainer within sandbox \"d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 23:45:03.129837 containerd[1909]: time="2025-11-05T23:45:03.129723615Z" level=info msg="connecting to shim a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6" address="unix:///run/containerd/s/bc8c569a85bff52408762435e678fa1760f69871bff3e8282d14f3707b576841" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:45:03.132631 containerd[1909]: time="2025-11-05T23:45:03.132598907Z" level=info msg="Container 6ad3d2f89fb41a940b8bf8bc08af38a054df8ec2ebb0ff4ac1eb374c00aad5b6: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:45:03.149634 containerd[1909]: time="2025-11-05T23:45:03.149550167Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:03.149853 systemd[1]: Started cri-containerd-a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6.scope - libcontainer container a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6. 
Nov 5 23:45:03.155937 containerd[1909]: time="2025-11-05T23:45:03.155900991Z" level=info msg="CreateContainer within sandbox \"d73cf56bd9a94c85cb6566f53764c0a2975dfb2b32ca267f0da64823b8a810ab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6ad3d2f89fb41a940b8bf8bc08af38a054df8ec2ebb0ff4ac1eb374c00aad5b6\"" Nov 5 23:45:03.157135 containerd[1909]: time="2025-11-05T23:45:03.156862539Z" level=info msg="StartContainer for \"6ad3d2f89fb41a940b8bf8bc08af38a054df8ec2ebb0ff4ac1eb374c00aad5b6\"" Nov 5 23:45:03.158457 containerd[1909]: time="2025-11-05T23:45:03.158348934Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 23:45:03.159270 containerd[1909]: time="2025-11-05T23:45:03.158398088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 23:45:03.159270 containerd[1909]: time="2025-11-05T23:45:03.158899606Z" level=info msg="connecting to shim 6ad3d2f89fb41a940b8bf8bc08af38a054df8ec2ebb0ff4ac1eb374c00aad5b6" address="unix:///run/containerd/s/89baf636ad43da0dbb3b122ff6b0f01610fcef7d2baf7d59456ff15c8c3e626c" protocol=ttrpc version=3 Nov 5 23:45:03.159848 kubelet[3539]: E1105 23:45:03.159719 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 23:45:03.161060 kubelet[3539]: E1105 23:45:03.160809 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 23:45:03.170488 kubelet[3539]: E1105 23:45:03.170142 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:034e9f0b530a4b44bd7c28fc81170f33,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pgq6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6856bc974c-ffszr_calico-system(4aba500c-946b-4268-bb47-e30c6e97daba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:03.175932 containerd[1909]: time="2025-11-05T23:45:03.175901052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 23:45:03.185745 systemd[1]: Started cri-containerd-6ad3d2f89fb41a940b8bf8bc08af38a054df8ec2ebb0ff4ac1eb374c00aad5b6.scope - libcontainer container 6ad3d2f89fb41a940b8bf8bc08af38a054df8ec2ebb0ff4ac1eb374c00aad5b6. 
Nov 5 23:45:03.219233 containerd[1909]: time="2025-11-05T23:45:03.218882948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g8mz5,Uid:ffd109d6-81d2-474d-9a1e-5493102832d2,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6ba73c1d3c8ba01af017e452342b2d2861ae033a15b8097a2328ca4181917e6\"" Nov 5 23:45:03.243520 containerd[1909]: time="2025-11-05T23:45:03.242937878Z" level=info msg="StartContainer for \"6ad3d2f89fb41a940b8bf8bc08af38a054df8ec2ebb0ff4ac1eb374c00aad5b6\" returns successfully" Nov 5 23:45:03.456190 containerd[1909]: time="2025-11-05T23:45:03.456061699Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:03.463330 containerd[1909]: time="2025-11-05T23:45:03.463213426Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 23:45:03.463330 containerd[1909]: time="2025-11-05T23:45:03.463307517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 23:45:03.464081 kubelet[3539]: E1105 23:45:03.463620 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 23:45:03.464081 kubelet[3539]: E1105 23:45:03.463670 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 23:45:03.464178 kubelet[3539]: E1105 23:45:03.463817 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgq6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6856bc974c-ffszr_calico-system(4aba500c-946b-4268-bb47-e30c6e97daba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:03.464537 containerd[1909]: time="2025-11-05T23:45:03.464514512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 23:45:03.465862 kubelet[3539]: E1105 23:45:03.465823 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6856bc974c-ffszr" podUID="4aba500c-946b-4268-bb47-e30c6e97daba" Nov 5 23:45:03.636359 containerd[1909]: time="2025-11-05T23:45:03.636322549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6d9d55f-625lj,Uid:486c3bf3-5c4f-4ba5-b692-994994d35c51,Namespace:calico-apiserver,Attempt:0,}" Nov 5 23:45:03.726702 systemd-networkd[1479]: caliceac6dae664: Link UP Nov 5 23:45:03.728191 
systemd-networkd[1479]: caliceac6dae664: Gained carrier Nov 5 23:45:03.752465 containerd[1909]: time="2025-11-05T23:45:03.752363998Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:03.753156 containerd[1909]: 2025-11-05 23:45:03.672 [INFO][5381] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--625lj-eth0 calico-apiserver-c6d9d55f- calico-apiserver 486c3bf3-5c4f-4ba5-b692-994994d35c51 880 0 2025-11-05 23:43:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c6d9d55f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-n-7f88f0cba0 calico-apiserver-c6d9d55f-625lj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliceac6dae664 [] [] }} ContainerID="0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134" Namespace="calico-apiserver" Pod="calico-apiserver-c6d9d55f-625lj" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--625lj-" Nov 5 23:45:03.753156 containerd[1909]: 2025-11-05 23:45:03.672 [INFO][5381] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134" Namespace="calico-apiserver" Pod="calico-apiserver-c6d9d55f-625lj" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--625lj-eth0" Nov 5 23:45:03.753156 containerd[1909]: 2025-11-05 23:45:03.689 [INFO][5392] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134" HandleID="k8s-pod-network.0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--625lj-eth0" Nov 5 23:45:03.753156 containerd[1909]: 2025-11-05 23:45:03.689 [INFO][5392] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134" HandleID="k8s-pod-network.0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--625lj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024af70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-n-7f88f0cba0", "pod":"calico-apiserver-c6d9d55f-625lj", "timestamp":"2025-11-05 23:45:03.689620712 +0000 UTC"}, Hostname:"ci-4459.1.0-n-7f88f0cba0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:45:03.753156 containerd[1909]: 2025-11-05 23:45:03.689 [INFO][5392] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:45:03.753156 containerd[1909]: 2025-11-05 23:45:03.689 [INFO][5392] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 23:45:03.753156 containerd[1909]: 2025-11-05 23:45:03.689 [INFO][5392] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-7f88f0cba0' Nov 5 23:45:03.753156 containerd[1909]: 2025-11-05 23:45:03.695 [INFO][5392] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:03.753156 containerd[1909]: 2025-11-05 23:45:03.698 [INFO][5392] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:03.753156 containerd[1909]: 2025-11-05 23:45:03.702 [INFO][5392] ipam/ipam.go 511: Trying affinity for 192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:03.753156 containerd[1909]: 2025-11-05 23:45:03.703 [INFO][5392] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:03.753156 containerd[1909]: 2025-11-05 23:45:03.705 [INFO][5392] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:03.753156 containerd[1909]: 2025-11-05 23:45:03.705 [INFO][5392] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:03.753156 containerd[1909]: 2025-11-05 23:45:03.706 [INFO][5392] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134 Nov 5 23:45:03.753156 containerd[1909]: 2025-11-05 23:45:03.713 [INFO][5392] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:03.753156 containerd[1909]: 2025-11-05 23:45:03.721 [INFO][5392] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.92.132/26] block=192.168.92.128/26 handle="k8s-pod-network.0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:03.753156 containerd[1909]: 2025-11-05 23:45:03.721 [INFO][5392] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.132/26] handle="k8s-pod-network.0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:03.753156 containerd[1909]: 2025-11-05 23:45:03.721 [INFO][5392] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
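The ipam lines above walk through Calico's block-affinity assignment: the host's affinity to 192.168.92.128/26 is confirmed, the block is loaded, and the lowest free address (192.168.92.132) is claimed while the host-wide IPAM lock is held. The sketch below mirrors only that "next free address in an affine /26 block" step; it is not Calico's implementation, and the already-claimed addresses are an assumption standing in for earlier assignments not shown in this part of the log.

    import ipaddress

    BLOCK = ipaddress.ip_network("192.168.92.128/26")   # block affine to ci-4459.1.0-n-7f88f0cba0
    # Assumed to be claimed by earlier sandboxes; only the block itself comes from the log.
    claimed = {ipaddress.ip_address(a) for a in ("192.168.92.129", "192.168.92.130", "192.168.92.131")}

    def assign_next(block: ipaddress.IPv4Network, used: set) -> ipaddress.IPv4Address:
        """Claim the lowest free host address in the block (no locking or datastore writes here)."""
        for addr in block.hosts():
            if addr not in used:
                used.add(addr)
                return addr
        raise RuntimeError(f"block {block} is exhausted")

    for _ in range(4):
        print(assign_next(BLOCK, claimed))   # .132, .133, .134, .135, matching the log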
Nov 5 23:45:03.753156 containerd[1909]: 2025-11-05 23:45:03.721 [INFO][5392] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.92.132/26] IPv6=[] ContainerID="0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134" HandleID="k8s-pod-network.0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--625lj-eth0" Nov 5 23:45:03.753903 containerd[1909]: 2025-11-05 23:45:03.723 [INFO][5381] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134" Namespace="calico-apiserver" Pod="calico-apiserver-c6d9d55f-625lj" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--625lj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--625lj-eth0", GenerateName:"calico-apiserver-c6d9d55f-", Namespace:"calico-apiserver", SelfLink:"", UID:"486c3bf3-5c4f-4ba5-b692-994994d35c51", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c6d9d55f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-7f88f0cba0", ContainerID:"", Pod:"calico-apiserver-c6d9d55f-625lj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliceac6dae664", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:03.753903 containerd[1909]: 2025-11-05 23:45:03.723 [INFO][5381] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.132/32] ContainerID="0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134" Namespace="calico-apiserver" Pod="calico-apiserver-c6d9d55f-625lj" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--625lj-eth0" Nov 5 23:45:03.753903 containerd[1909]: 2025-11-05 23:45:03.723 [INFO][5381] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliceac6dae664 ContainerID="0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134" Namespace="calico-apiserver" Pod="calico-apiserver-c6d9d55f-625lj" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--625lj-eth0" Nov 5 23:45:03.753903 containerd[1909]: 2025-11-05 23:45:03.728 [INFO][5381] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134" Namespace="calico-apiserver" Pod="calico-apiserver-c6d9d55f-625lj" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--625lj-eth0" Nov 5 23:45:03.753903 containerd[1909]: 2025-11-05 23:45:03.728 [INFO][5381] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134" Namespace="calico-apiserver" Pod="calico-apiserver-c6d9d55f-625lj" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--625lj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--625lj-eth0", GenerateName:"calico-apiserver-c6d9d55f-", Namespace:"calico-apiserver", SelfLink:"", UID:"486c3bf3-5c4f-4ba5-b692-994994d35c51", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c6d9d55f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-7f88f0cba0", ContainerID:"0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134", Pod:"calico-apiserver-c6d9d55f-625lj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliceac6dae664", MAC:"52:45:dd:23:53:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:03.753903 containerd[1909]: 2025-11-05 23:45:03.748 [INFO][5381] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134" Namespace="calico-apiserver" Pod="calico-apiserver-c6d9d55f-625lj" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--625lj-eth0" Nov 5 23:45:03.757317 containerd[1909]: time="2025-11-05T23:45:03.757127161Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 23:45:03.757486 containerd[1909]: time="2025-11-05T23:45:03.757445154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 23:45:03.758107 kubelet[3539]: E1105 23:45:03.757723 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 23:45:03.758107 kubelet[3539]: E1105 23:45:03.757780 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 23:45:03.758436 kubelet[3539]: E1105 23:45:03.757908 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dg9nk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-g8mz5_calico-system(ffd109d6-81d2-474d-9a1e-5493102832d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:03.761930 kubelet[3539]: E1105 23:45:03.761766 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed 
to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g8mz5" podUID="ffd109d6-81d2-474d-9a1e-5493102832d2" Nov 5 23:45:03.848885 kubelet[3539]: E1105 23:45:03.848832 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6856bc974c-ffszr" podUID="4aba500c-946b-4268-bb47-e30c6e97daba" Nov 5 23:45:03.851168 kubelet[3539]: E1105 23:45:03.851111 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g8mz5" podUID="ffd109d6-81d2-474d-9a1e-5493102832d2" Nov 5 23:45:03.908152 kubelet[3539]: I1105 23:45:03.908061 3539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vf7rk" podStartSLOduration=79.907951788 podStartE2EDuration="1m19.907951788s" podCreationTimestamp="2025-11-05 23:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 23:45:03.887191777 +0000 UTC m=+86.345546532" watchObservedRunningTime="2025-11-05 23:45:03.907951788 +0000 UTC m=+86.366306543" Nov 5 23:45:03.985344 containerd[1909]: time="2025-11-05T23:45:03.985192551Z" level=info msg="connecting to shim 0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134" address="unix:///run/containerd/s/3015e465f326aa864ac719734827e6cc130a4eb0f309a5058e7ed634203f9d64" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:45:04.007754 systemd[1]: Started cri-containerd-0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134.scope - libcontainer container 0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134. 
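Once a pull fails, kubelet does not retry immediately: the "Back-off pulling image" errors above (and their repeats a second later at 23:45:04 below) come from an exponential back-off whose delay grows on every failure. The schedule below is a sketch using the commonly cited kubelet defaults of a 10-second initial delay doubling up to a 300-second cap; those two numbers are assumptions, not values read from this log.

    # Assumed kubelet-style image-pull back-off: 10s initial delay, doubling, capped at 300s.
    INITIAL_S = 10
    CAP_S = 300

    def backoff_schedule(attempts: int) -> list:
        delays, delay = [], INITIAL_S
        for _ in range(attempts):
            delays.append(delay)
            delay = min(delay * 2, CAP_S)
        return delays

    print(backoff_schedule(8))   # [10, 20, 40, 80, 160, 300, 300, 300]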
Nov 5 23:45:04.237896 systemd-networkd[1479]: cali5a19c75dafa: Gained IPv6LL Nov 5 23:45:04.636866 containerd[1909]: time="2025-11-05T23:45:04.636811711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6d9d55f-kpv2j,Uid:a68e46b0-801c-4548-82e3-d2eb8a4bb9ed,Namespace:calico-apiserver,Attempt:0,}" Nov 5 23:45:04.637146 containerd[1909]: time="2025-11-05T23:45:04.636834728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4f9tn,Uid:4d92b57a-8924-49e1-8363-5659eabb3319,Namespace:kube-system,Attempt:0,}" Nov 5 23:45:04.637146 containerd[1909]: time="2025-11-05T23:45:04.636811991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fpftq,Uid:427c3b5f-d7ee-4425-8185-ed4318a97b1f,Namespace:calico-system,Attempt:0,}" Nov 5 23:45:04.858265 kubelet[3539]: E1105 23:45:04.858158 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g8mz5" podUID="ffd109d6-81d2-474d-9a1e-5493102832d2" Nov 5 23:45:04.859019 kubelet[3539]: E1105 23:45:04.858971 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6856bc974c-ffszr" podUID="4aba500c-946b-4268-bb47-e30c6e97daba" Nov 5 23:45:04.876768 systemd-networkd[1479]: calia6481553ef6: Gained IPv6LL Nov 5 23:45:05.516751 systemd-networkd[1479]: caliceac6dae664: Gained IPv6LL Nov 5 23:45:05.636586 containerd[1909]: time="2025-11-05T23:45:05.636535347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7769f64cbc-fbmnx,Uid:1177c853-8b74-4ffe-9eed-6c7edaf39ab6,Namespace:calico-system,Attempt:0,}" Nov 5 23:45:06.075784 containerd[1909]: time="2025-11-05T23:45:06.075673594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6d9d55f-625lj,Uid:486c3bf3-5c4f-4ba5-b692-994994d35c51,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0b6fa27d334ce8072a4c220c544dfac9c0b4c676e648ce8ed7c66ea4ed798134\"" Nov 5 23:45:06.077743 containerd[1909]: time="2025-11-05T23:45:06.077699861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:45:06.613372 systemd-networkd[1479]: calicc13008cc74: Link UP Nov 5 23:45:06.613881 systemd-networkd[1479]: calicc13008cc74: Gained carrier Nov 5 23:45:06.636517 containerd[1909]: 
time="2025-11-05T23:45:06.636348250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b68578ff-w27fl,Uid:9f589f34-97ee-4d82-b7d6-bdd22dcbc743,Namespace:calico-apiserver,Attempt:0,}" Nov 5 23:45:06.640047 containerd[1909]: 2025-11-05 23:45:06.551 [INFO][5463] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--4f9tn-eth0 coredns-674b8bbfcf- kube-system 4d92b57a-8924-49e1-8363-5659eabb3319 874 0 2025-11-05 23:43:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.1.0-n-7f88f0cba0 coredns-674b8bbfcf-4f9tn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicc13008cc74 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac" Namespace="kube-system" Pod="coredns-674b8bbfcf-4f9tn" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--4f9tn-" Nov 5 23:45:06.640047 containerd[1909]: 2025-11-05 23:45:06.552 [INFO][5463] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac" Namespace="kube-system" Pod="coredns-674b8bbfcf-4f9tn" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--4f9tn-eth0" Nov 5 23:45:06.640047 containerd[1909]: 2025-11-05 23:45:06.575 [INFO][5477] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac" HandleID="k8s-pod-network.52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--4f9tn-eth0" Nov 5 23:45:06.640047 containerd[1909]: 2025-11-05 23:45:06.576 [INFO][5477] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac" HandleID="k8s-pod-network.52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--4f9tn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aa3a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.1.0-n-7f88f0cba0", "pod":"coredns-674b8bbfcf-4f9tn", "timestamp":"2025-11-05 23:45:06.575925391 +0000 UTC"}, Hostname:"ci-4459.1.0-n-7f88f0cba0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:45:06.640047 containerd[1909]: 2025-11-05 23:45:06.576 [INFO][5477] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:45:06.640047 containerd[1909]: 2025-11-05 23:45:06.576 [INFO][5477] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 23:45:06.640047 containerd[1909]: 2025-11-05 23:45:06.576 [INFO][5477] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-7f88f0cba0' Nov 5 23:45:06.640047 containerd[1909]: 2025-11-05 23:45:06.582 [INFO][5477] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.640047 containerd[1909]: 2025-11-05 23:45:06.586 [INFO][5477] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.640047 containerd[1909]: 2025-11-05 23:45:06.589 [INFO][5477] ipam/ipam.go 511: Trying affinity for 192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.640047 containerd[1909]: 2025-11-05 23:45:06.590 [INFO][5477] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.640047 containerd[1909]: 2025-11-05 23:45:06.592 [INFO][5477] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.640047 containerd[1909]: 2025-11-05 23:45:06.592 [INFO][5477] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.640047 containerd[1909]: 2025-11-05 23:45:06.593 [INFO][5477] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac Nov 5 23:45:06.640047 containerd[1909]: 2025-11-05 23:45:06.598 [INFO][5477] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.640047 containerd[1909]: 2025-11-05 23:45:06.607 [INFO][5477] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.92.133/26] block=192.168.92.128/26 handle="k8s-pod-network.52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.640047 containerd[1909]: 2025-11-05 23:45:06.607 [INFO][5477] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.133/26] handle="k8s-pod-network.52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.640047 containerd[1909]: 2025-11-05 23:45:06.607 [INFO][5477] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 23:45:06.640047 containerd[1909]: 2025-11-05 23:45:06.607 [INFO][5477] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.92.133/26] IPv6=[] ContainerID="52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac" HandleID="k8s-pod-network.52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--4f9tn-eth0" Nov 5 23:45:06.641944 containerd[1909]: 2025-11-05 23:45:06.609 [INFO][5463] cni-plugin/k8s.go 418: Populated endpoint ContainerID="52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac" Namespace="kube-system" Pod="coredns-674b8bbfcf-4f9tn" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--4f9tn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--4f9tn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4d92b57a-8924-49e1-8363-5659eabb3319", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 43, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-7f88f0cba0", ContainerID:"", Pod:"coredns-674b8bbfcf-4f9tn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc13008cc74", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:06.641944 containerd[1909]: 2025-11-05 23:45:06.609 [INFO][5463] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.133/32] ContainerID="52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac" Namespace="kube-system" Pod="coredns-674b8bbfcf-4f9tn" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--4f9tn-eth0" Nov 5 23:45:06.641944 containerd[1909]: 2025-11-05 23:45:06.609 [INFO][5463] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc13008cc74 ContainerID="52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac" Namespace="kube-system" Pod="coredns-674b8bbfcf-4f9tn" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--4f9tn-eth0" Nov 5 23:45:06.641944 containerd[1909]: 2025-11-05 23:45:06.614 [INFO][5463] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-4f9tn" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--4f9tn-eth0" Nov 5 23:45:06.641944 containerd[1909]: 2025-11-05 23:45:06.614 [INFO][5463] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac" Namespace="kube-system" Pod="coredns-674b8bbfcf-4f9tn" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--4f9tn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--4f9tn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4d92b57a-8924-49e1-8363-5659eabb3319", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 43, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-7f88f0cba0", ContainerID:"52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac", Pod:"coredns-674b8bbfcf-4f9tn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc13008cc74", MAC:"b6:71:e6:83:bf:34", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:06.641944 containerd[1909]: 2025-11-05 23:45:06.637 [INFO][5463] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac" Namespace="kube-system" Pod="coredns-674b8bbfcf-4f9tn" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-coredns--674b8bbfcf--4f9tn-eth0" Nov 5 23:45:06.872391 systemd-networkd[1479]: cali92dabb72969: Link UP Nov 5 23:45:06.873647 systemd-networkd[1479]: cali92dabb72969: Gained carrier Nov 5 23:45:06.895518 containerd[1909]: 2025-11-05 23:45:06.749 [INFO][5493] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--kpv2j-eth0 calico-apiserver-c6d9d55f- calico-apiserver a68e46b0-801c-4548-82e3-d2eb8a4bb9ed 887 0 2025-11-05 23:43:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c6d9d55f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-n-7f88f0cba0 
calico-apiserver-c6d9d55f-kpv2j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali92dabb72969 [] [] }} ContainerID="551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224" Namespace="calico-apiserver" Pod="calico-apiserver-c6d9d55f-kpv2j" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--kpv2j-" Nov 5 23:45:06.895518 containerd[1909]: 2025-11-05 23:45:06.749 [INFO][5493] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224" Namespace="calico-apiserver" Pod="calico-apiserver-c6d9d55f-kpv2j" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--kpv2j-eth0" Nov 5 23:45:06.895518 containerd[1909]: 2025-11-05 23:45:06.767 [INFO][5506] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224" HandleID="k8s-pod-network.551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--kpv2j-eth0" Nov 5 23:45:06.895518 containerd[1909]: 2025-11-05 23:45:06.767 [INFO][5506] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224" HandleID="k8s-pod-network.551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--kpv2j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-n-7f88f0cba0", "pod":"calico-apiserver-c6d9d55f-kpv2j", "timestamp":"2025-11-05 23:45:06.767423351 +0000 UTC"}, Hostname:"ci-4459.1.0-n-7f88f0cba0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:45:06.895518 containerd[1909]: 2025-11-05 23:45:06.767 [INFO][5506] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:45:06.895518 containerd[1909]: 2025-11-05 23:45:06.767 [INFO][5506] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 23:45:06.895518 containerd[1909]: 2025-11-05 23:45:06.767 [INFO][5506] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-7f88f0cba0' Nov 5 23:45:06.895518 containerd[1909]: 2025-11-05 23:45:06.774 [INFO][5506] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.895518 containerd[1909]: 2025-11-05 23:45:06.779 [INFO][5506] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.895518 containerd[1909]: 2025-11-05 23:45:06.831 [INFO][5506] ipam/ipam.go 511: Trying affinity for 192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.895518 containerd[1909]: 2025-11-05 23:45:06.834 [INFO][5506] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.895518 containerd[1909]: 2025-11-05 23:45:06.837 [INFO][5506] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.895518 containerd[1909]: 2025-11-05 23:45:06.837 [INFO][5506] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.895518 containerd[1909]: 2025-11-05 23:45:06.839 [INFO][5506] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224 Nov 5 23:45:06.895518 containerd[1909]: 2025-11-05 23:45:06.844 [INFO][5506] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.895518 containerd[1909]: 2025-11-05 23:45:06.861 [INFO][5506] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.92.134/26] block=192.168.92.128/26 handle="k8s-pod-network.551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.895518 containerd[1909]: 2025-11-05 23:45:06.861 [INFO][5506] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.134/26] handle="k8s-pod-network.551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.895518 containerd[1909]: 2025-11-05 23:45:06.861 [INFO][5506] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 23:45:06.895518 containerd[1909]: 2025-11-05 23:45:06.861 [INFO][5506] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.92.134/26] IPv6=[] ContainerID="551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224" HandleID="k8s-pod-network.551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--kpv2j-eth0" Nov 5 23:45:06.896209 containerd[1909]: 2025-11-05 23:45:06.866 [INFO][5493] cni-plugin/k8s.go 418: Populated endpoint ContainerID="551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224" Namespace="calico-apiserver" Pod="calico-apiserver-c6d9d55f-kpv2j" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--kpv2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--kpv2j-eth0", GenerateName:"calico-apiserver-c6d9d55f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a68e46b0-801c-4548-82e3-d2eb8a4bb9ed", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c6d9d55f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-7f88f0cba0", ContainerID:"", Pod:"calico-apiserver-c6d9d55f-kpv2j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92dabb72969", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:06.896209 containerd[1909]: 2025-11-05 23:45:06.866 [INFO][5493] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.134/32] ContainerID="551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224" Namespace="calico-apiserver" Pod="calico-apiserver-c6d9d55f-kpv2j" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--kpv2j-eth0" Nov 5 23:45:06.896209 containerd[1909]: 2025-11-05 23:45:06.866 [INFO][5493] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92dabb72969 ContainerID="551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224" Namespace="calico-apiserver" Pod="calico-apiserver-c6d9d55f-kpv2j" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--kpv2j-eth0" Nov 5 23:45:06.896209 containerd[1909]: 2025-11-05 23:45:06.873 [INFO][5493] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224" Namespace="calico-apiserver" Pod="calico-apiserver-c6d9d55f-kpv2j" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--kpv2j-eth0" Nov 5 23:45:06.896209 containerd[1909]: 2025-11-05 23:45:06.875 [INFO][5493] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224" Namespace="calico-apiserver" Pod="calico-apiserver-c6d9d55f-kpv2j" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--kpv2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--kpv2j-eth0", GenerateName:"calico-apiserver-c6d9d55f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a68e46b0-801c-4548-82e3-d2eb8a4bb9ed", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c6d9d55f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-7f88f0cba0", ContainerID:"551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224", Pod:"calico-apiserver-c6d9d55f-kpv2j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92dabb72969", MAC:"6a:b8:57:b9:d2:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:06.896209 containerd[1909]: 2025-11-05 23:45:06.892 [INFO][5493] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224" Namespace="calico-apiserver" Pod="calico-apiserver-c6d9d55f-kpv2j" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--c6d9d55f--kpv2j-eth0" Nov 5 23:45:06.904128 containerd[1909]: time="2025-11-05T23:45:06.904082399Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:06.960933 systemd-networkd[1479]: calic8ffe2433f8: Link UP Nov 5 23:45:06.961802 systemd-networkd[1479]: calic8ffe2433f8: Gained carrier Nov 5 23:45:06.982045 containerd[1909]: 2025-11-05 23:45:06.802 [INFO][5512] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--7f88f0cba0-k8s-csi--node--driver--fpftq-eth0 csi-node-driver- calico-system 427c3b5f-d7ee-4425-8185-ed4318a97b1f 727 0 2025-11-05 23:44:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.1.0-n-7f88f0cba0 csi-node-driver-fpftq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic8ffe2433f8 [] [] }} ContainerID="0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74" Namespace="calico-system" Pod="csi-node-driver-fpftq" 
WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-csi--node--driver--fpftq-" Nov 5 23:45:06.982045 containerd[1909]: 2025-11-05 23:45:06.823 [INFO][5512] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74" Namespace="calico-system" Pod="csi-node-driver-fpftq" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-csi--node--driver--fpftq-eth0" Nov 5 23:45:06.982045 containerd[1909]: 2025-11-05 23:45:06.849 [INFO][5527] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74" HandleID="k8s-pod-network.0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-csi--node--driver--fpftq-eth0" Nov 5 23:45:06.982045 containerd[1909]: 2025-11-05 23:45:06.849 [INFO][5527] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74" HandleID="k8s-pod-network.0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-csi--node--driver--fpftq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c8fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-7f88f0cba0", "pod":"csi-node-driver-fpftq", "timestamp":"2025-11-05 23:45:06.849598809 +0000 UTC"}, Hostname:"ci-4459.1.0-n-7f88f0cba0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:45:06.982045 containerd[1909]: 2025-11-05 23:45:06.849 [INFO][5527] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:45:06.982045 containerd[1909]: 2025-11-05 23:45:06.861 [INFO][5527] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 23:45:06.982045 containerd[1909]: 2025-11-05 23:45:06.861 [INFO][5527] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-7f88f0cba0' Nov 5 23:45:06.982045 containerd[1909]: 2025-11-05 23:45:06.875 [INFO][5527] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.982045 containerd[1909]: 2025-11-05 23:45:06.883 [INFO][5527] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.982045 containerd[1909]: 2025-11-05 23:45:06.929 [INFO][5527] ipam/ipam.go 511: Trying affinity for 192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.982045 containerd[1909]: 2025-11-05 23:45:06.930 [INFO][5527] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.982045 containerd[1909]: 2025-11-05 23:45:06.932 [INFO][5527] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.982045 containerd[1909]: 2025-11-05 23:45:06.932 [INFO][5527] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.982045 containerd[1909]: 2025-11-05 23:45:06.934 [INFO][5527] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74 Nov 5 23:45:06.982045 containerd[1909]: 2025-11-05 23:45:06.943 [INFO][5527] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.982045 containerd[1909]: 2025-11-05 23:45:06.954 [INFO][5527] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.92.135/26] block=192.168.92.128/26 handle="k8s-pod-network.0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.982045 containerd[1909]: 2025-11-05 23:45:06.955 [INFO][5527] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.135/26] handle="k8s-pod-network.0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:06.982045 containerd[1909]: 2025-11-05 23:45:06.955 [INFO][5527] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 23:45:06.982045 containerd[1909]: 2025-11-05 23:45:06.955 [INFO][5527] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.92.135/26] IPv6=[] ContainerID="0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74" HandleID="k8s-pod-network.0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-csi--node--driver--fpftq-eth0" Nov 5 23:45:06.982482 containerd[1909]: 2025-11-05 23:45:06.957 [INFO][5512] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74" Namespace="calico-system" Pod="csi-node-driver-fpftq" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-csi--node--driver--fpftq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--7f88f0cba0-k8s-csi--node--driver--fpftq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"427c3b5f-d7ee-4425-8185-ed4318a97b1f", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 44, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-7f88f0cba0", ContainerID:"", Pod:"csi-node-driver-fpftq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic8ffe2433f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:06.982482 containerd[1909]: 2025-11-05 23:45:06.957 [INFO][5512] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.135/32] ContainerID="0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74" Namespace="calico-system" Pod="csi-node-driver-fpftq" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-csi--node--driver--fpftq-eth0" Nov 5 23:45:06.982482 containerd[1909]: 2025-11-05 23:45:06.957 [INFO][5512] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic8ffe2433f8 ContainerID="0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74" Namespace="calico-system" Pod="csi-node-driver-fpftq" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-csi--node--driver--fpftq-eth0" Nov 5 23:45:06.982482 containerd[1909]: 2025-11-05 23:45:06.961 [INFO][5512] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74" Namespace="calico-system" Pod="csi-node-driver-fpftq" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-csi--node--driver--fpftq-eth0" Nov 5 23:45:06.982482 containerd[1909]: 2025-11-05 23:45:06.963 [INFO][5512] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74" Namespace="calico-system" Pod="csi-node-driver-fpftq" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-csi--node--driver--fpftq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--7f88f0cba0-k8s-csi--node--driver--fpftq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"427c3b5f-d7ee-4425-8185-ed4318a97b1f", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 44, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-7f88f0cba0", ContainerID:"0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74", Pod:"csi-node-driver-fpftq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic8ffe2433f8", MAC:"6e:e8:e9:20:c0:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:06.982482 containerd[1909]: 2025-11-05 23:45:06.978 [INFO][5512] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74" Namespace="calico-system" Pod="csi-node-driver-fpftq" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-csi--node--driver--fpftq-eth0" Nov 5 23:45:08.061848 systemd-networkd[1479]: cali488923edab0: Link UP Nov 5 23:45:08.062246 systemd-networkd[1479]: cali488923edab0: Gained carrier Nov 5 23:45:08.077064 containerd[1909]: 2025-11-05 23:45:08.000 [INFO][5553] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--7f88f0cba0-k8s-calico--kube--controllers--7769f64cbc--fbmnx-eth0 calico-kube-controllers-7769f64cbc- calico-system 1177c853-8b74-4ffe-9eed-6c7edaf39ab6 878 0 2025-11-05 23:44:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7769f64cbc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.1.0-n-7f88f0cba0 calico-kube-controllers-7769f64cbc-fbmnx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali488923edab0 [] [] }} ContainerID="41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69" Namespace="calico-system" Pod="calico-kube-controllers-7769f64cbc-fbmnx" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--kube--controllers--7769f64cbc--fbmnx-" Nov 5 23:45:08.077064 containerd[1909]: 2025-11-05 23:45:08.000 [INFO][5553] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69" Namespace="calico-system" Pod="calico-kube-controllers-7769f64cbc-fbmnx" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--kube--controllers--7769f64cbc--fbmnx-eth0" Nov 5 23:45:08.077064 containerd[1909]: 2025-11-05 23:45:08.019 [INFO][5565] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69" HandleID="k8s-pod-network.41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-calico--kube--controllers--7769f64cbc--fbmnx-eth0" Nov 5 23:45:08.077064 containerd[1909]: 2025-11-05 23:45:08.019 [INFO][5565] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69" HandleID="k8s-pod-network.41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-calico--kube--controllers--7769f64cbc--fbmnx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-7f88f0cba0", "pod":"calico-kube-controllers-7769f64cbc-fbmnx", "timestamp":"2025-11-05 23:45:08.019429753 +0000 UTC"}, Hostname:"ci-4459.1.0-n-7f88f0cba0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:45:08.077064 containerd[1909]: 2025-11-05 23:45:08.020 [INFO][5565] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:45:08.077064 containerd[1909]: 2025-11-05 23:45:08.020 [INFO][5565] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 23:45:08.077064 containerd[1909]: 2025-11-05 23:45:08.020 [INFO][5565] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-7f88f0cba0' Nov 5 23:45:08.077064 containerd[1909]: 2025-11-05 23:45:08.026 [INFO][5565] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:08.077064 containerd[1909]: 2025-11-05 23:45:08.030 [INFO][5565] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:08.077064 containerd[1909]: 2025-11-05 23:45:08.034 [INFO][5565] ipam/ipam.go 511: Trying affinity for 192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:08.077064 containerd[1909]: 2025-11-05 23:45:08.035 [INFO][5565] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:08.077064 containerd[1909]: 2025-11-05 23:45:08.037 [INFO][5565] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:08.077064 containerd[1909]: 2025-11-05 23:45:08.037 [INFO][5565] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:08.077064 containerd[1909]: 2025-11-05 23:45:08.038 [INFO][5565] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69 Nov 5 23:45:08.077064 containerd[1909]: 2025-11-05 23:45:08.045 [INFO][5565] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:08.077064 containerd[1909]: 2025-11-05 23:45:08.055 [INFO][5565] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.92.136/26] block=192.168.92.128/26 handle="k8s-pod-network.41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:08.077064 containerd[1909]: 2025-11-05 23:45:08.056 [INFO][5565] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.136/26] handle="k8s-pod-network.41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:08.077064 containerd[1909]: 2025-11-05 23:45:08.056 [INFO][5565] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
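Both IPAM flows so far follow the same entry sequence (host-wide lock acquired, affinity confirmed, block written, IPs claimed, lock released), and a third follows below. When auditing a long journal like this one, extracting only the "Successfully claimed IPs" entries gives the per-handle address map at a glance. A small illustrative Go scanner is sketched below; the input file name "node.log" is an assumption (a saved journalctl dump of the lines shown here), not something defined by this log.

// claimedips.go - illustrative scanner (not a Calico tool) that pulls the
// "Successfully claimed IPs" entries out of a saved journal dump so the
// per-handle assignments are easy to read.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"regexp"
)

func main() {
	f, err := os.Open("node.log") // assumed dump of the journal shown above
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Matches e.g.: Successfully claimed IPs: [192.168.92.136/26] block=192.168.92.128/26 handle="k8s-pod-network...."
	re := regexp.MustCompile(`Successfully claimed IPs: \[([^\]]+)\] block=(\S+) handle="([^"]+)"`)

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines here are very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("claimed %-22s block %-18s handle %s\n", m[1], m[2], m[3])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}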
Nov 5 23:45:08.077064 containerd[1909]: 2025-11-05 23:45:08.056 [INFO][5565] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.92.136/26] IPv6=[] ContainerID="41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69" HandleID="k8s-pod-network.41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-calico--kube--controllers--7769f64cbc--fbmnx-eth0" Nov 5 23:45:08.078375 containerd[1909]: 2025-11-05 23:45:08.059 [INFO][5553] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69" Namespace="calico-system" Pod="calico-kube-controllers-7769f64cbc-fbmnx" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--kube--controllers--7769f64cbc--fbmnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--7f88f0cba0-k8s-calico--kube--controllers--7769f64cbc--fbmnx-eth0", GenerateName:"calico-kube-controllers-7769f64cbc-", Namespace:"calico-system", SelfLink:"", UID:"1177c853-8b74-4ffe-9eed-6c7edaf39ab6", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 44, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7769f64cbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-7f88f0cba0", ContainerID:"", Pod:"calico-kube-controllers-7769f64cbc-fbmnx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali488923edab0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:08.078375 containerd[1909]: 2025-11-05 23:45:08.059 [INFO][5553] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.136/32] ContainerID="41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69" Namespace="calico-system" Pod="calico-kube-controllers-7769f64cbc-fbmnx" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--kube--controllers--7769f64cbc--fbmnx-eth0" Nov 5 23:45:08.078375 containerd[1909]: 2025-11-05 23:45:08.059 [INFO][5553] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali488923edab0 ContainerID="41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69" Namespace="calico-system" Pod="calico-kube-controllers-7769f64cbc-fbmnx" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--kube--controllers--7769f64cbc--fbmnx-eth0" Nov 5 23:45:08.078375 containerd[1909]: 2025-11-05 23:45:08.062 [INFO][5553] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69" Namespace="calico-system" Pod="calico-kube-controllers-7769f64cbc-fbmnx" 
WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--kube--controllers--7769f64cbc--fbmnx-eth0" Nov 5 23:45:08.078375 containerd[1909]: 2025-11-05 23:45:08.063 [INFO][5553] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69" Namespace="calico-system" Pod="calico-kube-controllers-7769f64cbc-fbmnx" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--kube--controllers--7769f64cbc--fbmnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--7f88f0cba0-k8s-calico--kube--controllers--7769f64cbc--fbmnx-eth0", GenerateName:"calico-kube-controllers-7769f64cbc-", Namespace:"calico-system", SelfLink:"", UID:"1177c853-8b74-4ffe-9eed-6c7edaf39ab6", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 44, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7769f64cbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-7f88f0cba0", ContainerID:"41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69", Pod:"calico-kube-controllers-7769f64cbc-fbmnx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali488923edab0", MAC:"32:82:93:28:39:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:08.078375 containerd[1909]: 2025-11-05 23:45:08.073 [INFO][5553] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69" Namespace="calico-system" Pod="calico-kube-controllers-7769f64cbc-fbmnx" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--kube--controllers--7769f64cbc--fbmnx-eth0" Nov 5 23:45:08.140749 systemd-networkd[1479]: calic8ffe2433f8: Gained IPv6LL Nov 5 23:45:08.425025 containerd[1909]: time="2025-11-05T23:45:08.424458049Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:45:08.425025 containerd[1909]: time="2025-11-05T23:45:08.424701984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:45:08.425364 kubelet[3539]: E1105 23:45:08.425293 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:08.425364 kubelet[3539]: E1105 23:45:08.425365 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:08.426875 kubelet[3539]: E1105 23:45:08.425526 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44jk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c6d9d55f-625lj_calico-apiserver(486c3bf3-5c4f-4ba5-b692-994994d35c51): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:08.426875 kubelet[3539]: E1105 23:45:08.426728 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-625lj" podUID="486c3bf3-5c4f-4ba5-b692-994994d35c51" Nov 5 23:45:08.524783 
systemd-networkd[1479]: cali92dabb72969: Gained IPv6LL Nov 5 23:45:08.588726 systemd-networkd[1479]: calicc13008cc74: Gained IPv6LL Nov 5 23:45:08.723883 systemd-networkd[1479]: cali22ededda08d: Link UP Nov 5 23:45:08.724556 systemd-networkd[1479]: cali22ededda08d: Gained carrier Nov 5 23:45:08.749354 containerd[1909]: 2025-11-05 23:45:08.654 [INFO][5587] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--66b68578ff--w27fl-eth0 calico-apiserver-66b68578ff- calico-apiserver 9f589f34-97ee-4d82-b7d6-bdd22dcbc743 876 0 2025-11-05 23:43:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66b68578ff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-n-7f88f0cba0 calico-apiserver-66b68578ff-w27fl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali22ededda08d [] [] }} ContainerID="09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88" Namespace="calico-apiserver" Pod="calico-apiserver-66b68578ff-w27fl" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--66b68578ff--w27fl-" Nov 5 23:45:08.749354 containerd[1909]: 2025-11-05 23:45:08.654 [INFO][5587] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88" Namespace="calico-apiserver" Pod="calico-apiserver-66b68578ff-w27fl" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--66b68578ff--w27fl-eth0" Nov 5 23:45:08.749354 containerd[1909]: 2025-11-05 23:45:08.682 [INFO][5600] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88" HandleID="k8s-pod-network.09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--66b68578ff--w27fl-eth0" Nov 5 23:45:08.749354 containerd[1909]: 2025-11-05 23:45:08.683 [INFO][5600] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88" HandleID="k8s-pod-network.09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--66b68578ff--w27fl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b830), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-n-7f88f0cba0", "pod":"calico-apiserver-66b68578ff-w27fl", "timestamp":"2025-11-05 23:45:08.682376386 +0000 UTC"}, Hostname:"ci-4459.1.0-n-7f88f0cba0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:45:08.749354 containerd[1909]: 2025-11-05 23:45:08.683 [INFO][5600] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:45:08.749354 containerd[1909]: 2025-11-05 23:45:08.683 [INFO][5600] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
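The NotFound error for ghcr.io/flatcar/calico/apiserver:v3.30.4 recorded above (and repeated below for the csi, kube-controllers and node-driver-registrar images) is raised by containerd's resolver rather than by the kubelet itself, so it can be reproduced outside the kubelet with the containerd Go client. The sketch below is hedged: it assumes the containerd 1.x module path and the default socket path; the k8s.io namespace is taken from the "namespace=k8s.io" shim entries that appear later in this journal.

// pullcheck.go - a hedged sketch that reproduces the failing pull outside the
// kubelet via the containerd Go client (module path assumes containerd 1.x;
// in containerd 2.x the client lives under github.com/containerd/containerd/v2/client).
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images on this node live in the k8s.io namespace,
	// matching the "namespace=k8s.io" shim connections logged below.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ref := "ghcr.io/flatcar/calico/apiserver:v3.30.4" // the tag kubelet cannot resolve
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		// Expect the same "not found" resolution error the journal above records.
		log.Fatalf("pull %s: %v", ref, err)
	}
	fmt.Println("pulled", img.Name())
}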
Nov 5 23:45:08.749354 containerd[1909]: 2025-11-05 23:45:08.683 [INFO][5600] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-7f88f0cba0' Nov 5 23:45:08.749354 containerd[1909]: 2025-11-05 23:45:08.689 [INFO][5600] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:08.749354 containerd[1909]: 2025-11-05 23:45:08.692 [INFO][5600] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:08.749354 containerd[1909]: 2025-11-05 23:45:08.696 [INFO][5600] ipam/ipam.go 511: Trying affinity for 192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:08.749354 containerd[1909]: 2025-11-05 23:45:08.697 [INFO][5600] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:08.749354 containerd[1909]: 2025-11-05 23:45:08.699 [INFO][5600] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:08.749354 containerd[1909]: 2025-11-05 23:45:08.699 [INFO][5600] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:08.749354 containerd[1909]: 2025-11-05 23:45:08.701 [INFO][5600] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88 Nov 5 23:45:08.749354 containerd[1909]: 2025-11-05 23:45:08.707 [INFO][5600] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:08.749354 containerd[1909]: 2025-11-05 23:45:08.718 [INFO][5600] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.92.137/26] block=192.168.92.128/26 handle="k8s-pod-network.09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:08.749354 containerd[1909]: 2025-11-05 23:45:08.718 [INFO][5600] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.137/26] handle="k8s-pod-network.09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88" host="ci-4459.1.0-n-7f88f0cba0" Nov 5 23:45:08.749354 containerd[1909]: 2025-11-05 23:45:08.718 [INFO][5600] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
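The repeated "fetch failed after status: 404 Not Found" host=ghcr.io entries further down suggest the tag itself is absent from the registry rather than an auth or network problem on the node. One way to confirm that from any machine is to query the registry's OCI distribution API directly; the anonymous token endpoint and Accept headers in the sketch below are assumptions about GHCR's behaviour for public images, not anything recorded in this journal.

// tagcheck.go - a hedged sketch that asks ghcr.io whether the tag kubelet keeps
// failing on exists. Token endpoint and Accept headers are assumed GHCR behaviour
// for anonymous pulls of public images.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/apiserver", "v3.30.4"

	// 1. Anonymous pull token (assumed GHCR token endpoint).
	resp, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		log.Fatal(err)
	}

	// 2. HEAD the manifest: 200 means the tag exists, 404 matches the journal above.
	req, err := http.NewRequest(http.MethodHead, "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.list.v2+json")

	res, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer res.Body.Close()
	fmt.Printf("%s:%s -> HTTP %d\n", repo, tag, res.StatusCode)
}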
Nov 5 23:45:08.749354 containerd[1909]: 2025-11-05 23:45:08.718 [INFO][5600] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.92.137/26] IPv6=[] ContainerID="09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88" HandleID="k8s-pod-network.09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88" Workload="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--66b68578ff--w27fl-eth0" Nov 5 23:45:08.749842 containerd[1909]: 2025-11-05 23:45:08.720 [INFO][5587] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88" Namespace="calico-apiserver" Pod="calico-apiserver-66b68578ff-w27fl" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--66b68578ff--w27fl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--66b68578ff--w27fl-eth0", GenerateName:"calico-apiserver-66b68578ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"9f589f34-97ee-4d82-b7d6-bdd22dcbc743", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b68578ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-7f88f0cba0", ContainerID:"", Pod:"calico-apiserver-66b68578ff-w27fl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali22ededda08d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:08.749842 containerd[1909]: 2025-11-05 23:45:08.720 [INFO][5587] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.137/32] ContainerID="09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88" Namespace="calico-apiserver" Pod="calico-apiserver-66b68578ff-w27fl" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--66b68578ff--w27fl-eth0" Nov 5 23:45:08.749842 containerd[1909]: 2025-11-05 23:45:08.720 [INFO][5587] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22ededda08d ContainerID="09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88" Namespace="calico-apiserver" Pod="calico-apiserver-66b68578ff-w27fl" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--66b68578ff--w27fl-eth0" Nov 5 23:45:08.749842 containerd[1909]: 2025-11-05 23:45:08.725 [INFO][5587] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88" Namespace="calico-apiserver" Pod="calico-apiserver-66b68578ff-w27fl" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--66b68578ff--w27fl-eth0" Nov 5 23:45:08.749842 containerd[1909]: 2025-11-05 23:45:08.731 [INFO][5587] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88" Namespace="calico-apiserver" Pod="calico-apiserver-66b68578ff-w27fl" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--66b68578ff--w27fl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--66b68578ff--w27fl-eth0", GenerateName:"calico-apiserver-66b68578ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"9f589f34-97ee-4d82-b7d6-bdd22dcbc743", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b68578ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-7f88f0cba0", ContainerID:"09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88", Pod:"calico-apiserver-66b68578ff-w27fl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali22ededda08d", MAC:"42:c7:86:bb:20:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:08.749842 containerd[1909]: 2025-11-05 23:45:08.745 [INFO][5587] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88" Namespace="calico-apiserver" Pod="calico-apiserver-66b68578ff-w27fl" WorkloadEndpoint="ci--4459.1.0--n--7f88f0cba0-k8s-calico--apiserver--66b68578ff--w27fl-eth0" Nov 5 23:45:08.866322 kubelet[3539]: E1105 23:45:08.866250 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-625lj" podUID="486c3bf3-5c4f-4ba5-b692-994994d35c51" Nov 5 23:45:09.484753 systemd-networkd[1479]: cali488923edab0: Gained IPv6LL Nov 5 23:45:09.842117 containerd[1909]: time="2025-11-05T23:45:09.842067886Z" level=info msg="connecting to shim 551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224" address="unix:///run/containerd/s/a206d1cafe7acd11d20ac6c170698878950ab62552e8dcbc438a45dd32102497" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:45:09.867744 systemd[1]: Started cri-containerd-551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224.scope - libcontainer container 
551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224. Nov 5 23:45:09.935599 containerd[1909]: time="2025-11-05T23:45:09.935531063Z" level=info msg="connecting to shim 0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74" address="unix:///run/containerd/s/1b60079ed430cef2365f2c4b769e41a0f0c4f9a4b272ec701016f7334dcbe5a3" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:45:09.955719 systemd[1]: Started cri-containerd-0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74.scope - libcontainer container 0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74. Nov 5 23:45:09.985580 containerd[1909]: time="2025-11-05T23:45:09.985439237Z" level=info msg="connecting to shim 52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac" address="unix:///run/containerd/s/2a3abbb5a5556d5c05fb4781cb10ef02a3b11b24eaaa3200254c55ec2f3e49fb" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:45:10.002755 systemd[1]: Started cri-containerd-52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac.scope - libcontainer container 52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac. Nov 5 23:45:10.141199 containerd[1909]: time="2025-11-05T23:45:10.141030697Z" level=info msg="connecting to shim 41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69" address="unix:///run/containerd/s/a9307bab17aac73983afbf5a7f329daee9f9dca02c1632d43ab88712e8549814" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:45:10.159800 systemd[1]: Started cri-containerd-41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69.scope - libcontainer container 41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69. Nov 5 23:45:10.183174 containerd[1909]: time="2025-11-05T23:45:10.183131786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6d9d55f-kpv2j,Uid:a68e46b0-801c-4548-82e3-d2eb8a4bb9ed,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"551335f486cf7ffc7b953c3fc07857afdea7ba142a104ebd082dd691ffe7a224\"" Nov 5 23:45:10.186947 containerd[1909]: time="2025-11-05T23:45:10.186845167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:45:10.189814 systemd-networkd[1479]: cali22ededda08d: Gained IPv6LL Nov 5 23:45:10.227337 containerd[1909]: time="2025-11-05T23:45:10.227226022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fpftq,Uid:427c3b5f-d7ee-4425-8185-ed4318a97b1f,Namespace:calico-system,Attempt:0,} returns sandbox id \"0c4addaa5e22c3684b6284f8660d26e9eeb883aedc45f3714eec0086a28a1a74\"" Nov 5 23:45:10.275099 containerd[1909]: time="2025-11-05T23:45:10.275055983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4f9tn,Uid:4d92b57a-8924-49e1-8363-5659eabb3319,Namespace:kube-system,Attempt:0,} returns sandbox id \"52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac\"" Nov 5 23:45:10.285191 containerd[1909]: time="2025-11-05T23:45:10.285149030Z" level=info msg="connecting to shim 09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88" address="unix:///run/containerd/s/159b903173d5bf462b4aef354fe459b9db3c1d7fdbb0158035d6d10f390ec329" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:45:10.324102 containerd[1909]: time="2025-11-05T23:45:10.323935566Z" level=info msg="CreateContainer within sandbox \"52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 23:45:10.337736 systemd[1]: Started 
cri-containerd-09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88.scope - libcontainer container 09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88. Nov 5 23:45:10.369389 containerd[1909]: time="2025-11-05T23:45:10.369252309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7769f64cbc-fbmnx,Uid:1177c853-8b74-4ffe-9eed-6c7edaf39ab6,Namespace:calico-system,Attempt:0,} returns sandbox id \"41fdb03db927db3eb8f001003f5e4cd122b2d5ceb12777896328f155c18f4f69\"" Nov 5 23:45:10.526901 containerd[1909]: time="2025-11-05T23:45:10.526372407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b68578ff-w27fl,Uid:9f589f34-97ee-4d82-b7d6-bdd22dcbc743,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"09ff1acd869adba380b3beded9b4aca35a9faaacd34df651c036b06b42edcc88\"" Nov 5 23:45:10.562748 containerd[1909]: time="2025-11-05T23:45:10.562538154Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:10.577592 containerd[1909]: time="2025-11-05T23:45:10.577539305Z" level=info msg="Container 428a41ec667fa1d719a37d4b9169cc7dfc268842d706c4dcc756ea8c1de0d286: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:45:10.626612 containerd[1909]: time="2025-11-05T23:45:10.626475274Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:45:10.626904 containerd[1909]: time="2025-11-05T23:45:10.626842541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:45:10.627182 kubelet[3539]: E1105 23:45:10.627089 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:10.627182 kubelet[3539]: E1105 23:45:10.627160 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:10.627884 kubelet[3539]: E1105 23:45:10.627379 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7cjq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c6d9d55f-kpv2j_calico-apiserver(a68e46b0-801c-4548-82e3-d2eb8a4bb9ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:10.628033 containerd[1909]: time="2025-11-05T23:45:10.627747816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 23:45:10.628515 kubelet[3539]: E1105 23:45:10.628484 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-kpv2j" podUID="a68e46b0-801c-4548-82e3-d2eb8a4bb9ed" Nov 5 23:45:10.774793 containerd[1909]: time="2025-11-05T23:45:10.774750137Z" level=info msg="CreateContainer within sandbox \"52e168a91bf32c4a17c917f9f42ba6074f3cf15bbe1e175cd3e9f40f06752cac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"428a41ec667fa1d719a37d4b9169cc7dfc268842d706c4dcc756ea8c1de0d286\"" Nov 5 23:45:10.775453 containerd[1909]: time="2025-11-05T23:45:10.775421132Z" level=info msg="StartContainer for \"428a41ec667fa1d719a37d4b9169cc7dfc268842d706c4dcc756ea8c1de0d286\"" Nov 5 23:45:10.776451 containerd[1909]: time="2025-11-05T23:45:10.776416322Z" level=info msg="connecting to shim 
428a41ec667fa1d719a37d4b9169cc7dfc268842d706c4dcc756ea8c1de0d286" address="unix:///run/containerd/s/2a3abbb5a5556d5c05fb4781cb10ef02a3b11b24eaaa3200254c55ec2f3e49fb" protocol=ttrpc version=3 Nov 5 23:45:10.793794 systemd[1]: Started cri-containerd-428a41ec667fa1d719a37d4b9169cc7dfc268842d706c4dcc756ea8c1de0d286.scope - libcontainer container 428a41ec667fa1d719a37d4b9169cc7dfc268842d706c4dcc756ea8c1de0d286. Nov 5 23:45:10.827734 containerd[1909]: time="2025-11-05T23:45:10.827621949Z" level=info msg="StartContainer for \"428a41ec667fa1d719a37d4b9169cc7dfc268842d706c4dcc756ea8c1de0d286\" returns successfully" Nov 5 23:45:10.873953 kubelet[3539]: E1105 23:45:10.873909 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-kpv2j" podUID="a68e46b0-801c-4548-82e3-d2eb8a4bb9ed" Nov 5 23:45:10.909284 kubelet[3539]: I1105 23:45:10.909221 3539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4f9tn" podStartSLOduration=86.909204619 podStartE2EDuration="1m26.909204619s" podCreationTimestamp="2025-11-05 23:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 23:45:10.908286632 +0000 UTC m=+93.366641531" watchObservedRunningTime="2025-11-05 23:45:10.909204619 +0000 UTC m=+93.367559462" Nov 5 23:45:11.043480 containerd[1909]: time="2025-11-05T23:45:11.043429782Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:11.046954 containerd[1909]: time="2025-11-05T23:45:11.046875779Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 23:45:11.046954 containerd[1909]: time="2025-11-05T23:45:11.046920316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 23:45:11.047896 kubelet[3539]: E1105 23:45:11.047333 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 23:45:11.048008 kubelet[3539]: E1105 23:45:11.047972 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 23:45:11.048590 kubelet[3539]: E1105 23:45:11.048198 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqrt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fpftq_calico-system(427c3b5f-d7ee-4425-8185-ed4318a97b1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:11.049506 containerd[1909]: time="2025-11-05T23:45:11.049302394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 23:45:11.342106 containerd[1909]: time="2025-11-05T23:45:11.341546297Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:11.346919 containerd[1909]: time="2025-11-05T23:45:11.346811451Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 23:45:11.346919 containerd[1909]: time="2025-11-05T23:45:11.346855900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 23:45:11.347265 kubelet[3539]: E1105 23:45:11.347222 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 23:45:11.347320 kubelet[3539]: E1105 23:45:11.347284 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 23:45:11.347498 kubelet[3539]: E1105 23:45:11.347462 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hngt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7769f64cbc-fbmnx_calico-system(1177c853-8b74-4ffe-9eed-6c7edaf39ab6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:11.347920 containerd[1909]: time="2025-11-05T23:45:11.347885938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:45:11.349018 kubelet[3539]: E1105 23:45:11.348985 3539 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7769f64cbc-fbmnx" podUID="1177c853-8b74-4ffe-9eed-6c7edaf39ab6" Nov 5 23:45:11.623708 containerd[1909]: time="2025-11-05T23:45:11.623547604Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:11.627499 containerd[1909]: time="2025-11-05T23:45:11.627401485Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:45:11.627499 containerd[1909]: time="2025-11-05T23:45:11.627408061Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:45:11.627765 kubelet[3539]: E1105 23:45:11.627719 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:11.628063 kubelet[3539]: E1105 23:45:11.627771 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:11.628063 kubelet[3539]: E1105 23:45:11.627983 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hzvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66b68578ff-w27fl_calico-apiserver(9f589f34-97ee-4d82-b7d6-bdd22dcbc743): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:11.628756 containerd[1909]: time="2025-11-05T23:45:11.628729188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 23:45:11.629119 kubelet[3539]: E1105 23:45:11.629051 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b68578ff-w27fl" podUID="9f589f34-97ee-4d82-b7d6-bdd22dcbc743" Nov 5 23:45:11.870437 containerd[1909]: time="2025-11-05T23:45:11.870390753Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:11.873455 containerd[1909]: time="2025-11-05T23:45:11.873418201Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 23:45:11.873510 containerd[1909]: time="2025-11-05T23:45:11.873502428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 23:45:11.873835 kubelet[3539]: E1105 23:45:11.873685 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 23:45:11.873835 kubelet[3539]: E1105 23:45:11.873753 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 23:45:11.874410 kubelet[3539]: E1105 23:45:11.874367 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqrt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fpftq_calico-system(427c3b5f-d7ee-4425-8185-ed4318a97b1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:11.875680 kubelet[3539]: E1105 23:45:11.875629 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fpftq" podUID="427c3b5f-d7ee-4425-8185-ed4318a97b1f" Nov 5 23:45:11.881844 kubelet[3539]: E1105 23:45:11.881200 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b68578ff-w27fl" podUID="9f589f34-97ee-4d82-b7d6-bdd22dcbc743" Nov 5 23:45:11.881844 kubelet[3539]: E1105 23:45:11.881371 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7769f64cbc-fbmnx" podUID="1177c853-8b74-4ffe-9eed-6c7edaf39ab6" Nov 5 23:45:11.882440 kubelet[3539]: E1105 23:45:11.882332 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-kpv2j" podUID="a68e46b0-801c-4548-82e3-d2eb8a4bb9ed" Nov 5 23:45:11.883542 kubelet[3539]: E1105 23:45:11.883397 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fpftq" podUID="427c3b5f-d7ee-4425-8185-ed4318a97b1f" Nov 5 23:45:17.636290 containerd[1909]: time="2025-11-05T23:45:17.636080288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 23:45:17.870879 containerd[1909]: time="2025-11-05T23:45:17.870801915Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:17.874375 containerd[1909]: time="2025-11-05T23:45:17.874332781Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: 
not found" Nov 5 23:45:17.874526 containerd[1909]: time="2025-11-05T23:45:17.874426992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 23:45:17.875228 kubelet[3539]: E1105 23:45:17.874704 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 23:45:17.875228 kubelet[3539]: E1105 23:45:17.874762 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 23:45:17.875228 kubelet[3539]: E1105 23:45:17.874878 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dg9nk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-g8mz5_calico-system(ffd109d6-81d2-474d-9a1e-5493102832d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:17.876619 kubelet[3539]: E1105 23:45:17.876327 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g8mz5" podUID="ffd109d6-81d2-474d-9a1e-5493102832d2" Nov 5 23:45:18.637388 containerd[1909]: time="2025-11-05T23:45:18.636437051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 23:45:18.879345 containerd[1909]: time="2025-11-05T23:45:18.879293312Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:18.883026 containerd[1909]: time="2025-11-05T23:45:18.882978726Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 23:45:18.883101 containerd[1909]: time="2025-11-05T23:45:18.883064208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 23:45:18.883278 kubelet[3539]: E1105 23:45:18.883236 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 23:45:18.883553 kubelet[3539]: E1105 23:45:18.883289 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 23:45:18.883553 kubelet[3539]: E1105 23:45:18.883393 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:034e9f0b530a4b44bd7c28fc81170f33,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pgq6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6856bc974c-ffszr_calico-system(4aba500c-946b-4268-bb47-e30c6e97daba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:18.885594 containerd[1909]: time="2025-11-05T23:45:18.885445914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 23:45:19.161290 containerd[1909]: time="2025-11-05T23:45:19.161232599Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:19.164837 containerd[1909]: time="2025-11-05T23:45:19.164793345Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 23:45:19.164896 containerd[1909]: time="2025-11-05T23:45:19.164877188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 23:45:19.165118 kubelet[3539]: E1105 23:45:19.165038 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 23:45:19.165118 kubelet[3539]: E1105 23:45:19.165096 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 23:45:19.165526 kubelet[3539]: E1105 23:45:19.165471 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgq6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6856bc974c-ffszr_calico-system(4aba500c-946b-4268-bb47-e30c6e97daba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:19.166849 kubelet[3539]: E1105 23:45:19.166809 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6856bc974c-ffszr" podUID="4aba500c-946b-4268-bb47-e30c6e97daba" Nov 5 23:45:21.874011 containerd[1909]: time="2025-11-05T23:45:21.873972103Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c5635b94672f6f4fddb359c96d8cdceb0dbf0f883f7e730e1dad340e61fe124\" id:\"0a149cc56a9449664c91752f97482735b3f80e009656da4a5489a87c028e8eac\" pid:5911 
exited_at:{seconds:1762386321 nanos:873249627}" Nov 5 23:45:21.953308 containerd[1909]: time="2025-11-05T23:45:21.953239460Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c5635b94672f6f4fddb359c96d8cdceb0dbf0f883f7e730e1dad340e61fe124\" id:\"35d517564c37412c28c572f1be2e67f1a707a1e3d55903f9a1d63d005939251e\" pid:5934 exited_at:{seconds:1762386321 nanos:952870970}" Nov 5 23:45:22.637979 containerd[1909]: time="2025-11-05T23:45:22.637184859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 23:45:22.933631 containerd[1909]: time="2025-11-05T23:45:22.933242018Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:22.938916 containerd[1909]: time="2025-11-05T23:45:22.938817349Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 23:45:22.938916 containerd[1909]: time="2025-11-05T23:45:22.938863470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 23:45:22.939189 kubelet[3539]: E1105 23:45:22.939143 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 23:45:22.939411 kubelet[3539]: E1105 23:45:22.939196 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 23:45:22.939912 kubelet[3539]: E1105 23:45:22.939320 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqrt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fpftq_calico-system(427c3b5f-d7ee-4425-8185-ed4318a97b1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:22.942660 containerd[1909]: time="2025-11-05T23:45:22.942566132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 23:45:23.227316 containerd[1909]: time="2025-11-05T23:45:23.227180998Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:23.231344 containerd[1909]: time="2025-11-05T23:45:23.231302240Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 23:45:23.231463 containerd[1909]: time="2025-11-05T23:45:23.231391859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 23:45:23.231565 kubelet[3539]: E1105 23:45:23.231521 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 23:45:23.231738 kubelet[3539]: E1105 23:45:23.231588 3539 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 23:45:23.231738 kubelet[3539]: E1105 23:45:23.231698 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqrt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fpftq_calico-system(427c3b5f-d7ee-4425-8185-ed4318a97b1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:23.232976 kubelet[3539]: E1105 23:45:23.232927 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-fpftq" podUID="427c3b5f-d7ee-4425-8185-ed4318a97b1f" Nov 5 23:45:23.637964 containerd[1909]: time="2025-11-05T23:45:23.637920625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:45:23.885829 containerd[1909]: time="2025-11-05T23:45:23.885777392Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:23.889239 containerd[1909]: time="2025-11-05T23:45:23.889144733Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:45:23.889239 containerd[1909]: time="2025-11-05T23:45:23.889221543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:45:23.889610 kubelet[3539]: E1105 23:45:23.889376 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:23.889610 kubelet[3539]: E1105 23:45:23.889427 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:23.889877 kubelet[3539]: E1105 23:45:23.889829 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7cjq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c6d9d55f-kpv2j_calico-apiserver(a68e46b0-801c-4548-82e3-d2eb8a4bb9ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:23.890216 containerd[1909]: time="2025-11-05T23:45:23.890190922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:45:23.891132 kubelet[3539]: E1105 23:45:23.891074 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-kpv2j" podUID="a68e46b0-801c-4548-82e3-d2eb8a4bb9ed" Nov 5 23:45:24.136479 containerd[1909]: time="2025-11-05T23:45:24.136429877Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:24.139709 containerd[1909]: time="2025-11-05T23:45:24.139612005Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:45:24.139709 containerd[1909]: time="2025-11-05T23:45:24.139689847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:45:24.139994 kubelet[3539]: E1105 23:45:24.139932 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:24.140353 kubelet[3539]: E1105 23:45:24.140080 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:24.140353 kubelet[3539]: E1105 23:45:24.140230 3539 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44jk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c6d9d55f-625lj_calico-apiserver(486c3bf3-5c4f-4ba5-b692-994994d35c51): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:24.141459 kubelet[3539]: E1105 23:45:24.141422 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-625lj" podUID="486c3bf3-5c4f-4ba5-b692-994994d35c51" Nov 5 23:45:25.638053 containerd[1909]: time="2025-11-05T23:45:25.637926300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:45:26.059256 containerd[1909]: time="2025-11-05T23:45:26.058992169Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:26.062025 containerd[1909]: time="2025-11-05T23:45:26.061917763Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:45:26.062025 containerd[1909]: time="2025-11-05T23:45:26.061963228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:45:26.062558 kubelet[3539]: E1105 23:45:26.062501 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:26.063023 kubelet[3539]: E1105 23:45:26.062618 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:26.063461 kubelet[3539]: E1105 23:45:26.063343 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hzvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66b68578ff-w27fl_calico-apiserver(9f589f34-97ee-4d82-b7d6-bdd22dcbc743): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:26.064647 kubelet[3539]: E1105 23:45:26.064546 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b68578ff-w27fl" podUID="9f589f34-97ee-4d82-b7d6-bdd22dcbc743" Nov 5 23:45:26.638204 containerd[1909]: time="2025-11-05T23:45:26.638150575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 23:45:26.917561 containerd[1909]: time="2025-11-05T23:45:26.917312842Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:26.919961 containerd[1909]: time="2025-11-05T23:45:26.919918707Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 23:45:26.920078 containerd[1909]: time="2025-11-05T23:45:26.919929603Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 23:45:26.920234 kubelet[3539]: E1105 23:45:26.920201 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 23:45:26.920335 kubelet[3539]: E1105 23:45:26.920320 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 23:45:26.920891 kubelet[3539]: E1105 23:45:26.920520 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hngt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7769f64cbc-fbmnx_calico-system(1177c853-8b74-4ffe-9eed-6c7edaf39ab6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:26.922612 kubelet[3539]: E1105 23:45:26.922552 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7769f64cbc-fbmnx" podUID="1177c853-8b74-4ffe-9eed-6c7edaf39ab6" Nov 5 23:45:31.638107 kubelet[3539]: E1105 23:45:31.637985 3539 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g8mz5" podUID="ffd109d6-81d2-474d-9a1e-5493102832d2" Nov 5 23:45:33.637697 kubelet[3539]: E1105 23:45:33.637384 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6856bc974c-ffszr" podUID="4aba500c-946b-4268-bb47-e30c6e97daba" Nov 5 23:45:33.637697 kubelet[3539]: E1105 23:45:33.637509 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fpftq" podUID="427c3b5f-d7ee-4425-8185-ed4318a97b1f" Nov 5 23:45:37.638387 kubelet[3539]: E1105 23:45:37.638341 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-kpv2j" podUID="a68e46b0-801c-4548-82e3-d2eb8a4bb9ed" Nov 5 23:45:37.639753 kubelet[3539]: E1105 23:45:37.638501 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b68578ff-w27fl" podUID="9f589f34-97ee-4d82-b7d6-bdd22dcbc743" Nov 5 23:45:38.637050 kubelet[3539]: E1105 23:45:38.636466 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-625lj" podUID="486c3bf3-5c4f-4ba5-b692-994994d35c51" Nov 5 23:45:40.636206 kubelet[3539]: E1105 23:45:40.636151 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7769f64cbc-fbmnx" podUID="1177c853-8b74-4ffe-9eed-6c7edaf39ab6" Nov 5 23:45:42.637256 containerd[1909]: time="2025-11-05T23:45:42.636996591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 23:45:42.892194 containerd[1909]: time="2025-11-05T23:45:42.891590142Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:42.894514 containerd[1909]: time="2025-11-05T23:45:42.894471955Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 23:45:42.894618 containerd[1909]: time="2025-11-05T23:45:42.894560789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 23:45:42.894790 kubelet[3539]: E1105 23:45:42.894753 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 23:45:42.895697 kubelet[3539]: E1105 23:45:42.895656 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 23:45:42.896083 kubelet[3539]: E1105 23:45:42.895817 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dg9nk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-g8mz5_calico-system(ffd109d6-81d2-474d-9a1e-5493102832d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:42.897013 kubelet[3539]: E1105 23:45:42.896987 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g8mz5" podUID="ffd109d6-81d2-474d-9a1e-5493102832d2" Nov 5 23:45:44.637744 containerd[1909]: 
time="2025-11-05T23:45:44.637698975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 23:45:44.946822 containerd[1909]: time="2025-11-05T23:45:44.946555481Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:44.949128 containerd[1909]: time="2025-11-05T23:45:44.949089316Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 23:45:44.949200 containerd[1909]: time="2025-11-05T23:45:44.949188975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 23:45:44.949360 kubelet[3539]: E1105 23:45:44.949322 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 23:45:44.949639 kubelet[3539]: E1105 23:45:44.949371 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 23:45:44.950120 kubelet[3539]: E1105 23:45:44.949623 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqrt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-fpftq_calico-system(427c3b5f-d7ee-4425-8185-ed4318a97b1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:44.950204 containerd[1909]: time="2025-11-05T23:45:44.949844368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 23:45:45.197679 containerd[1909]: time="2025-11-05T23:45:45.197420622Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:45.201242 containerd[1909]: time="2025-11-05T23:45:45.200865801Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 23:45:45.201549 containerd[1909]: time="2025-11-05T23:45:45.200891554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 23:45:45.201673 kubelet[3539]: E1105 23:45:45.201635 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 23:45:45.202032 kubelet[3539]: E1105 23:45:45.201727 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 23:45:45.202605 kubelet[3539]: E1105 23:45:45.202270 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:034e9f0b530a4b44bd7c28fc81170f33,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pgq6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6856bc974c-ffszr_calico-system(4aba500c-946b-4268-bb47-e30c6e97daba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:45.204015 containerd[1909]: time="2025-11-05T23:45:45.203990692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 23:45:45.480785 containerd[1909]: time="2025-11-05T23:45:45.480650106Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:45.483722 containerd[1909]: time="2025-11-05T23:45:45.483666906Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 23:45:45.484005 containerd[1909]: time="2025-11-05T23:45:45.483771589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 23:45:45.484196 kubelet[3539]: E1105 23:45:45.484094 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 23:45:45.484339 kubelet[3539]: E1105 23:45:45.484281 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 23:45:45.484588 kubelet[3539]: E1105 23:45:45.484512 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqrt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fpftq_calico-system(427c3b5f-d7ee-4425-8185-ed4318a97b1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:45.484928 containerd[1909]: time="2025-11-05T23:45:45.484906339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 23:45:45.485647 kubelet[3539]: E1105 23:45:45.485614 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-fpftq" podUID="427c3b5f-d7ee-4425-8185-ed4318a97b1f" Nov 5 23:45:45.769827 containerd[1909]: time="2025-11-05T23:45:45.769700105Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:45.772853 containerd[1909]: time="2025-11-05T23:45:45.772812124Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 23:45:45.772990 containerd[1909]: time="2025-11-05T23:45:45.772888102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 23:45:45.773171 kubelet[3539]: E1105 23:45:45.773100 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 23:45:45.773171 kubelet[3539]: E1105 23:45:45.773150 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 23:45:45.773584 kubelet[3539]: E1105 23:45:45.773359 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgq6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6856bc974c-ffszr_calico-system(4aba500c-946b-4268-bb47-e30c6e97daba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:45.774808 kubelet[3539]: E1105 23:45:45.774779 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6856bc974c-ffszr" podUID="4aba500c-946b-4268-bb47-e30c6e97daba" Nov 5 23:45:51.637731 containerd[1909]: time="2025-11-05T23:45:51.637681760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:45:51.898630 containerd[1909]: time="2025-11-05T23:45:51.898428568Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:51.901833 containerd[1909]: time="2025-11-05T23:45:51.901742837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:45:51.901833 containerd[1909]: time="2025-11-05T23:45:51.901823959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:45:51.902366 kubelet[3539]: E1105 23:45:51.902225 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:51.902366 kubelet[3539]: E1105 23:45:51.902289 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:51.903955 containerd[1909]: time="2025-11-05T23:45:51.903913473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:45:51.904430 kubelet[3539]: E1105 23:45:51.904352 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hzvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-66b68578ff-w27fl_calico-apiserver(9f589f34-97ee-4d82-b7d6-bdd22dcbc743): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:51.905622 kubelet[3539]: E1105 23:45:51.905546 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b68578ff-w27fl" podUID="9f589f34-97ee-4d82-b7d6-bdd22dcbc743" Nov 5 23:45:51.979000 containerd[1909]: time="2025-11-05T23:45:51.978776684Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c5635b94672f6f4fddb359c96d8cdceb0dbf0f883f7e730e1dad340e61fe124\" id:\"e25f475e36e52e28f6710950ba7cfe6fbef54329770dbf3b3f56cf5c2a03143f\" pid:5978 exited_at:{seconds:1762386351 nanos:978134714}" Nov 5 23:45:52.191282 containerd[1909]: time="2025-11-05T23:45:52.190984785Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:52.193856 containerd[1909]: time="2025-11-05T23:45:52.193705509Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:45:52.193856 containerd[1909]: time="2025-11-05T23:45:52.193826776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:45:52.194285 kubelet[3539]: E1105 23:45:52.194223 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:52.194379 kubelet[3539]: E1105 23:45:52.194295 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:52.195645 kubelet[3539]: E1105 23:45:52.194434 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44jk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c6d9d55f-625lj_calico-apiserver(486c3bf3-5c4f-4ba5-b692-994994d35c51): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:52.196110 kubelet[3539]: E1105 23:45:52.196065 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-625lj" podUID="486c3bf3-5c4f-4ba5-b692-994994d35c51" Nov 5 23:45:52.638121 containerd[1909]: time="2025-11-05T23:45:52.637851159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:45:52.930096 containerd[1909]: time="2025-11-05T23:45:52.929552928Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:52.933350 containerd[1909]: time="2025-11-05T23:45:52.933201270Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:45:52.933350 containerd[1909]: time="2025-11-05T23:45:52.933245655Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:45:52.934411 kubelet[3539]: E1105 23:45:52.934343 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:52.934879 kubelet[3539]: E1105 23:45:52.934424 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:52.934879 kubelet[3539]: E1105 23:45:52.934570 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7cjq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c6d9d55f-kpv2j_calico-apiserver(a68e46b0-801c-4548-82e3-d2eb8a4bb9ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:52.935879 kubelet[3539]: E1105 23:45:52.935796 3539 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-kpv2j" podUID="a68e46b0-801c-4548-82e3-d2eb8a4bb9ed" Nov 5 23:45:53.638436 containerd[1909]: time="2025-11-05T23:45:53.638327631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 23:45:53.916134 containerd[1909]: time="2025-11-05T23:45:53.915933479Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:53.919153 containerd[1909]: time="2025-11-05T23:45:53.919089615Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 23:45:53.919664 containerd[1909]: time="2025-11-05T23:45:53.919119792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 23:45:53.919712 kubelet[3539]: E1105 23:45:53.919373 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 23:45:53.919712 kubelet[3539]: E1105 23:45:53.919429 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 23:45:53.919843 kubelet[3539]: E1105 23:45:53.919567 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hngt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7769f64cbc-fbmnx_calico-system(1177c853-8b74-4ffe-9eed-6c7edaf39ab6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:53.921066 kubelet[3539]: E1105 23:45:53.921021 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7769f64cbc-fbmnx" podUID="1177c853-8b74-4ffe-9eed-6c7edaf39ab6" Nov 5 23:45:56.638382 kubelet[3539]: E1105 23:45:56.638332 3539 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g8mz5" podUID="ffd109d6-81d2-474d-9a1e-5493102832d2" Nov 5 23:45:56.639029 kubelet[3539]: E1105 23:45:56.638989 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6856bc974c-ffszr" podUID="4aba500c-946b-4268-bb47-e30c6e97daba" Nov 5 23:45:57.307362 systemd[1]: Started sshd@7-10.200.20.34:22-10.200.16.10:39262.service - OpenSSH per-connection server daemon (10.200.16.10:39262). Nov 5 23:45:57.737278 sshd[5995]: Accepted publickey for core from 10.200.16.10 port 39262 ssh2: RSA SHA256:DGuYmx06TGfTsp0LJydt5PQlzhSbWYw582/T7TjmpIk Nov 5 23:45:57.739699 sshd-session[5995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:45:57.744641 systemd-logind[1867]: New session 10 of user core. Nov 5 23:45:57.748789 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 23:45:58.138078 sshd[5998]: Connection closed by 10.200.16.10 port 39262 Nov 5 23:45:58.138532 sshd-session[5995]: pam_unix(sshd:session): session closed for user core Nov 5 23:45:58.144213 systemd-logind[1867]: Session 10 logged out. Waiting for processes to exit. Nov 5 23:45:58.144778 systemd[1]: sshd@7-10.200.20.34:22-10.200.16.10:39262.service: Deactivated successfully. Nov 5 23:45:58.149514 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 23:45:58.152711 systemd-logind[1867]: Removed session 10. 
Nov 5 23:45:59.638193 kubelet[3539]: E1105 23:45:59.638141 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fpftq" podUID="427c3b5f-d7ee-4425-8185-ed4318a97b1f" Nov 5 23:46:02.636494 kubelet[3539]: E1105 23:46:02.636401 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b68578ff-w27fl" podUID="9f589f34-97ee-4d82-b7d6-bdd22dcbc743" Nov 5 23:46:03.215224 systemd[1]: Started sshd@8-10.200.20.34:22-10.200.16.10:59876.service - OpenSSH per-connection server daemon (10.200.16.10:59876). Nov 5 23:46:03.648745 sshd[6012]: Accepted publickey for core from 10.200.16.10 port 59876 ssh2: RSA SHA256:DGuYmx06TGfTsp0LJydt5PQlzhSbWYw582/T7TjmpIk Nov 5 23:46:03.650132 sshd-session[6012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:46:03.657826 systemd-logind[1867]: New session 11 of user core. Nov 5 23:46:03.660776 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 23:46:04.001406 sshd[6015]: Connection closed by 10.200.16.10 port 59876 Nov 5 23:46:04.001530 sshd-session[6012]: pam_unix(sshd:session): session closed for user core Nov 5 23:46:04.006535 systemd[1]: sshd@8-10.200.20.34:22-10.200.16.10:59876.service: Deactivated successfully. Nov 5 23:46:04.009429 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 23:46:04.012778 systemd-logind[1867]: Session 11 logged out. Waiting for processes to exit. Nov 5 23:46:04.015142 systemd-logind[1867]: Removed session 11. 
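If the manifest check also returns 404, the follow-up question is which tags the repository does publish. The standard tags/list endpoint of the same API answers that; this is again only a sketch, reusing the anonymous-token assumption above, and ghcr.io may paginate the response for repositories with many tags.

```python
# Sketch: list the tags the repository actually publishes, reusing the
# anonymous-token assumption from the previous example. Repositories with
# many tags may return a paginated (Link header) response.
import json
import urllib.request

REPO = "flatcar/calico/kube-controllers"

token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{REPO}:pull"
with urllib.request.urlopen(token_url) as resp:
    token = json.load(resp)["token"]

req = urllib.request.Request(
    f"https://ghcr.io/v2/{REPO}/tags/list",
    headers={"Authorization": f"Bearer {token}"},
)
with urllib.request.urlopen(req) as resp:
    tags = json.load(resp).get("tags") or []

# Show the published tags nearest to the one the kubelet keeps requesting.
print(sorted(t for t in tags if t.startswith("v3.30")))
```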
Nov 5 23:46:06.636375 kubelet[3539]: E1105 23:46:06.636075 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-kpv2j" podUID="a68e46b0-801c-4548-82e3-d2eb8a4bb9ed" Nov 5 23:46:07.636459 kubelet[3539]: E1105 23:46:07.636151 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-625lj" podUID="486c3bf3-5c4f-4ba5-b692-994994d35c51" Nov 5 23:46:08.636560 kubelet[3539]: E1105 23:46:08.636510 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7769f64cbc-fbmnx" podUID="1177c853-8b74-4ffe-9eed-6c7edaf39ab6" Nov 5 23:46:08.637406 kubelet[3539]: E1105 23:46:08.637378 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g8mz5" podUID="ffd109d6-81d2-474d-9a1e-5493102832d2" Nov 5 23:46:09.079971 systemd[1]: Started sshd@9-10.200.20.34:22-10.200.16.10:59888.service - OpenSSH per-connection server daemon (10.200.16.10:59888). Nov 5 23:46:09.508447 sshd[6027]: Accepted publickey for core from 10.200.16.10 port 59888 ssh2: RSA SHA256:DGuYmx06TGfTsp0LJydt5PQlzhSbWYw582/T7TjmpIk Nov 5 23:46:09.509769 sshd-session[6027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:46:09.518381 systemd-logind[1867]: New session 12 of user core. Nov 5 23:46:09.524786 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 5 23:46:09.918429 sshd[6030]: Connection closed by 10.200.16.10 port 59888 Nov 5 23:46:09.918250 sshd-session[6027]: pam_unix(sshd:session): session closed for user core Nov 5 23:46:09.923969 systemd[1]: sshd@9-10.200.20.34:22-10.200.16.10:59888.service: Deactivated successfully. Nov 5 23:46:09.927380 systemd[1]: session-12.scope: Deactivated successfully. 
Nov 5 23:46:09.931226 systemd-logind[1867]: Session 12 logged out. Waiting for processes to exit. Nov 5 23:46:09.932302 systemd-logind[1867]: Removed session 12. Nov 5 23:46:10.006909 systemd[1]: Started sshd@10-10.200.20.34:22-10.200.16.10:48078.service - OpenSSH per-connection server daemon (10.200.16.10:48078). Nov 5 23:46:10.479688 sshd[6043]: Accepted publickey for core from 10.200.16.10 port 48078 ssh2: RSA SHA256:DGuYmx06TGfTsp0LJydt5PQlzhSbWYw582/T7TjmpIk Nov 5 23:46:10.480950 sshd-session[6043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:46:10.488116 systemd-logind[1867]: New session 13 of user core. Nov 5 23:46:10.491932 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 5 23:46:10.636861 kubelet[3539]: E1105 23:46:10.636788 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6856bc974c-ffszr" podUID="4aba500c-946b-4268-bb47-e30c6e97daba" Nov 5 23:46:10.903112 sshd[6047]: Connection closed by 10.200.16.10 port 48078 Nov 5 23:46:10.904815 sshd-session[6043]: pam_unix(sshd:session): session closed for user core Nov 5 23:46:10.909236 systemd[1]: sshd@10-10.200.20.34:22-10.200.16.10:48078.service: Deactivated successfully. Nov 5 23:46:10.912358 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 23:46:10.916873 systemd-logind[1867]: Session 13 logged out. Waiting for processes to exit. Nov 5 23:46:10.921085 systemd-logind[1867]: Removed session 13. Nov 5 23:46:10.982646 systemd[1]: Started sshd@11-10.200.20.34:22-10.200.16.10:48080.service - OpenSSH per-connection server daemon (10.200.16.10:48080). Nov 5 23:46:11.415410 sshd[6057]: Accepted publickey for core from 10.200.16.10 port 48080 ssh2: RSA SHA256:DGuYmx06TGfTsp0LJydt5PQlzhSbWYw582/T7TjmpIk Nov 5 23:46:11.416720 sshd-session[6057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:46:11.424161 systemd-logind[1867]: New session 14 of user core. Nov 5 23:46:11.432438 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 5 23:46:11.798439 sshd[6060]: Connection closed by 10.200.16.10 port 48080 Nov 5 23:46:11.799310 sshd-session[6057]: pam_unix(sshd:session): session closed for user core Nov 5 23:46:11.804873 systemd-logind[1867]: Session 14 logged out. Waiting for processes to exit. Nov 5 23:46:11.806297 systemd[1]: sshd@11-10.200.20.34:22-10.200.16.10:48080.service: Deactivated successfully. Nov 5 23:46:11.811993 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 23:46:11.816817 systemd-logind[1867]: Removed session 14. 
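Between actual pull attempts the kubelet only logs "Back-off pulling image ..." whenever the periodic pod sync runs, which is why those entries recur far more often than the containerd PullImage lines. The delay between real pull attempts grows roughly exponentially per image; the figures in the sketch below (10 s initial delay, doubling, capped at 5 minutes) are commonly cited kubelet defaults and are an assumption here, since the log itself does not state them.

```python
# Sketch of an exponential image-pull back-off schedule. The 10 s initial
# delay, doubling factor, and 300 s cap are assumed kubelet defaults, not
# values stated anywhere in this log.
def backoff_delays(initial=10.0, factor=2.0, cap=300.0, retries=8):
    delay = initial
    for _ in range(retries):
        yield min(delay, cap)
        delay *= factor

print([int(d) for d in backoff_delays()])  # [10, 20, 40, 80, 160, 300, 300, 300]
```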
Nov 5 23:46:13.642600 kubelet[3539]: E1105 23:46:13.642112 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fpftq" podUID="427c3b5f-d7ee-4425-8185-ed4318a97b1f" Nov 5 23:46:16.896817 systemd[1]: Started sshd@12-10.200.20.34:22-10.200.16.10:48082.service - OpenSSH per-connection server daemon (10.200.16.10:48082). Nov 5 23:46:17.394441 sshd[6079]: Accepted publickey for core from 10.200.16.10 port 48082 ssh2: RSA SHA256:DGuYmx06TGfTsp0LJydt5PQlzhSbWYw582/T7TjmpIk Nov 5 23:46:17.396238 sshd-session[6079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:46:17.400646 systemd-logind[1867]: New session 15 of user core. Nov 5 23:46:17.405776 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 23:46:17.638620 kubelet[3539]: E1105 23:46:17.638460 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b68578ff-w27fl" podUID="9f589f34-97ee-4d82-b7d6-bdd22dcbc743" Nov 5 23:46:17.802591 sshd[6082]: Connection closed by 10.200.16.10 port 48082 Nov 5 23:46:17.803441 sshd-session[6079]: pam_unix(sshd:session): session closed for user core Nov 5 23:46:17.808244 systemd[1]: sshd@12-10.200.20.34:22-10.200.16.10:48082.service: Deactivated successfully. Nov 5 23:46:17.812569 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 23:46:17.816651 systemd-logind[1867]: Session 15 logged out. Waiting for processes to exit. Nov 5 23:46:17.818720 systemd-logind[1867]: Removed session 15. 
Nov 5 23:46:19.636831 kubelet[3539]: E1105 23:46:19.636773 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7769f64cbc-fbmnx" podUID="1177c853-8b74-4ffe-9eed-6c7edaf39ab6" Nov 5 23:46:20.638241 kubelet[3539]: E1105 23:46:20.637918 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g8mz5" podUID="ffd109d6-81d2-474d-9a1e-5493102832d2" Nov 5 23:46:20.640792 kubelet[3539]: E1105 23:46:20.640738 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-kpv2j" podUID="a68e46b0-801c-4548-82e3-d2eb8a4bb9ed" Nov 5 23:46:21.637947 kubelet[3539]: E1105 23:46:21.637809 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-625lj" podUID="486c3bf3-5c4f-4ba5-b692-994994d35c51" Nov 5 23:46:21.953947 containerd[1909]: time="2025-11-05T23:46:21.953282517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c5635b94672f6f4fddb359c96d8cdceb0dbf0f883f7e730e1dad340e61fe124\" id:\"b74d955b4c499ad1ca4a4cd0e766f01d160c296ebbbee71cceb864c9a8dd8806\" pid:6111 exited_at:{seconds:1762386381 nanos:951713075}" Nov 5 23:46:22.880863 systemd[1]: Started sshd@13-10.200.20.34:22-10.200.16.10:55548.service - OpenSSH per-connection server daemon (10.200.16.10:55548). Nov 5 23:46:23.351556 sshd[6123]: Accepted publickey for core from 10.200.16.10 port 55548 ssh2: RSA SHA256:DGuYmx06TGfTsp0LJydt5PQlzhSbWYw582/T7TjmpIk Nov 5 23:46:23.352671 sshd-session[6123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:46:23.357246 systemd-logind[1867]: New session 16 of user core. Nov 5 23:46:23.364812 systemd[1]: Started session-16.scope - Session 16 of User core. 
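Between the SSH housekeeping, the same Calico pods fail again every few seconds to minutes. That cadence is kubelet's image-pull back-off replaying the cached failure, not a new kind of error. A rough sketch of the schedule, assuming the usual kubelet defaults of a 10s initial delay doubling up to a 300s cap (assumed, not read from this node's configuration):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed kubelet defaults for the image pull back-off.
        delay, ceiling := 10*time.Second, 300*time.Second
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("retry %d scheduled after %v\n", attempt, delay)
            delay *= 2
            if delay > ceiling {
                delay = ceiling
            }
        }
    }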
Nov 5 23:46:23.638106 kubelet[3539]: E1105 23:46:23.637972 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6856bc974c-ffszr" podUID="4aba500c-946b-4268-bb47-e30c6e97daba" Nov 5 23:46:23.735309 sshd[6126]: Connection closed by 10.200.16.10 port 55548 Nov 5 23:46:23.736602 sshd-session[6123]: pam_unix(sshd:session): session closed for user core Nov 5 23:46:23.742504 systemd[1]: sshd@13-10.200.20.34:22-10.200.16.10:55548.service: Deactivated successfully. Nov 5 23:46:23.745223 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 23:46:23.746375 systemd-logind[1867]: Session 16 logged out. Waiting for processes to exit. Nov 5 23:46:23.748462 systemd-logind[1867]: Removed session 16. Nov 5 23:46:25.638325 containerd[1909]: time="2025-11-05T23:46:25.638271920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 23:46:25.935796 containerd[1909]: time="2025-11-05T23:46:25.935651756Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:46:25.938261 containerd[1909]: time="2025-11-05T23:46:25.938197455Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 23:46:25.938605 containerd[1909]: time="2025-11-05T23:46:25.938234600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 23:46:25.938651 kubelet[3539]: E1105 23:46:25.938466 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 23:46:25.938651 kubelet[3539]: E1105 23:46:25.938518 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 23:46:25.939128 kubelet[3539]: E1105 23:46:25.938665 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqrt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fpftq_calico-system(427c3b5f-d7ee-4425-8185-ed4318a97b1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 23:46:25.942674 containerd[1909]: time="2025-11-05T23:46:25.942549122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 23:46:26.227223 containerd[1909]: time="2025-11-05T23:46:26.227081908Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:46:26.230355 containerd[1909]: time="2025-11-05T23:46:26.230284216Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 23:46:26.230773 containerd[1909]: time="2025-11-05T23:46:26.230321801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 23:46:26.230838 kubelet[3539]: E1105 23:46:26.230772 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 23:46:26.230838 kubelet[3539]: E1105 23:46:26.230835 3539 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 23:46:26.230985 kubelet[3539]: E1105 23:46:26.230952 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqrt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fpftq_calico-system(427c3b5f-d7ee-4425-8185-ed4318a97b1f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 23:46:26.232316 kubelet[3539]: E1105 23:46:26.232278 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-fpftq" podUID="427c3b5f-d7ee-4425-8185-ed4318a97b1f" Nov 5 23:46:28.818511 systemd[1]: Started sshd@14-10.200.20.34:22-10.200.16.10:55564.service - OpenSSH per-connection server daemon (10.200.16.10:55564). Nov 5 23:46:29.247494 sshd[6139]: Accepted publickey for core from 10.200.16.10 port 55564 ssh2: RSA SHA256:DGuYmx06TGfTsp0LJydt5PQlzhSbWYw582/T7TjmpIk Nov 5 23:46:29.248394 sshd-session[6139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:46:29.252769 systemd-logind[1867]: New session 17 of user core. Nov 5 23:46:29.259898 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 5 23:46:29.638743 sshd[6142]: Connection closed by 10.200.16.10 port 55564 Nov 5 23:46:29.639175 kubelet[3539]: E1105 23:46:29.638157 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b68578ff-w27fl" podUID="9f589f34-97ee-4d82-b7d6-bdd22dcbc743" Nov 5 23:46:29.639921 sshd-session[6139]: pam_unix(sshd:session): session closed for user core Nov 5 23:46:29.644129 systemd-logind[1867]: Session 17 logged out. Waiting for processes to exit. Nov 5 23:46:29.645533 systemd[1]: sshd@14-10.200.20.34:22-10.200.16.10:55564.service: Deactivated successfully. Nov 5 23:46:29.648652 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 23:46:29.654446 systemd-logind[1867]: Removed session 17. Nov 5 23:46:29.719318 systemd[1]: Started sshd@15-10.200.20.34:22-10.200.16.10:55572.service - OpenSSH per-connection server daemon (10.200.16.10:55572). Nov 5 23:46:30.146646 sshd[6153]: Accepted publickey for core from 10.200.16.10 port 55572 ssh2: RSA SHA256:DGuYmx06TGfTsp0LJydt5PQlzhSbWYw582/T7TjmpIk Nov 5 23:46:30.148213 sshd-session[6153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:46:30.155531 systemd-logind[1867]: New session 18 of user core. Nov 5 23:46:30.161841 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 5 23:46:30.624748 sshd[6156]: Connection closed by 10.200.16.10 port 55572 Nov 5 23:46:30.625387 sshd-session[6153]: pam_unix(sshd:session): session closed for user core Nov 5 23:46:30.630067 systemd[1]: sshd@15-10.200.20.34:22-10.200.16.10:55572.service: Deactivated successfully. Nov 5 23:46:30.632441 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 23:46:30.633424 systemd-logind[1867]: Session 18 logged out. Waiting for processes to exit. Nov 5 23:46:30.636976 systemd-logind[1867]: Removed session 18. Nov 5 23:46:30.708851 systemd[1]: Started sshd@16-10.200.20.34:22-10.200.16.10:42752.service - OpenSSH per-connection server daemon (10.200.16.10:42752). Nov 5 23:46:31.168662 sshd[6166]: Accepted publickey for core from 10.200.16.10 port 42752 ssh2: RSA SHA256:DGuYmx06TGfTsp0LJydt5PQlzhSbWYw582/T7TjmpIk Nov 5 23:46:31.170115 sshd-session[6166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:46:31.177010 systemd-logind[1867]: New session 19 of user core. Nov 5 23:46:31.184101 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 5 23:46:31.639201 kubelet[3539]: E1105 23:46:31.638821 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7769f64cbc-fbmnx" podUID="1177c853-8b74-4ffe-9eed-6c7edaf39ab6" Nov 5 23:46:31.919940 sshd[6169]: Connection closed by 10.200.16.10 port 42752 Nov 5 23:46:31.920570 sshd-session[6166]: pam_unix(sshd:session): session closed for user core Nov 5 23:46:31.925887 systemd[1]: sshd@16-10.200.20.34:22-10.200.16.10:42752.service: Deactivated successfully. Nov 5 23:46:31.929853 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 23:46:31.931537 systemd-logind[1867]: Session 19 logged out. Waiting for processes to exit. Nov 5 23:46:31.932936 systemd-logind[1867]: Removed session 19. Nov 5 23:46:32.021368 systemd[1]: Started sshd@17-10.200.20.34:22-10.200.16.10:42766.service - OpenSSH per-connection server daemon (10.200.16.10:42766). Nov 5 23:46:32.489885 sshd[6195]: Accepted publickey for core from 10.200.16.10 port 42766 ssh2: RSA SHA256:DGuYmx06TGfTsp0LJydt5PQlzhSbWYw582/T7TjmpIk Nov 5 23:46:32.491098 sshd-session[6195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:46:32.500257 systemd-logind[1867]: New session 20 of user core. Nov 5 23:46:32.503877 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 5 23:46:32.984605 sshd[6198]: Connection closed by 10.200.16.10 port 42766 Nov 5 23:46:32.985252 sshd-session[6195]: pam_unix(sshd:session): session closed for user core Nov 5 23:46:32.990189 systemd-logind[1867]: Session 20 logged out. Waiting for processes to exit. Nov 5 23:46:32.991267 systemd[1]: sshd@17-10.200.20.34:22-10.200.16.10:42766.service: Deactivated successfully. Nov 5 23:46:32.993431 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 23:46:32.997272 systemd-logind[1867]: Removed session 20. Nov 5 23:46:33.065174 systemd[1]: Started sshd@18-10.200.20.34:22-10.200.16.10:42776.service - OpenSSH per-connection server daemon (10.200.16.10:42776). Nov 5 23:46:33.530919 sshd[6207]: Accepted publickey for core from 10.200.16.10 port 42776 ssh2: RSA SHA256:DGuYmx06TGfTsp0LJydt5PQlzhSbWYw582/T7TjmpIk Nov 5 23:46:33.532158 sshd-session[6207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:46:33.538630 systemd-logind[1867]: New session 21 of user core. Nov 5 23:46:33.543838 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 5 23:46:33.638808 containerd[1909]: time="2025-11-05T23:46:33.638678392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:46:33.908400 sshd[6211]: Connection closed by 10.200.16.10 port 42776 Nov 5 23:46:33.909106 sshd-session[6207]: pam_unix(sshd:session): session closed for user core Nov 5 23:46:33.911876 containerd[1909]: time="2025-11-05T23:46:33.911694912Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:46:33.913039 systemd-logind[1867]: Session 21 logged out. Waiting for processes to exit. 
Nov 5 23:46:33.913170 systemd[1]: sshd@18-10.200.20.34:22-10.200.16.10:42776.service: Deactivated successfully. Nov 5 23:46:33.915111 containerd[1909]: time="2025-11-05T23:46:33.914982999Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:46:33.915111 containerd[1909]: time="2025-11-05T23:46:33.915030880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:46:33.915309 kubelet[3539]: E1105 23:46:33.915264 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:46:33.916163 kubelet[3539]: E1105 23:46:33.915319 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:46:33.916163 kubelet[3539]: E1105 23:46:33.915536 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44jk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c6d9d55f-625lj_calico-apiserver(486c3bf3-5c4f-4ba5-b692-994994d35c51): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:46:33.916747 containerd[1909]: time="2025-11-05T23:46:33.916481963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 23:46:33.916949 kubelet[3539]: E1105 23:46:33.916887 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-625lj" podUID="486c3bf3-5c4f-4ba5-b692-994994d35c51" Nov 5 23:46:33.920008 systemd[1]: session-21.scope: Deactivated successfully. Nov 5 23:46:33.926269 systemd-logind[1867]: Removed session 21. 
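The failing PullImage for ghcr.io/flatcar/calico/apiserver:v3.30.4 above can be replayed straight against containerd to confirm the NotFound originates at the registry rather than in kubelet. A minimal sketch using the containerd 1.x Go client; the socket path and the "k8s.io" namespace are the conventional ones for a CRI-managed node and are assumed here.

    package main

    import (
        "context"
        "fmt"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/errdefs"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        ref := "ghcr.io/flatcar/calico/apiserver:v3.30.4"

        _, err = client.Pull(ctx, ref, containerd.WithPullUnpack)
        switch {
        case errdefs.IsNotFound(err):
            // Matches the journal: the reference cannot be resolved at ghcr.io.
            fmt.Println("registry reports the tag as missing:", err)
        case err != nil:
            fmt.Println("pull failed for another reason:", err)
        default:
            fmt.Println("pulled", ref)
        }
    }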
Nov 5 23:46:34.181039 containerd[1909]: time="2025-11-05T23:46:34.180593959Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:46:34.184815 containerd[1909]: time="2025-11-05T23:46:34.184724817Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 23:46:34.184815 containerd[1909]: time="2025-11-05T23:46:34.184783242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 23:46:34.185159 kubelet[3539]: E1105 23:46:34.185052 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 23:46:34.185159 kubelet[3539]: E1105 23:46:34.185109 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 23:46:34.185286 kubelet[3539]: E1105 23:46:34.185235 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dg9nk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-g8mz5_calico-system(ffd109d6-81d2-474d-9a1e-5493102832d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 23:46:34.186791 kubelet[3539]: E1105 23:46:34.186724 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g8mz5" podUID="ffd109d6-81d2-474d-9a1e-5493102832d2" Nov 5 23:46:34.636834 containerd[1909]: time="2025-11-05T23:46:34.636595142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:46:34.940360 containerd[1909]: time="2025-11-05T23:46:34.940081489Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:46:34.943766 containerd[1909]: time="2025-11-05T23:46:34.943677817Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:46:34.943766 containerd[1909]: time="2025-11-05T23:46:34.943731122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:46:34.944009 kubelet[3539]: E1105 23:46:34.943964 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:46:34.944240 kubelet[3539]: E1105 23:46:34.944020 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:46:34.944240 kubelet[3539]: E1105 23:46:34.944143 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7cjq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c6d9d55f-kpv2j_calico-apiserver(a68e46b0-801c-4548-82e3-d2eb8a4bb9ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:46:34.945369 kubelet[3539]: E1105 23:46:34.945334 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-kpv2j" podUID="a68e46b0-801c-4548-82e3-d2eb8a4bb9ed" Nov 5 23:46:38.637318 containerd[1909]: time="2025-11-05T23:46:38.637273548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 23:46:38.895068 containerd[1909]: time="2025-11-05T23:46:38.894925261Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:46:38.898277 containerd[1909]: time="2025-11-05T23:46:38.898210420Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 23:46:38.898429 containerd[1909]: time="2025-11-05T23:46:38.898319175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 23:46:38.899955 kubelet[3539]: E1105 23:46:38.899739 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 23:46:38.899955 kubelet[3539]: E1105 23:46:38.899796 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 23:46:38.899955 kubelet[3539]: E1105 23:46:38.899916 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:034e9f0b530a4b44bd7c28fc81170f33,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pgq6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6856bc974c-ffszr_calico-system(4aba500c-946b-4268-bb47-e30c6e97daba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 23:46:38.902734 containerd[1909]: time="2025-11-05T23:46:38.902697050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 23:46:38.992547 systemd[1]: Started sshd@19-10.200.20.34:22-10.200.16.10:42778.service - OpenSSH per-connection server daemon (10.200.16.10:42778). 
Nov 5 23:46:39.140540 containerd[1909]: time="2025-11-05T23:46:39.140331755Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:46:39.143409 containerd[1909]: time="2025-11-05T23:46:39.143343123Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 23:46:39.143642 containerd[1909]: time="2025-11-05T23:46:39.143520079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 23:46:39.143940 kubelet[3539]: E1105 23:46:39.143843 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 23:46:39.143940 kubelet[3539]: E1105 23:46:39.143923 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 23:46:39.144805 kubelet[3539]: E1105 23:46:39.144251 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgq6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:ni
l,} start failed in pod whisker-6856bc974c-ffszr_calico-system(4aba500c-946b-4268-bb47-e30c6e97daba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 23:46:39.145999 kubelet[3539]: E1105 23:46:39.145631 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6856bc974c-ffszr" podUID="4aba500c-946b-4268-bb47-e30c6e97daba" Nov 5 23:46:39.462404 sshd[6242]: Accepted publickey for core from 10.200.16.10 port 42778 ssh2: RSA SHA256:DGuYmx06TGfTsp0LJydt5PQlzhSbWYw582/T7TjmpIk Nov 5 23:46:39.463803 sshd-session[6242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:46:39.468455 systemd-logind[1867]: New session 22 of user core. Nov 5 23:46:39.476778 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 5 23:46:39.838641 sshd[6245]: Connection closed by 10.200.16.10 port 42778 Nov 5 23:46:39.839279 sshd-session[6242]: pam_unix(sshd:session): session closed for user core Nov 5 23:46:39.846539 systemd[1]: sshd@19-10.200.20.34:22-10.200.16.10:42778.service: Deactivated successfully. Nov 5 23:46:39.850130 systemd[1]: session-22.scope: Deactivated successfully. Nov 5 23:46:39.852733 systemd-logind[1867]: Session 22 logged out. Waiting for processes to exit. Nov 5 23:46:39.854891 systemd-logind[1867]: Removed session 22. 
Nov 5 23:46:41.638092 containerd[1909]: time="2025-11-05T23:46:41.637797439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:46:41.641238 kubelet[3539]: E1105 23:46:41.641185 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fpftq" podUID="427c3b5f-d7ee-4425-8185-ed4318a97b1f" Nov 5 23:46:41.882763 containerd[1909]: time="2025-11-05T23:46:41.882701904Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:46:41.887011 containerd[1909]: time="2025-11-05T23:46:41.886869846Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:46:41.887011 containerd[1909]: time="2025-11-05T23:46:41.886982505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:46:41.887589 kubelet[3539]: E1105 23:46:41.887412 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:46:41.887589 kubelet[3539]: E1105 23:46:41.887482 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:46:41.887780 kubelet[3539]: E1105 23:46:41.887747 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hzvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66b68578ff-w27fl_calico-apiserver(9f589f34-97ee-4d82-b7d6-bdd22dcbc743): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:46:41.889479 kubelet[3539]: E1105 23:46:41.888992 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b68578ff-w27fl" podUID="9f589f34-97ee-4d82-b7d6-bdd22dcbc743" Nov 5 23:46:44.637611 containerd[1909]: time="2025-11-05T23:46:44.637555574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 23:46:44.910609 containerd[1909]: time="2025-11-05T23:46:44.910176914Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:46:44.916360 containerd[1909]: time="2025-11-05T23:46:44.916219650Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 23:46:44.916360 containerd[1909]: 
time="2025-11-05T23:46:44.916339981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 23:46:44.916509 kubelet[3539]: E1105 23:46:44.916460 3539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 23:46:44.916762 kubelet[3539]: E1105 23:46:44.916524 3539 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 23:46:44.917674 kubelet[3539]: E1105 23:46:44.917496 3539 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hngt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7769f64cbc-fbmnx_calico-system(1177c853-8b74-4ffe-9eed-6c7edaf39ab6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 23:46:44.919254 kubelet[3539]: E1105 23:46:44.918660 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7769f64cbc-fbmnx" podUID="1177c853-8b74-4ffe-9eed-6c7edaf39ab6" Nov 5 23:46:44.921050 systemd[1]: Started sshd@20-10.200.20.34:22-10.200.16.10:44926.service - OpenSSH per-connection server daemon (10.200.16.10:44926). Nov 5 23:46:45.388388 sshd[6257]: Accepted publickey for core from 10.200.16.10 port 44926 ssh2: RSA SHA256:DGuYmx06TGfTsp0LJydt5PQlzhSbWYw582/T7TjmpIk Nov 5 23:46:45.389867 sshd-session[6257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:46:45.396803 systemd-logind[1867]: New session 23 of user core. Nov 5 23:46:45.400743 systemd[1]: Started session-23.scope - Session 23 of User core. 
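The kubelet entries above dump the failing container specs as flattened Go struct literals, which are hard to read in journal form. Below is a minimal sketch that rebuilds the calico-kube-controllers spec from that dump as a typed corev1.Container, so the probes and security context are visible at a glance. Field values are copied from the logged dump; the enclosing program, the pointer helpers, and the omission of volume mounts are illustrative only and this is not the operator's actual manifest.

```go
// Sketch: the calico-kube-controllers container from the kubelet dump above,
// rebuilt with the k8s.io/api/core/v1 types. Values are taken from the log;
// volume mounts and a few defaulted fields are omitted for brevity.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func boolPtr(b bool) *bool    { return &b }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	c := corev1.Container{
		Name:  "calico-kube-controllers",
		Image: "ghcr.io/flatcar/calico/kube-controllers:v3.30.4",
		Env: []corev1.EnvVar{
			{Name: "KUBE_CONTROLLERS_CONFIG_NAME", Value: "default"},
			{Name: "DATASTORE_TYPE", Value: "kubernetes"},
			{Name: "ENABLED_CONTROLLERS", Value: "node,loadbalancer"},
			{Name: "KUBERNETES_SERVICE_HOST", Value: "10.96.0.1"},
			{Name: "KUBERNETES_SERVICE_PORT", Value: "443"},
		},
		LivenessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				Exec: &corev1.ExecAction{Command: []string{"/usr/bin/check-status", "-l"}},
			},
			InitialDelaySeconds: 10, TimeoutSeconds: 10, PeriodSeconds: 60, FailureThreshold: 6,
		},
		ReadinessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				Exec: &corev1.ExecAction{Command: []string{"/usr/bin/check-status", "-r"}},
			},
			TimeoutSeconds: 10, PeriodSeconds: 30, FailureThreshold: 3,
		},
		SecurityContext: &corev1.SecurityContext{
			RunAsUser:                int64Ptr(999),
			RunAsGroup:               int64Ptr(0),
			RunAsNonRoot:             boolPtr(true),
			Privileged:               boolPtr(false),
			AllowPrivilegeEscalation: boolPtr(false),
			Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
			SeccompProfile:           &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeRuntimeDefault},
		},
		ImagePullPolicy: corev1.PullIfNotPresent,
	}
	fmt.Printf("%+v\n", c)
}
```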
Nov 5 23:46:45.641701 kubelet[3539]: E1105 23:46:45.639074 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g8mz5" podUID="ffd109d6-81d2-474d-9a1e-5493102832d2" Nov 5 23:46:45.641701 kubelet[3539]: E1105 23:46:45.639210 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-625lj" podUID="486c3bf3-5c4f-4ba5-b692-994994d35c51" Nov 5 23:46:45.811791 sshd[6260]: Connection closed by 10.200.16.10 port 44926 Nov 5 23:46:45.813680 sshd-session[6257]: pam_unix(sshd:session): session closed for user core Nov 5 23:46:45.818963 systemd-logind[1867]: Session 23 logged out. Waiting for processes to exit. Nov 5 23:46:45.819069 systemd[1]: sshd@20-10.200.20.34:22-10.200.16.10:44926.service: Deactivated successfully. Nov 5 23:46:45.823434 systemd[1]: session-23.scope: Deactivated successfully. Nov 5 23:46:45.827292 systemd-logind[1867]: Removed session 23. Nov 5 23:46:48.636778 kubelet[3539]: E1105 23:46:48.636706 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-kpv2j" podUID="a68e46b0-801c-4548-82e3-d2eb8a4bb9ed" Nov 5 23:46:50.894930 systemd[1]: Started sshd@21-10.200.20.34:22-10.200.16.10:56384.service - OpenSSH per-connection server daemon (10.200.16.10:56384). Nov 5 23:46:51.352632 sshd[6274]: Accepted publickey for core from 10.200.16.10 port 56384 ssh2: RSA SHA256:DGuYmx06TGfTsp0LJydt5PQlzhSbWYw582/T7TjmpIk Nov 5 23:46:51.353932 sshd-session[6274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:46:51.360803 systemd-logind[1867]: New session 24 of user core. Nov 5 23:46:51.367758 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 5 23:46:51.741821 sshd[6280]: Connection closed by 10.200.16.10 port 56384 Nov 5 23:46:51.742458 sshd-session[6274]: pam_unix(sshd:session): session closed for user core Nov 5 23:46:51.746346 systemd-logind[1867]: Session 24 logged out. Waiting for processes to exit. Nov 5 23:46:51.747175 systemd[1]: sshd@21-10.200.20.34:22-10.200.16.10:56384.service: Deactivated successfully. Nov 5 23:46:51.750964 systemd[1]: session-24.scope: Deactivated successfully. Nov 5 23:46:51.753526 systemd-logind[1867]: Removed session 24. 
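At this point the failures have moved from ErrImagePull to ImagePullBackOff, i.e. the kubelet is now in its backoff cycle for these images. A small client-go sketch like the one below, assuming a kubeconfig at the usual /etc/kubernetes/admin.conf path and the two Calico namespaces seen in the log, would surface the same waiting reasons without grepping the journal.

```go
// Sketch: list pods in the Calico namespaces and print container waiting
// reasons (ErrImagePull / ImagePullBackOff), mirroring the kubelet errors
// above. Kubeconfig path and namespace list are assumptions for illustration.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for _, ns := range []string{"calico-system", "calico-apiserver"} {
		pods, err := cs.CoreV1().Pods(ns).List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, pod := range pods.Items {
			for _, st := range pod.Status.ContainerStatuses {
				// State.Waiting is nil for running or terminated containers.
				if st.State.Waiting != nil {
					fmt.Printf("%s/%s %s: %s (%s)\n",
						ns, pod.Name, st.Name, st.State.Waiting.Reason, st.State.Waiting.Message)
				}
			}
		}
	}
}
```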
Nov 5 23:46:51.954513 containerd[1909]: time="2025-11-05T23:46:51.954427224Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c5635b94672f6f4fddb359c96d8cdceb0dbf0f883f7e730e1dad340e61fe124\" id:\"055526e50e450cba4af4177c9eed0f73ef093b4a39be9caf83307073f9575631\" pid:6305 exited_at:{seconds:1762386411 nanos:953816697}" Nov 5 23:46:52.637346 kubelet[3539]: E1105 23:46:52.636761 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b68578ff-w27fl" podUID="9f589f34-97ee-4d82-b7d6-bdd22dcbc743" Nov 5 23:46:53.638549 kubelet[3539]: E1105 23:46:53.638470 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6856bc974c-ffszr" podUID="4aba500c-946b-4268-bb47-e30c6e97daba" Nov 5 23:46:54.638333 kubelet[3539]: E1105 23:46:54.638274 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fpftq" podUID="427c3b5f-d7ee-4425-8185-ed4318a97b1f" Nov 5 23:46:56.636313 kubelet[3539]: E1105 23:46:56.636269 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g8mz5" podUID="ffd109d6-81d2-474d-9a1e-5493102832d2" Nov 5 23:46:56.636840 kubelet[3539]: E1105 23:46:56.636499 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-625lj" podUID="486c3bf3-5c4f-4ba5-b692-994994d35c51" Nov 5 23:46:56.826477 systemd[1]: Started sshd@22-10.200.20.34:22-10.200.16.10:56400.service - OpenSSH per-connection server daemon (10.200.16.10:56400). Nov 5 23:46:57.285167 sshd[6317]: Accepted publickey for core from 10.200.16.10 port 56400 ssh2: RSA SHA256:DGuYmx06TGfTsp0LJydt5PQlzhSbWYw582/T7TjmpIk Nov 5 23:46:57.287723 sshd-session[6317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:46:57.291945 systemd-logind[1867]: New session 25 of user core. Nov 5 23:46:57.301770 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 5 23:46:57.637473 kubelet[3539]: E1105 23:46:57.637199 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7769f64cbc-fbmnx" podUID="1177c853-8b74-4ffe-9eed-6c7edaf39ab6" Nov 5 23:46:57.674610 sshd[6320]: Connection closed by 10.200.16.10 port 56400 Nov 5 23:46:57.675176 sshd-session[6317]: pam_unix(sshd:session): session closed for user core Nov 5 23:46:57.678448 systemd[1]: sshd@22-10.200.20.34:22-10.200.16.10:56400.service: Deactivated successfully. Nov 5 23:46:57.680038 systemd[1]: session-25.scope: Deactivated successfully. Nov 5 23:46:57.680928 systemd-logind[1867]: Session 25 logged out. Waiting for processes to exit. Nov 5 23:46:57.682681 systemd-logind[1867]: Removed session 25. Nov 5 23:47:02.755076 systemd[1]: Started sshd@23-10.200.20.34:22-10.200.16.10:55194.service - OpenSSH per-connection server daemon (10.200.16.10:55194). Nov 5 23:47:03.183651 sshd[6333]: Accepted publickey for core from 10.200.16.10 port 55194 ssh2: RSA SHA256:DGuYmx06TGfTsp0LJydt5PQlzhSbWYw582/T7TjmpIk Nov 5 23:47:03.185272 sshd-session[6333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:47:03.189263 systemd-logind[1867]: New session 26 of user core. Nov 5 23:47:03.193729 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 5 23:47:03.539383 sshd[6336]: Connection closed by 10.200.16.10 port 55194 Nov 5 23:47:03.540046 sshd-session[6333]: pam_unix(sshd:session): session closed for user core Nov 5 23:47:03.543218 systemd[1]: sshd@23-10.200.20.34:22-10.200.16.10:55194.service: Deactivated successfully. 
Nov 5 23:47:03.544947 systemd[1]: session-26.scope: Deactivated successfully. Nov 5 23:47:03.545771 systemd-logind[1867]: Session 26 logged out. Waiting for processes to exit. Nov 5 23:47:03.547001 systemd-logind[1867]: Removed session 26. Nov 5 23:47:03.637080 kubelet[3539]: E1105 23:47:03.636780 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6d9d55f-kpv2j" podUID="a68e46b0-801c-4548-82e3-d2eb8a4bb9ed" Nov 5 23:47:05.640048 kubelet[3539]: E1105 23:47:05.640009 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b68578ff-w27fl" podUID="9f589f34-97ee-4d82-b7d6-bdd22dcbc743" Nov 5 23:47:05.640998 kubelet[3539]: E1105 23:47:05.640946 3539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6856bc974c-ffszr" podUID="4aba500c-946b-4268-bb47-e30c6e97daba"