Nov 8 00:04:29.200401 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Nov 8 00:04:29.200425 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Nov 7 22:41:39 -00 2025
Nov 8 00:04:29.200433 kernel: KASLR enabled
Nov 8 00:04:29.200439 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Nov 8 00:04:29.200447 kernel: printk: bootconsole [pl11] enabled
Nov 8 00:04:29.200453 kernel: efi: EFI v2.7 by EDK II
Nov 8 00:04:29.200460 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Nov 8 00:04:29.200466 kernel: random: crng init done
Nov 8 00:04:29.200473 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:04:29.200479 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Nov 8 00:04:29.200485 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:04:29.200491 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:04:29.200498 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Nov 8 00:04:29.202550 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:04:29.202568 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:04:29.202575 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:04:29.202582 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:04:29.202593 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:04:29.202600 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:04:29.202607 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Nov 8 00:04:29.202613 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:04:29.202620 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Nov 8 00:04:29.202626 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Nov 8 00:04:29.202633 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Nov 8 00:04:29.202639 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Nov 8 00:04:29.202646 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Nov 8 00:04:29.202652 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Nov 8 00:04:29.202659 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Nov 8 00:04:29.202667 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Nov 8 00:04:29.202673 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Nov 8 00:04:29.202680 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Nov 8 00:04:29.202687 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Nov 8 00:04:29.202693 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Nov 8 00:04:29.202699 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Nov 8 00:04:29.202706 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Nov 8 00:04:29.202712 kernel: Zone ranges:
Nov 8 00:04:29.202719 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Nov 8 00:04:29.202725 kernel: DMA32 empty
Nov 8 00:04:29.202732 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Nov 8 00:04:29.202738 kernel: Movable zone start for each node
Nov 8 00:04:29.202749 kernel: Early memory node ranges
Nov 8 00:04:29.202756 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Nov 8 00:04:29.202763 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Nov 8 00:04:29.202770 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Nov 8 00:04:29.202777 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Nov 8 00:04:29.202785 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Nov 8 00:04:29.202792 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Nov 8 00:04:29.202799 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Nov 8 00:04:29.202807 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Nov 8 00:04:29.202814 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Nov 8 00:04:29.202821 kernel: psci: probing for conduit method from ACPI.
Nov 8 00:04:29.202828 kernel: psci: PSCIv1.1 detected in firmware.
Nov 8 00:04:29.202835 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 8 00:04:29.202842 kernel: psci: MIGRATE_INFO_TYPE not supported.
Nov 8 00:04:29.202848 kernel: psci: SMC Calling Convention v1.4
Nov 8 00:04:29.202855 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Nov 8 00:04:29.202862 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Nov 8 00:04:29.202870 kernel: percpu: Embedded 31 pages/cpu s86120 r8192 d32664 u126976
Nov 8 00:04:29.202877 kernel: pcpu-alloc: s86120 r8192 d32664 u126976 alloc=31*4096
Nov 8 00:04:29.202884 kernel: pcpu-alloc: [0] 0 [0] 1
Nov 8 00:04:29.202891 kernel: Detected PIPT I-cache on CPU0
Nov 8 00:04:29.202898 kernel: CPU features: detected: GIC system register CPU interface
Nov 8 00:04:29.202905 kernel: CPU features: detected: Hardware dirty bit management
Nov 8 00:04:29.202911 kernel: CPU features: detected: Spectre-BHB
Nov 8 00:04:29.202918 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 8 00:04:29.202925 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 8 00:04:29.202932 kernel: CPU features: detected: ARM erratum 1418040
Nov 8 00:04:29.202939 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Nov 8 00:04:29.202947 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 8 00:04:29.202954 kernel: alternatives: applying boot alternatives
Nov 8 00:04:29.202963 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=653fdcb8a67e255793a721f32d76976d3ed6223b235b7c618cf75e5edffbdb68
Nov 8 00:04:29.202970 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 8 00:04:29.202977 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 00:04:29.202984 kernel: Fallback order for Node 0: 0
Nov 8 00:04:29.202991 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Nov 8 00:04:29.202997 kernel: Policy zone: Normal
Nov 8 00:04:29.203004 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:04:29.203011 kernel: software IO TLB: area num 2.
Nov 8 00:04:29.203018 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Nov 8 00:04:29.203027 kernel: Memory: 3982624K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211536K reserved, 0K cma-reserved)
Nov 8 00:04:29.203034 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 8 00:04:29.203041 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:04:29.203049 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:04:29.203056 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 8 00:04:29.203063 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:04:29.203070 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:04:29.203077 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:04:29.203084 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 8 00:04:29.203091 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 8 00:04:29.203097 kernel: GICv3: 960 SPIs implemented
Nov 8 00:04:29.203106 kernel: GICv3: 0 Extended SPIs implemented
Nov 8 00:04:29.203113 kernel: Root IRQ handler: gic_handle_irq
Nov 8 00:04:29.203120 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Nov 8 00:04:29.203127 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Nov 8 00:04:29.203134 kernel: ITS: No ITS available, not enabling LPIs
Nov 8 00:04:29.203141 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:04:29.203147 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 8 00:04:29.203154 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Nov 8 00:04:29.203162 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Nov 8 00:04:29.203169 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Nov 8 00:04:29.203176 kernel: Console: colour dummy device 80x25
Nov 8 00:04:29.203185 kernel: printk: console [tty1] enabled
Nov 8 00:04:29.203192 kernel: ACPI: Core revision 20230628
Nov 8 00:04:29.203199 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Nov 8 00:04:29.203206 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:04:29.203214 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:04:29.203221 kernel: landlock: Up and running.
Nov 8 00:04:29.203228 kernel: SELinux: Initializing.
Nov 8 00:04:29.203235 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:04:29.203242 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:04:29.203251 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:04:29.203258 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:04:29.203265 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Nov 8 00:04:29.203272 kernel: Hyper-V: Host Build 10.0.26100.1382-1-0
Nov 8 00:04:29.203279 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Nov 8 00:04:29.203286 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:04:29.203294 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:04:29.203301 kernel: Remapping and enabling EFI services.
Nov 8 00:04:29.203315 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:04:29.203322 kernel: Detected PIPT I-cache on CPU1
Nov 8 00:04:29.203329 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Nov 8 00:04:29.203337 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 8 00:04:29.203345 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Nov 8 00:04:29.203353 kernel: smp: Brought up 1 node, 2 CPUs
Nov 8 00:04:29.203361 kernel: SMP: Total of 2 processors activated.
Nov 8 00:04:29.203368 kernel: CPU features: detected: 32-bit EL0 Support
Nov 8 00:04:29.203376 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Nov 8 00:04:29.203385 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 8 00:04:29.203392 kernel: CPU features: detected: CRC32 instructions
Nov 8 00:04:29.203400 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 8 00:04:29.203407 kernel: CPU features: detected: LSE atomic instructions
Nov 8 00:04:29.203414 kernel: CPU features: detected: Privileged Access Never
Nov 8 00:04:29.203422 kernel: CPU: All CPU(s) started at EL1
Nov 8 00:04:29.203429 kernel: alternatives: applying system-wide alternatives
Nov 8 00:04:29.203437 kernel: devtmpfs: initialized
Nov 8 00:04:29.203444 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:04:29.203453 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 8 00:04:29.203461 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:04:29.203469 kernel: SMBIOS 3.1.0 present.
Nov 8 00:04:29.203477 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Nov 8 00:04:29.203485 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:04:29.203492 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 8 00:04:29.203500 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 8 00:04:29.203521 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 8 00:04:29.203529 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:04:29.203540 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Nov 8 00:04:29.203548 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:04:29.203556 kernel: cpuidle: using governor menu
Nov 8 00:04:29.203563 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 8 00:04:29.203571 kernel: ASID allocator initialised with 32768 entries
Nov 8 00:04:29.203578 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:04:29.203586 kernel: Serial: AMBA PL011 UART driver
Nov 8 00:04:29.203593 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Nov 8 00:04:29.203601 kernel: Modules: 0 pages in range for non-PLT usage
Nov 8 00:04:29.203610 kernel: Modules: 509008 pages in range for PLT usage
Nov 8 00:04:29.203617 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:04:29.203625 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:04:29.203632 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 8 00:04:29.203640 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 8 00:04:29.203647 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:04:29.203655 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:04:29.203662 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 8 00:04:29.203670 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 8 00:04:29.203679 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:04:29.203686 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:04:29.203694 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:04:29.203701 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 00:04:29.203709 kernel: ACPI: Interpreter enabled
Nov 8 00:04:29.203716 kernel: ACPI: Using GIC for interrupt routing
Nov 8 00:04:29.203724 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Nov 8 00:04:29.203731 kernel: printk: console [ttyAMA0] enabled
Nov 8 00:04:29.203738 kernel: printk: bootconsole [pl11] disabled
Nov 8 00:04:29.203748 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Nov 8 00:04:29.203755 kernel: iommu: Default domain type: Translated
Nov 8 00:04:29.203762 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 8 00:04:29.203770 kernel: efivars: Registered efivars operations
Nov 8 00:04:29.203777 kernel: vgaarb: loaded
Nov 8 00:04:29.203785 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 8 00:04:29.203792 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:04:29.203799 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:04:29.203807 kernel: pnp: PnP ACPI init
Nov 8 00:04:29.203815 kernel: pnp: PnP ACPI: found 0 devices
Nov 8 00:04:29.203823 kernel: NET: Registered PF_INET protocol family
Nov 8 00:04:29.203830 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 00:04:29.203838 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 8 00:04:29.203846 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:04:29.203853 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 8 00:04:29.203861 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 8 00:04:29.203868 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 8 00:04:29.203876 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:04:29.203885 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:04:29.203893 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:04:29.203900 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:04:29.203908 kernel: kvm [1]: HYP mode not available
Nov 8 00:04:29.203915 kernel: Initialise system trusted keyrings
Nov 8 00:04:29.203923 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 8 00:04:29.203930 kernel: Key type asymmetric registered
Nov 8 00:04:29.203938 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:04:29.203945 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 8 00:04:29.203954 kernel: io scheduler mq-deadline registered
Nov 8 00:04:29.203962 kernel: io scheduler kyber registered
Nov 8 00:04:29.203969 kernel: io scheduler bfq registered
Nov 8 00:04:29.203976 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:04:29.203984 kernel: thunder_xcv, ver 1.0
Nov 8 00:04:29.203991 kernel: thunder_bgx, ver 1.0
Nov 8 00:04:29.203998 kernel: nicpf, ver 1.0
Nov 8 00:04:29.204006 kernel: nicvf, ver 1.0
Nov 8 00:04:29.204166 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 8 00:04:29.204241 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-08T00:04:28 UTC (1762560268)
Nov 8 00:04:29.204252 kernel: efifb: probing for efifb
Nov 8 00:04:29.204260 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Nov 8 00:04:29.204268 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Nov 8 00:04:29.204275 kernel: efifb: scrolling: redraw
Nov 8 00:04:29.204282 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 8 00:04:29.204290 kernel: Console: switching to colour frame buffer device 128x48
Nov 8 00:04:29.204297 kernel: fb0: EFI VGA frame buffer device
Nov 8 00:04:29.204307 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Nov 8 00:04:29.204315 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 8 00:04:29.204322 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Nov 8 00:04:29.204330 kernel: watchdog: Delayed init of the lockup detector failed: -19
Nov 8 00:04:29.204337 kernel: watchdog: Hard watchdog permanently disabled
Nov 8 00:04:29.204345 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:04:29.204352 kernel: Segment Routing with IPv6
Nov 8 00:04:29.204359 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:04:29.204367 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:04:29.204376 kernel: Key type dns_resolver registered
Nov 8 00:04:29.204384 kernel: registered taskstats version 1
Nov 8 00:04:29.204391 kernel: Loading compiled-in X.509 certificates
Nov 8 00:04:29.204399 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: e35af6a719ba4c60f9d6788b11f5e5836ebf73b5'
Nov 8 00:04:29.204406 kernel: Key type .fscrypt registered
Nov 8 00:04:29.204413 kernel: Key type fscrypt-provisioning registered
Nov 8 00:04:29.204420 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:04:29.204427 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:04:29.204435 kernel: ima: No architecture policies found
Nov 8 00:04:29.204444 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 8 00:04:29.204451 kernel: clk: Disabling unused clocks
Nov 8 00:04:29.204458 kernel: Freeing unused kernel memory: 39424K
Nov 8 00:04:29.204466 kernel: Run /init as init process
Nov 8 00:04:29.204473 kernel: with arguments:
Nov 8 00:04:29.204480 kernel: /init
Nov 8 00:04:29.204488 kernel: with environment:
Nov 8 00:04:29.204495 kernel: HOME=/
Nov 8 00:04:29.206527 kernel: TERM=linux
Nov 8 00:04:29.206559 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:04:29.206576 systemd[1]: Detected virtualization microsoft.
Nov 8 00:04:29.206584 systemd[1]: Detected architecture arm64.
Nov 8 00:04:29.206592 systemd[1]: Running in initrd.
Nov 8 00:04:29.206600 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:04:29.206608 systemd[1]: Hostname set to .
Nov 8 00:04:29.206616 systemd[1]: Initializing machine ID from random generator.
Nov 8 00:04:29.206626 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:04:29.206635 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:04:29.206643 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:04:29.206652 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:04:29.206661 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:04:29.206669 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:04:29.206677 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:04:29.206687 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:04:29.206697 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:04:29.206705 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:04:29.206713 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:04:29.206721 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:04:29.206730 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:04:29.206738 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:04:29.206746 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:04:29.206754 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:04:29.206764 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:04:29.206772 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:04:29.206781 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:04:29.206789 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:04:29.206797 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:04:29.206805 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:04:29.206813 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:04:29.206821 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:04:29.206831 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:04:29.206839 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:04:29.206847 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:04:29.206855 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:04:29.206890 systemd-journald[217]: Collecting audit messages is disabled.
Nov 8 00:04:29.206912 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:04:29.206920 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:04:29.206930 systemd-journald[217]: Journal started
Nov 8 00:04:29.206949 systemd-journald[217]: Runtime Journal (/run/log/journal/8ddc4ee57cea453ebd7176b5983f23e5) is 8.0M, max 78.5M, 70.5M free.
Nov 8 00:04:29.212389 systemd-modules-load[218]: Inserted module 'overlay'
Nov 8 00:04:29.238599 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:04:29.238647 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:04:29.243246 kernel: Bridge firewalling registered
Nov 8 00:04:29.245965 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:04:29.247067 systemd-modules-load[218]: Inserted module 'br_netfilter'
Nov 8 00:04:29.256828 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:04:29.265915 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:04:29.274306 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:04:29.282176 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:04:29.299785 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:04:29.307640 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:04:29.325539 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:04:29.335724 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:04:29.358562 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:04:29.363800 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:04:29.374492 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:04:29.386706 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:04:29.410789 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:04:29.423174 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:04:29.439383 dracut-cmdline[251]: dracut-dracut-053
Nov 8 00:04:29.449108 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=653fdcb8a67e255793a721f32d76976d3ed6223b235b7c618cf75e5edffbdb68
Nov 8 00:04:29.439789 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:04:29.451261 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:04:29.502777 systemd-resolved[254]: Positive Trust Anchors:
Nov 8 00:04:29.502790 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:04:29.502823 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:04:29.505227 systemd-resolved[254]: Defaulting to hostname 'linux'.
Nov 8 00:04:29.506279 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:04:29.518571 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:04:29.616527 kernel: SCSI subsystem initialized
Nov 8 00:04:29.623526 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:04:29.633531 kernel: iscsi: registered transport (tcp)
Nov 8 00:04:29.649772 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:04:29.649808 kernel: QLogic iSCSI HBA Driver
Nov 8 00:04:29.684248 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:04:29.697667 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:04:29.735780 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:04:29.735841 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:04:29.741129 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:04:29.788530 kernel: raid6: neonx8 gen() 15801 MB/s
Nov 8 00:04:29.807512 kernel: raid6: neonx4 gen() 15691 MB/s
Nov 8 00:04:29.826520 kernel: raid6: neonx2 gen() 13315 MB/s
Nov 8 00:04:29.846515 kernel: raid6: neonx1 gen() 10491 MB/s
Nov 8 00:04:29.865525 kernel: raid6: int64x8 gen() 6979 MB/s
Nov 8 00:04:29.885512 kernel: raid6: int64x4 gen() 7371 MB/s
Nov 8 00:04:29.905511 kernel: raid6: int64x2 gen() 6145 MB/s
Nov 8 00:04:29.927473 kernel: raid6: int64x1 gen() 5074 MB/s
Nov 8 00:04:29.927484 kernel: raid6: using algorithm neonx8 gen() 15801 MB/s
Nov 8 00:04:29.950420 kernel: raid6: .... xor() 12039 MB/s, rmw enabled
Nov 8 00:04:29.950431 kernel: raid6: using neon recovery algorithm
Nov 8 00:04:29.961604 kernel: xor: measuring software checksum speed
Nov 8 00:04:29.961619 kernel: 8regs : 19788 MB/sec
Nov 8 00:04:29.964543 kernel: 32regs : 19660 MB/sec
Nov 8 00:04:29.967745 kernel: arm64_neon : 27079 MB/sec
Nov 8 00:04:29.971257 kernel: xor: using function: arm64_neon (27079 MB/sec)
Nov 8 00:04:30.021527 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:04:30.031448 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:04:30.045646 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:04:30.065857 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Nov 8 00:04:30.070547 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:04:30.084775 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:04:30.105530 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation
Nov 8 00:04:30.135196 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:04:30.154774 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:04:30.193717 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:04:30.209688 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:04:30.233404 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:04:30.243875 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:04:30.257888 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:04:30.264041 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:04:30.288761 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:04:30.312775 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:04:30.336786 kernel: hv_vmbus: Vmbus version:5.3
Nov 8 00:04:30.329601 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:04:30.329744 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:04:30.386694 kernel: hv_vmbus: registering driver hyperv_keyboard
Nov 8 00:04:30.386720 kernel: hv_vmbus: registering driver hv_netvsc
Nov 8 00:04:30.386730 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 8 00:04:30.386740 kernel: hv_vmbus: registering driver hv_storvsc
Nov 8 00:04:30.386749 kernel: scsi host0: storvsc_host_t
Nov 8 00:04:30.386788 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 8 00:04:30.386807 kernel: scsi host1: storvsc_host_t
Nov 8 00:04:30.345964 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:04:30.410288 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Nov 8 00:04:30.410433 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Nov 8 00:04:30.410470 kernel: hv_vmbus: registering driver hid_hyperv
Nov 8 00:04:30.357332 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:04:30.425055 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Nov 8 00:04:30.357549 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:04:30.443377 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Nov 8 00:04:30.443564 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Nov 8 00:04:30.374473 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:04:30.461814 kernel: hv_netvsc 00224879-e20d-0022-4879-e20d00224879 eth0: VF slot 1 added
Nov 8 00:04:30.461998 kernel: PTP clock support registered
Nov 8 00:04:30.443107 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:04:30.462784 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:04:30.496317 kernel: hv_vmbus: registering driver hv_pci
Nov 8 00:04:30.496339 kernel: hv_pci 3917e307-176f-4cc3-ac94-cdb9b15ee941: PCI VMBus probing: Using version 0x10004
Nov 8 00:04:30.462893 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:04:30.515399 kernel: hv_pci 3917e307-176f-4cc3-ac94-cdb9b15ee941: PCI host bridge to bus 176f:00
Nov 8 00:04:30.520607 kernel: pci_bus 176f:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Nov 8 00:04:30.520770 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Nov 8 00:04:30.501792 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:04:30.532158 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 8 00:04:30.532194 kernel: pci_bus 176f:00: No busn resource found for root bus, will use [bus 00-ff]
Nov 8 00:04:30.539981 kernel: hv_utils: Registering HyperV Utility Driver
Nov 8 00:04:30.540023 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Nov 8 00:04:30.545905 kernel: pci 176f:00:02.0: [15b3:1018] type 00 class 0x020000
Nov 8 00:04:30.546804 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:04:30.564184 kernel: pci 176f:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Nov 8 00:04:30.574943 kernel: hv_vmbus: registering driver hv_utils
Nov 8 00:04:30.569884 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:04:30.591234 kernel: pci 176f:00:02.0: enabling Extended Tags
Nov 8 00:04:30.591275 kernel: hv_utils: Heartbeat IC version 3.0
Nov 8 00:04:30.591285 kernel: hv_utils: Shutdown IC version 3.2
Nov 8 00:04:30.591295 kernel: hv_utils: TimeSync IC version 4.0
Nov 8 00:04:30.963616 systemd-resolved[254]: Clock change detected. Flushing caches.
Nov 8 00:04:30.990079 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#193 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Nov 8 00:04:30.990294 kernel: pci 176f:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 176f:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Nov 8 00:04:30.996955 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Nov 8 00:04:31.002092 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Nov 8 00:04:31.006605 kernel: pci_bus 176f:00: busn_res: [bus 00-ff] end is updated to 00
Nov 8 00:04:31.006801 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 8 00:04:31.014943 kernel: pci 176f:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Nov 8 00:04:31.015193 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Nov 8 00:04:31.015303 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Nov 8 00:04:31.021011 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:04:31.039022 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:04:31.042393 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 8 00:04:31.073971 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#232 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Nov 8 00:04:31.099053 kernel: mlx5_core 176f:00:02.0: enabling device (0000 -> 0002)
Nov 8 00:04:31.099322 kernel: mlx5_core 176f:00:02.0: firmware version: 16.30.5006
Nov 8 00:04:31.307330 kernel: hv_netvsc 00224879-e20d-0022-4879-e20d00224879 eth0: VF registering: eth1
Nov 8 00:04:31.307549 kernel: mlx5_core 176f:00:02.0 eth1: joined to eth0
Nov 8 00:04:31.313962 kernel: mlx5_core 176f:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Nov 8 00:04:31.322959 kernel: mlx5_core 176f:00:02.0 enP5999s1: renamed from eth1
Nov 8 00:04:32.001460 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Nov 8 00:04:32.038954 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (494)
Nov 8 00:04:32.053131 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Nov 8 00:04:32.081744 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Nov 8 00:04:32.186957 kernel: BTRFS: device fsid 55a292e1-3824-4229-a9ae-952140d2698c devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (498)
Nov 8 00:04:32.200652 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Nov 8 00:04:32.206142 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Nov 8 00:04:32.237259 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:04:32.256970 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:04:32.265948 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:04:33.277614 disk-uuid[605]: The operation has completed successfully.
Nov 8 00:04:33.282397 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:04:33.344485 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:04:33.349004 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:04:33.383268 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:04:33.395072 sh[718]: Success
Nov 8 00:04:33.443411 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Nov 8 00:04:34.126432 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:04:34.135077 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:04:34.142456 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:04:34.179273 kernel: BTRFS info (device dm-0): first mount of filesystem 55a292e1-3824-4229-a9ae-952140d2698c
Nov 8 00:04:34.179327 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:04:34.185124 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:04:34.189177 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:04:34.192534 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:04:35.034120 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:04:35.039832 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:04:35.060388 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:04:35.070322 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:04:35.094755 kernel: BTRFS info (device sda6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:04:35.094821 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:04:35.098497 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:04:35.177292 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:04:35.195113 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:04:35.224698 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:04:35.225965 systemd-networkd[892]: lo: Link UP
Nov 8 00:04:35.228962 systemd-networkd[892]: lo: Gained carrier
Nov 8 00:04:35.230608 systemd-networkd[892]: Enumeration completed
Nov 8 00:04:35.257259 kernel: BTRFS info (device sda6): last unmount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:04:35.231689 systemd-networkd[892]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:04:35.231692 systemd-networkd[892]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:04:35.232176 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:04:35.247313 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 8 00:04:35.247601 systemd[1]: Reached target network.target - Network.
Nov 8 00:04:35.273639 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:04:35.301337 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:04:35.369951 kernel: mlx5_core 176f:00:02.0 enP5999s1: Link up
Nov 8 00:04:35.409199 kernel: hv_netvsc 00224879-e20d-0022-4879-e20d00224879 eth0: Data path switched to VF: enP5999s1
Nov 8 00:04:35.409568 systemd-networkd[892]: enP5999s1: Link UP
Nov 8 00:04:35.409657 systemd-networkd[892]: eth0: Link UP
Nov 8 00:04:35.409790 systemd-networkd[892]: eth0: Gained carrier
Nov 8 00:04:35.409799 systemd-networkd[892]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:04:35.430221 systemd-networkd[892]: enP5999s1: Gained carrier
Nov 8 00:04:35.442986 systemd-networkd[892]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16
Nov 8 00:04:36.624180 systemd-networkd[892]: eth0: Gained IPv6LL
Nov 8 00:04:37.041924 ignition[903]: Ignition 2.19.0
Nov 8 00:04:37.041953 ignition[903]: Stage: fetch-offline
Nov 8 00:04:37.048070 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:04:37.041993 ignition[903]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:04:37.042001 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:04:37.042090 ignition[903]: parsed url from cmdline: ""
Nov 8 00:04:37.042093 ignition[903]: no config URL provided
Nov 8 00:04:37.042098 ignition[903]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:04:37.068235 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 8 00:04:37.042105 ignition[903]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:04:37.042110 ignition[903]: failed to fetch config: resource requires networking
Nov 8 00:04:37.042286 ignition[903]: Ignition finished successfully
Nov 8 00:04:37.097572 ignition[912]: Ignition 2.19.0
Nov 8 00:04:37.097579 ignition[912]: Stage: fetch
Nov 8 00:04:37.097805 ignition[912]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:04:37.097815 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:04:37.097926 ignition[912]: parsed url from cmdline: ""
Nov 8 00:04:37.097930 ignition[912]: no config URL provided
Nov 8 00:04:37.098013 ignition[912]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:04:37.098023 ignition[912]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:04:37.098050 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Nov 8 00:04:37.222483 ignition[912]: GET result: OK
Nov 8 00:04:37.225054 ignition[912]: config has been read from IMDS userdata
Nov 8 00:04:37.225170 ignition[912]: parsing config with SHA512: ba4bcccd80eca9d4aaead328674355a13df2e745c0a004d19a00da7edbbec535059b9eaa7cda125f447ddfc1e5417414ecbc0962421665a8dd114ffb71516889
Nov 8 00:04:37.229673 unknown[912]: fetched base config from "system"
Nov 8 00:04:37.229679 unknown[912]: fetched base config from "system"
Nov 8 00:04:37.230070 ignition[912]: fetch: fetch complete
Nov 8 00:04:37.229684 unknown[912]: fetched user config from "azure"
Nov 8 00:04:37.230076 ignition[912]: fetch: fetch passed
Nov 8 00:04:37.238111 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 8 00:04:37.230119 ignition[912]: Ignition finished successfully
Nov 8 00:04:37.259218 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:04:37.286376 ignition[918]: Ignition 2.19.0
Nov 8 00:04:37.286385 ignition[918]: Stage: kargs
Nov 8 00:04:37.286585 ignition[918]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:04:37.295195 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:04:37.286595 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:04:37.287487 ignition[918]: kargs: kargs passed
Nov 8 00:04:37.287537 ignition[918]: Ignition finished successfully
Nov 8 00:04:37.320275 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:04:37.340720 ignition[924]: Ignition 2.19.0
Nov 8 00:04:37.340730 ignition[924]: Stage: disks
Nov 8 00:04:37.345374 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:04:37.340917 ignition[924]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:04:37.351825 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:04:37.340928 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:04:37.361495 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:04:37.341849 ignition[924]: disks: disks passed
Nov 8 00:04:37.371327 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:04:37.341898 ignition[924]: Ignition finished successfully
Nov 8 00:04:37.383999 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:04:37.393653 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:04:37.414232 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:04:37.484352 systemd-fsck[932]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Nov 8 00:04:37.492574 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:04:37.506189 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:04:37.562962 kernel: EXT4-fs (sda9): mounted filesystem ba97f76e-2e9b-450a-8320-3c4b94a19632 r/w with ordered data mode. Quota mode: none.
Nov 8 00:04:37.563265 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:04:37.567351 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:04:37.644017 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:04:37.663950 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943)
Nov 8 00:04:37.676279 kernel: BTRFS info (device sda6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:04:37.676336 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:04:37.679832 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:04:37.687978 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:04:37.687142 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:04:37.694164 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 8 00:04:37.702204 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:04:37.702243 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:04:37.708667 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:04:37.716635 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:04:37.733164 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:04:38.786547 coreos-metadata[960]: Nov 08 00:04:38.786 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Nov 8 00:04:38.793223 coreos-metadata[960]: Nov 08 00:04:38.792 INFO Fetch successful
Nov 8 00:04:38.793223 coreos-metadata[960]: Nov 08 00:04:38.792 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Nov 8 00:04:38.807888 coreos-metadata[960]: Nov 08 00:04:38.807 INFO Fetch successful
Nov 8 00:04:38.813187 coreos-metadata[960]: Nov 08 00:04:38.813 INFO wrote hostname ci-4081.3.6-n-5561f33395 to /sysroot/etc/hostname
Nov 8 00:04:38.814576 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 8 00:04:39.394385 initrd-setup-root[973]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:04:39.493950 initrd-setup-root[980]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:04:39.527493 initrd-setup-root[987]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:04:39.563133 initrd-setup-root[994]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:04:41.371705 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:04:41.388173 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:04:41.399227 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:04:41.413249 kernel: BTRFS info (device sda6): last unmount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:04:41.411732 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 00:04:41.441830 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 00:04:41.447238 ignition[1063]: INFO : Ignition 2.19.0
Nov 8 00:04:41.447238 ignition[1063]: INFO : Stage: mount
Nov 8 00:04:41.447238 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:04:41.447238 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:04:41.447238 ignition[1063]: INFO : mount: mount passed
Nov 8 00:04:41.447238 ignition[1063]: INFO : Ignition finished successfully
Nov 8 00:04:41.452327 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 00:04:41.476195 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 00:04:41.504126 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:04:41.530543 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1074)
Nov 8 00:04:41.530599 kernel: BTRFS info (device sda6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:04:41.535603 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:04:41.539609 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:04:41.546946 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:04:41.548386 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:04:41.571129 ignition[1092]: INFO : Ignition 2.19.0
Nov 8 00:04:41.571129 ignition[1092]: INFO : Stage: files
Nov 8 00:04:41.577490 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:04:41.577490 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:04:41.577490 ignition[1092]: DEBUG : files: compiled without relabeling support, skipping
Nov 8 00:04:41.601713 ignition[1092]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 8 00:04:41.601713 ignition[1092]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 8 00:04:41.783087 ignition[1092]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 8 00:04:41.789270 ignition[1092]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 8 00:04:41.789270 ignition[1092]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 8 00:04:41.783457 unknown[1092]: wrote ssh authorized keys file for user: core
Nov 8 00:04:41.844840 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 8 00:04:41.853328 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Nov 8 00:04:41.884586 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 8 00:04:41.951979 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Nov 8 00:04:42.487123 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 8 00:04:42.751531 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Nov 8 00:04:42.751531 ignition[1092]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 8 00:04:42.965968 ignition[1092]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:04:42.975209 ignition[1092]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:04:42.975209 ignition[1092]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 8 00:04:42.975209 ignition[1092]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 8 00:04:42.975209 ignition[1092]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 8 00:04:42.975209 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:04:42.975209 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:04:42.975209 ignition[1092]: INFO : files: files passed
Nov 8 00:04:42.975209 ignition[1092]: INFO : Ignition finished successfully
Nov 8 00:04:42.975634 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 8 00:04:43.008275 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 8 00:04:43.023130 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 8 00:04:43.049437 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 8 00:04:43.049558 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 8 00:04:43.171663 initrd-setup-root-after-ignition[1119]: grep:
Nov 8 00:04:43.171663 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:04:43.182252 initrd-setup-root-after-ignition[1119]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:04:43.182252 initrd-setup-root-after-ignition[1119]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:04:43.182896 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:04:43.194419 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 8 00:04:43.217956 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 8 00:04:43.250557 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 8 00:04:43.252987 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 8 00:04:43.260944 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 8 00:04:43.271427 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 8 00:04:43.281411 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 8 00:04:43.294217 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 8 00:04:43.322685 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:04:43.339186 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 8 00:04:43.361344 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:04:43.367343 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:04:43.385351 systemd[1]: Stopped target timers.target - Timer Units.
Nov 8 00:04:43.394851 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 8 00:04:43.394998 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:04:43.409107 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 8 00:04:43.419516 systemd[1]: Stopped target basic.target - Basic System.
Nov 8 00:04:43.428153 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 8 00:04:43.436402 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:04:43.446854 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 8 00:04:43.457312 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 8 00:04:43.466895 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:04:43.478695 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 8 00:04:43.488046 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 8 00:04:43.497403 systemd[1]: Stopped target swap.target - Swaps.
Nov 8 00:04:43.505647 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 8 00:04:43.505783 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:04:43.519070 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:04:43.524696 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:04:43.534446 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 8 00:04:43.538756 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:04:43.544451 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 8 00:04:43.544576 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:04:43.557838 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 8 00:04:43.557959 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:04:43.563299 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 8 00:04:43.563389 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 8 00:04:43.573215 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 8 00:04:43.573308 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 8 00:04:43.602237 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 8 00:04:43.612247 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 8 00:04:43.612464 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:04:43.645828 ignition[1143]: INFO : Ignition 2.19.0
Nov 8 00:04:43.645828 ignition[1143]: INFO : Stage: umount
Nov 8 00:04:43.645828 ignition[1143]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:04:43.645828 ignition[1143]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:04:43.645828 ignition[1143]: INFO : umount: umount passed
Nov 8 00:04:43.645828 ignition[1143]: INFO : Ignition finished successfully
Nov 8 00:04:43.646175 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 8 00:04:43.651120 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 8 00:04:43.651283 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:04:43.659878 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 8 00:04:43.660008 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:04:43.670288 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 8 00:04:43.671975 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 8 00:04:43.680128 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 8 00:04:43.680395 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 8 00:04:43.688790 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 8 00:04:43.688883 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 8 00:04:43.697897 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 8 00:04:43.697961 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 8 00:04:43.705797 systemd[1]: Stopped target network.target - Network.
Nov 8 00:04:43.709890 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 8 00:04:43.709977 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:04:43.722179 systemd[1]: Stopped target paths.target - Path Units.
Nov 8 00:04:43.730491 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 8 00:04:43.739919 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:04:43.745696 systemd[1]: Stopped target slices.target - Slice Units.
Nov 8 00:04:43.749999 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 8 00:04:43.754125 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 8 00:04:43.754186 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:04:43.763056 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 8 00:04:43.763101 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:04:43.771249 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 8 00:04:43.771302 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 8 00:04:43.779818 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 8 00:04:43.779856 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 8 00:04:43.788647 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:04:43.798706 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:04:43.808248 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:04:43.809048 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:04:43.809171 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:04:43.818876 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:04:43.819047 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:04:43.827615 systemd-networkd[892]: eth0: DHCPv6 lease lost Nov 8 00:04:44.021161 kernel: hv_netvsc 00224879-e20d-0022-4879-e20d00224879 eth0: Data path switched from VF: enP5999s1 Nov 8 00:04:43.829190 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:04:43.830964 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:04:43.840798 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:04:43.841976 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:04:43.850550 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:04:43.850660 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:04:43.857923 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:04:43.858065 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:04:43.883163 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:04:43.890831 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:04:43.890915 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:04:43.900598 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Nov 8 00:04:43.900649 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:04:43.909025 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:04:43.909067 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:04:43.918015 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:04:43.918058 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:04:43.926886 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:04:43.948763 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:04:43.949016 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:04:43.958520 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:04:43.958565 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:04:43.967309 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:04:43.967355 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:04:43.976154 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:04:43.976210 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:04:43.988867 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:04:43.988921 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:04:44.008971 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:04:44.009030 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:04:44.048213 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:04:44.053260 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Nov 8 00:04:44.053327 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:04:44.059541 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:04:44.059584 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:04:44.071798 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:04:44.072022 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:04:44.101207 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:04:44.101349 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:04:44.110177 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:04:44.132104 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:04:44.175018 systemd[1]: Switching root. Nov 8 00:04:44.413819 systemd-journald[217]: Journal stopped Nov 8 00:04:29.200401 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Nov 8 00:04:29.200425 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Nov 7 22:41:39 -00 2025 Nov 8 00:04:29.200433 kernel: KASLR enabled Nov 8 00:04:29.200439 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Nov 8 00:04:29.200447 kernel: printk: bootconsole [pl11] enabled Nov 8 00:04:29.200453 kernel: efi: EFI v2.7 by EDK II Nov 8 00:04:29.200460 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Nov 8 00:04:29.200466 kernel: random: crng init done Nov 8 00:04:29.200473 kernel: ACPI: Early table checksum verification disabled Nov 8 00:04:29.200479 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Nov 8 00:04:29.200485 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 
00000001 MSFT 00000001) Nov 8 00:04:29.200491 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:04:29.200498 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Nov 8 00:04:29.202550 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:04:29.202568 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:04:29.202575 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:04:29.202582 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:04:29.202593 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:04:29.202600 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:04:29.202607 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Nov 8 00:04:29.202613 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:04:29.202620 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Nov 8 00:04:29.202626 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Nov 8 00:04:29.202633 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Nov 8 00:04:29.202639 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Nov 8 00:04:29.202646 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Nov 8 00:04:29.202652 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Nov 8 00:04:29.202659 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Nov 8 00:04:29.202667 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Nov 8 00:04:29.202673 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Nov 8 00:04:29.202680 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x100000000000-0x1fffffffffff] Nov 8 00:04:29.202687 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Nov 8 00:04:29.202693 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Nov 8 00:04:29.202699 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Nov 8 00:04:29.202706 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff] Nov 8 00:04:29.202712 kernel: Zone ranges: Nov 8 00:04:29.202719 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Nov 8 00:04:29.202725 kernel: DMA32 empty Nov 8 00:04:29.202732 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Nov 8 00:04:29.202738 kernel: Movable zone start for each node Nov 8 00:04:29.202749 kernel: Early memory node ranges Nov 8 00:04:29.202756 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Nov 8 00:04:29.202763 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Nov 8 00:04:29.202770 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Nov 8 00:04:29.202777 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Nov 8 00:04:29.202785 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Nov 8 00:04:29.202792 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Nov 8 00:04:29.202799 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Nov 8 00:04:29.202807 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Nov 8 00:04:29.202814 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Nov 8 00:04:29.202821 kernel: psci: probing for conduit method from ACPI. Nov 8 00:04:29.202828 kernel: psci: PSCIv1.1 detected in firmware. Nov 8 00:04:29.202835 kernel: psci: Using standard PSCI v0.2 function IDs Nov 8 00:04:29.202842 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Nov 8 00:04:29.202848 kernel: psci: SMC Calling Convention v1.4 Nov 8 00:04:29.202855 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Nov 8 00:04:29.202862 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Nov 8 00:04:29.202870 kernel: percpu: Embedded 31 pages/cpu s86120 r8192 d32664 u126976 Nov 8 00:04:29.202877 kernel: pcpu-alloc: s86120 r8192 d32664 u126976 alloc=31*4096 Nov 8 00:04:29.202884 kernel: pcpu-alloc: [0] 0 [0] 1 Nov 8 00:04:29.202891 kernel: Detected PIPT I-cache on CPU0 Nov 8 00:04:29.202898 kernel: CPU features: detected: GIC system register CPU interface Nov 8 00:04:29.202905 kernel: CPU features: detected: Hardware dirty bit management Nov 8 00:04:29.202911 kernel: CPU features: detected: Spectre-BHB Nov 8 00:04:29.202918 kernel: CPU features: kernel page table isolation forced ON by KASLR Nov 8 00:04:29.202925 kernel: CPU features: detected: Kernel page table isolation (KPTI) Nov 8 00:04:29.202932 kernel: CPU features: detected: ARM erratum 1418040 Nov 8 00:04:29.202939 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Nov 8 00:04:29.202947 kernel: CPU features: detected: SSBS not fully self-synchronizing Nov 8 00:04:29.202954 kernel: alternatives: applying boot alternatives Nov 8 00:04:29.202963 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=653fdcb8a67e255793a721f32d76976d3ed6223b235b7c618cf75e5edffbdb68 Nov 8 00:04:29.202970 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 8 00:04:29.202977 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 8 00:04:29.202984 kernel: Fallback order for Node 0: 0 Nov 8 00:04:29.202991 kernel: Built 1 
zonelists, mobility grouping on. Total pages: 1032156 Nov 8 00:04:29.202997 kernel: Policy zone: Normal Nov 8 00:04:29.203004 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 8 00:04:29.203011 kernel: software IO TLB: area num 2. Nov 8 00:04:29.203018 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Nov 8 00:04:29.203027 kernel: Memory: 3982624K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211536K reserved, 0K cma-reserved) Nov 8 00:04:29.203034 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 8 00:04:29.203041 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 8 00:04:29.203049 kernel: rcu: RCU event tracing is enabled. Nov 8 00:04:29.203056 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 8 00:04:29.203063 kernel: Trampoline variant of Tasks RCU enabled. Nov 8 00:04:29.203070 kernel: Tracing variant of Tasks RCU enabled. Nov 8 00:04:29.203077 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 8 00:04:29.203084 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 8 00:04:29.203091 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 8 00:04:29.203097 kernel: GICv3: 960 SPIs implemented Nov 8 00:04:29.203106 kernel: GICv3: 0 Extended SPIs implemented Nov 8 00:04:29.203113 kernel: Root IRQ handler: gic_handle_irq Nov 8 00:04:29.203120 kernel: GICv3: GICv3 features: 16 PPIs, RSS Nov 8 00:04:29.203127 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Nov 8 00:04:29.203134 kernel: ITS: No ITS available, not enabling LPIs Nov 8 00:04:29.203141 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 8 00:04:29.203147 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 8 00:04:29.203154 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). 
Nov 8 00:04:29.203162 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Nov 8 00:04:29.203169 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Nov 8 00:04:29.203176 kernel: Console: colour dummy device 80x25 Nov 8 00:04:29.203185 kernel: printk: console [tty1] enabled Nov 8 00:04:29.203192 kernel: ACPI: Core revision 20230628 Nov 8 00:04:29.203199 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Nov 8 00:04:29.203206 kernel: pid_max: default: 32768 minimum: 301 Nov 8 00:04:29.203214 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 8 00:04:29.203221 kernel: landlock: Up and running. Nov 8 00:04:29.203228 kernel: SELinux: Initializing. Nov 8 00:04:29.203235 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 8 00:04:29.203242 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 8 00:04:29.203251 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:04:29.203258 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:04:29.203265 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Nov 8 00:04:29.203272 kernel: Hyper-V: Host Build 10.0.26100.1382-1-0 Nov 8 00:04:29.203279 kernel: Hyper-V: enabling crash_kexec_post_notifiers Nov 8 00:04:29.203286 kernel: rcu: Hierarchical SRCU implementation. Nov 8 00:04:29.203294 kernel: rcu: Max phase no-delay instances is 400. Nov 8 00:04:29.203301 kernel: Remapping and enabling EFI services. Nov 8 00:04:29.203315 kernel: smp: Bringing up secondary CPUs ... 
Nov 8 00:04:29.203322 kernel: Detected PIPT I-cache on CPU1 Nov 8 00:04:29.203329 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Nov 8 00:04:29.203337 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 8 00:04:29.203345 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Nov 8 00:04:29.203353 kernel: smp: Brought up 1 node, 2 CPUs Nov 8 00:04:29.203361 kernel: SMP: Total of 2 processors activated. Nov 8 00:04:29.203368 kernel: CPU features: detected: 32-bit EL0 Support Nov 8 00:04:29.203376 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Nov 8 00:04:29.203385 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Nov 8 00:04:29.203392 kernel: CPU features: detected: CRC32 instructions Nov 8 00:04:29.203400 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Nov 8 00:04:29.203407 kernel: CPU features: detected: LSE atomic instructions Nov 8 00:04:29.203414 kernel: CPU features: detected: Privileged Access Never Nov 8 00:04:29.203422 kernel: CPU: All CPU(s) started at EL1 Nov 8 00:04:29.203429 kernel: alternatives: applying system-wide alternatives Nov 8 00:04:29.203437 kernel: devtmpfs: initialized Nov 8 00:04:29.203444 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 8 00:04:29.203453 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 8 00:04:29.203461 kernel: pinctrl core: initialized pinctrl subsystem Nov 8 00:04:29.203469 kernel: SMBIOS 3.1.0 present. 
Nov 8 00:04:29.203477 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Nov 8 00:04:29.203485 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 8 00:04:29.203492 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 8 00:04:29.203500 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 8 00:04:29.203521 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 8 00:04:29.203529 kernel: audit: initializing netlink subsys (disabled) Nov 8 00:04:29.203540 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Nov 8 00:04:29.203548 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 8 00:04:29.203556 kernel: cpuidle: using governor menu Nov 8 00:04:29.203563 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Nov 8 00:04:29.203571 kernel: ASID allocator initialised with 32768 entries Nov 8 00:04:29.203578 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 8 00:04:29.203586 kernel: Serial: AMBA PL011 UART driver Nov 8 00:04:29.203593 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Nov 8 00:04:29.203601 kernel: Modules: 0 pages in range for non-PLT usage Nov 8 00:04:29.203610 kernel: Modules: 509008 pages in range for PLT usage Nov 8 00:04:29.203617 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 8 00:04:29.203625 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 8 00:04:29.203632 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 8 00:04:29.203640 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 8 00:04:29.203647 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 8 00:04:29.203655 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 8 00:04:29.203662 kernel: HugeTLB: registered 64.0 KiB page 
size, pre-allocated 0 pages Nov 8 00:04:29.203670 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 8 00:04:29.203679 kernel: ACPI: Added _OSI(Module Device) Nov 8 00:04:29.203686 kernel: ACPI: Added _OSI(Processor Device) Nov 8 00:04:29.203694 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 8 00:04:29.203701 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 8 00:04:29.203709 kernel: ACPI: Interpreter enabled Nov 8 00:04:29.203716 kernel: ACPI: Using GIC for interrupt routing Nov 8 00:04:29.203724 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Nov 8 00:04:29.203731 kernel: printk: console [ttyAMA0] enabled Nov 8 00:04:29.203738 kernel: printk: bootconsole [pl11] disabled Nov 8 00:04:29.203748 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Nov 8 00:04:29.203755 kernel: iommu: Default domain type: Translated Nov 8 00:04:29.203762 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 8 00:04:29.203770 kernel: efivars: Registered efivars operations Nov 8 00:04:29.203777 kernel: vgaarb: loaded Nov 8 00:04:29.203785 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 8 00:04:29.203792 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 00:04:29.203799 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 00:04:29.203807 kernel: pnp: PnP ACPI init Nov 8 00:04:29.203815 kernel: pnp: PnP ACPI: found 0 devices Nov 8 00:04:29.203823 kernel: NET: Registered PF_INET protocol family Nov 8 00:04:29.203830 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 8 00:04:29.203838 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 8 00:04:29.203846 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 00:04:29.203853 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 8 00:04:29.203861 
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 8 00:04:29.203868 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 8 00:04:29.203876 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 8 00:04:29.203885 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 8 00:04:29.203893 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 00:04:29.203900 kernel: PCI: CLS 0 bytes, default 64 Nov 8 00:04:29.203908 kernel: kvm [1]: HYP mode not available Nov 8 00:04:29.203915 kernel: Initialise system trusted keyrings Nov 8 00:04:29.203923 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 8 00:04:29.203930 kernel: Key type asymmetric registered Nov 8 00:04:29.203938 kernel: Asymmetric key parser 'x509' registered Nov 8 00:04:29.203945 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 8 00:04:29.203954 kernel: io scheduler mq-deadline registered Nov 8 00:04:29.203962 kernel: io scheduler kyber registered Nov 8 00:04:29.203969 kernel: io scheduler bfq registered Nov 8 00:04:29.203976 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 00:04:29.203984 kernel: thunder_xcv, ver 1.0 Nov 8 00:04:29.203991 kernel: thunder_bgx, ver 1.0 Nov 8 00:04:29.203998 kernel: nicpf, ver 1.0 Nov 8 00:04:29.204006 kernel: nicvf, ver 1.0 Nov 8 00:04:29.204166 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 8 00:04:29.204241 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-08T00:04:28 UTC (1762560268) Nov 8 00:04:29.204252 kernel: efifb: probing for efifb Nov 8 00:04:29.204260 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Nov 8 00:04:29.204268 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Nov 8 00:04:29.204275 kernel: efifb: scrolling: redraw Nov 8 00:04:29.204282 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 8 00:04:29.204290 kernel: Console: switching to colour 
frame buffer device 128x48 Nov 8 00:04:29.204297 kernel: fb0: EFI VGA frame buffer device Nov 8 00:04:29.204307 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Nov 8 00:04:29.204315 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 8 00:04:29.204322 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Nov 8 00:04:29.204330 kernel: watchdog: Delayed init of the lockup detector failed: -19 Nov 8 00:04:29.204337 kernel: watchdog: Hard watchdog permanently disabled Nov 8 00:04:29.204345 kernel: NET: Registered PF_INET6 protocol family Nov 8 00:04:29.204352 kernel: Segment Routing with IPv6 Nov 8 00:04:29.204359 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 00:04:29.204367 kernel: NET: Registered PF_PACKET protocol family Nov 8 00:04:29.204376 kernel: Key type dns_resolver registered Nov 8 00:04:29.204384 kernel: registered taskstats version 1 Nov 8 00:04:29.204391 kernel: Loading compiled-in X.509 certificates Nov 8 00:04:29.204399 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: e35af6a719ba4c60f9d6788b11f5e5836ebf73b5' Nov 8 00:04:29.204406 kernel: Key type .fscrypt registered Nov 8 00:04:29.204413 kernel: Key type fscrypt-provisioning registered Nov 8 00:04:29.204420 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 8 00:04:29.204427 kernel: ima: Allocated hash algorithm: sha1 Nov 8 00:04:29.204435 kernel: ima: No architecture policies found Nov 8 00:04:29.204444 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 8 00:04:29.204451 kernel: clk: Disabling unused clocks Nov 8 00:04:29.204458 kernel: Freeing unused kernel memory: 39424K Nov 8 00:04:29.204466 kernel: Run /init as init process Nov 8 00:04:29.204473 kernel: with arguments: Nov 8 00:04:29.204480 kernel: /init Nov 8 00:04:29.204488 kernel: with environment: Nov 8 00:04:29.204495 kernel: HOME=/ Nov 8 00:04:29.206527 kernel: TERM=linux Nov 8 00:04:29.206559 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:04:29.206576 systemd[1]: Detected virtualization microsoft. Nov 8 00:04:29.206584 systemd[1]: Detected architecture arm64. Nov 8 00:04:29.206592 systemd[1]: Running in initrd. Nov 8 00:04:29.206600 systemd[1]: No hostname configured, using default hostname. Nov 8 00:04:29.206608 systemd[1]: Hostname set to . Nov 8 00:04:29.206616 systemd[1]: Initializing machine ID from random generator. Nov 8 00:04:29.206626 systemd[1]: Queued start job for default target initrd.target. Nov 8 00:04:29.206635 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:04:29.206643 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:04:29.206652 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 00:04:29.206661 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Nov 8 00:04:29.206669 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 00:04:29.206677 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 8 00:04:29.206687 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 00:04:29.206697 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 00:04:29.206705 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:04:29.206713 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:04:29.206721 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:04:29.206730 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:04:29.206738 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:04:29.206746 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:04:29.206754 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:04:29.206764 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:04:29.206772 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:04:29.206781 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 8 00:04:29.206789 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:04:29.206797 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:04:29.206805 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:04:29.206813 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:04:29.206821 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 00:04:29.206831 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Nov 8 00:04:29.206839 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:04:29.206847 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:04:29.206855 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:04:29.206890 systemd-journald[217]: Collecting audit messages is disabled. Nov 8 00:04:29.206912 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:04:29.206920 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:04:29.206930 systemd-journald[217]: Journal started Nov 8 00:04:29.206949 systemd-journald[217]: Runtime Journal (/run/log/journal/8ddc4ee57cea453ebd7176b5983f23e5) is 8.0M, max 78.5M, 70.5M free. Nov 8 00:04:29.212389 systemd-modules-load[218]: Inserted module 'overlay' Nov 8 00:04:29.238599 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 00:04:29.238647 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:04:29.243246 kernel: Bridge firewalling registered Nov 8 00:04:29.245965 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 00:04:29.247067 systemd-modules-load[218]: Inserted module 'br_netfilter' Nov 8 00:04:29.256828 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:04:29.265915 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 00:04:29.274306 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:04:29.282176 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:04:29.299785 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:04:29.307640 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Nov 8 00:04:29.325539 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:04:29.335724 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:04:29.358562 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:04:29.363800 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:04:29.374492 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:04:29.386706 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:04:29.410789 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 8 00:04:29.423174 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:04:29.439383 dracut-cmdline[251]: dracut-dracut-053 Nov 8 00:04:29.449108 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=653fdcb8a67e255793a721f32d76976d3ed6223b235b7c618cf75e5edffbdb68 Nov 8 00:04:29.439789 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:04:29.451261 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:04:29.502777 systemd-resolved[254]: Positive Trust Anchors: Nov 8 00:04:29.502790 systemd-resolved[254]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:04:29.502823 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:04:29.505227 systemd-resolved[254]: Defaulting to hostname 'linux'. Nov 8 00:04:29.506279 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:04:29.518571 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:04:29.616527 kernel: SCSI subsystem initialized Nov 8 00:04:29.623526 kernel: Loading iSCSI transport class v2.0-870. Nov 8 00:04:29.633531 kernel: iscsi: registered transport (tcp) Nov 8 00:04:29.649772 kernel: iscsi: registered transport (qla4xxx) Nov 8 00:04:29.649808 kernel: QLogic iSCSI HBA Driver Nov 8 00:04:29.684248 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 00:04:29.697667 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 00:04:29.735780 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 8 00:04:29.735841 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:04:29.741129 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:04:29.788530 kernel: raid6: neonx8 gen() 15801 MB/s
Nov 8 00:04:29.807512 kernel: raid6: neonx4 gen() 15691 MB/s
Nov 8 00:04:29.826520 kernel: raid6: neonx2 gen() 13315 MB/s
Nov 8 00:04:29.846515 kernel: raid6: neonx1 gen() 10491 MB/s
Nov 8 00:04:29.865525 kernel: raid6: int64x8 gen() 6979 MB/s
Nov 8 00:04:29.885512 kernel: raid6: int64x4 gen() 7371 MB/s
Nov 8 00:04:29.905511 kernel: raid6: int64x2 gen() 6145 MB/s
Nov 8 00:04:29.927473 kernel: raid6: int64x1 gen() 5074 MB/s
Nov 8 00:04:29.927484 kernel: raid6: using algorithm neonx8 gen() 15801 MB/s
Nov 8 00:04:29.950420 kernel: raid6: .... xor() 12039 MB/s, rmw enabled
Nov 8 00:04:29.950431 kernel: raid6: using neon recovery algorithm
Nov 8 00:04:29.961604 kernel: xor: measuring software checksum speed
Nov 8 00:04:29.961619 kernel: 8regs : 19788 MB/sec
Nov 8 00:04:29.964543 kernel: 32regs : 19660 MB/sec
Nov 8 00:04:29.967745 kernel: arm64_neon : 27079 MB/sec
Nov 8 00:04:29.971257 kernel: xor: using function: arm64_neon (27079 MB/sec)
Nov 8 00:04:30.021527 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:04:30.031448 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:04:30.045646 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:04:30.065857 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Nov 8 00:04:30.070547 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:04:30.084775 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:04:30.105530 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation
Nov 8 00:04:30.135196 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:04:30.154774 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:04:30.193717 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:04:30.209688 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:04:30.233404 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:04:30.243875 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:04:30.257888 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:04:30.264041 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:04:30.288761 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:04:30.312775 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:04:30.336786 kernel: hv_vmbus: Vmbus version:5.3
Nov 8 00:04:30.329601 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:04:30.329744 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:04:30.386694 kernel: hv_vmbus: registering driver hyperv_keyboard
Nov 8 00:04:30.386720 kernel: hv_vmbus: registering driver hv_netvsc
Nov 8 00:04:30.386730 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 8 00:04:30.386740 kernel: hv_vmbus: registering driver hv_storvsc
Nov 8 00:04:30.386749 kernel: scsi host0: storvsc_host_t
Nov 8 00:04:30.386788 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 8 00:04:30.386807 kernel: scsi host1: storvsc_host_t
Nov 8 00:04:30.345964 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:04:30.410288 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Nov 8 00:04:30.410433 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Nov 8 00:04:30.410470 kernel: hv_vmbus: registering driver hid_hyperv
Nov 8 00:04:30.357332 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:04:30.425055 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Nov 8 00:04:30.357549 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:04:30.443377 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Nov 8 00:04:30.443564 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Nov 8 00:04:30.374473 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:04:30.461814 kernel: hv_netvsc 00224879-e20d-0022-4879-e20d00224879 eth0: VF slot 1 added
Nov 8 00:04:30.461998 kernel: PTP clock support registered
Nov 8 00:04:30.443107 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:04:30.462784 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:04:30.496317 kernel: hv_vmbus: registering driver hv_pci
Nov 8 00:04:30.496339 kernel: hv_pci 3917e307-176f-4cc3-ac94-cdb9b15ee941: PCI VMBus probing: Using version 0x10004
Nov 8 00:04:30.462893 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:04:30.515399 kernel: hv_pci 3917e307-176f-4cc3-ac94-cdb9b15ee941: PCI host bridge to bus 176f:00
Nov 8 00:04:30.520607 kernel: pci_bus 176f:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Nov 8 00:04:30.520770 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Nov 8 00:04:30.501792 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:04:30.532158 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 8 00:04:30.532194 kernel: pci_bus 176f:00: No busn resource found for root bus, will use [bus 00-ff]
Nov 8 00:04:30.539981 kernel: hv_utils: Registering HyperV Utility Driver
Nov 8 00:04:30.540023 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Nov 8 00:04:30.545905 kernel: pci 176f:00:02.0: [15b3:1018] type 00 class 0x020000
Nov 8 00:04:30.546804 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:04:30.564184 kernel: pci 176f:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Nov 8 00:04:30.574943 kernel: hv_vmbus: registering driver hv_utils
Nov 8 00:04:30.569884 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:04:30.591234 kernel: pci 176f:00:02.0: enabling Extended Tags
Nov 8 00:04:30.591275 kernel: hv_utils: Heartbeat IC version 3.0
Nov 8 00:04:30.591285 kernel: hv_utils: Shutdown IC version 3.2
Nov 8 00:04:30.591295 kernel: hv_utils: TimeSync IC version 4.0
Nov 8 00:04:30.963616 systemd-resolved[254]: Clock change detected. Flushing caches.
Nov 8 00:04:30.990079 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#193 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Nov 8 00:04:30.990294 kernel: pci 176f:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 176f:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Nov 8 00:04:30.996955 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Nov 8 00:04:31.002092 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Nov 8 00:04:31.006605 kernel: pci_bus 176f:00: busn_res: [bus 00-ff] end is updated to 00
Nov 8 00:04:31.006801 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 8 00:04:31.014943 kernel: pci 176f:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Nov 8 00:04:31.015193 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Nov 8 00:04:31.015303 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Nov 8 00:04:31.021011 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:04:31.039022 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:04:31.042393 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 8 00:04:31.073971 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#232 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Nov 8 00:04:31.099053 kernel: mlx5_core 176f:00:02.0: enabling device (0000 -> 0002)
Nov 8 00:04:31.099322 kernel: mlx5_core 176f:00:02.0: firmware version: 16.30.5006
Nov 8 00:04:31.307330 kernel: hv_netvsc 00224879-e20d-0022-4879-e20d00224879 eth0: VF registering: eth1
Nov 8 00:04:31.307549 kernel: mlx5_core 176f:00:02.0 eth1: joined to eth0
Nov 8 00:04:31.313962 kernel: mlx5_core 176f:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Nov 8 00:04:31.322959 kernel: mlx5_core 176f:00:02.0 enP5999s1: renamed from eth1
Nov 8 00:04:32.001460 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Nov 8 00:04:32.038954 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (494)
Nov 8 00:04:32.053131 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Nov 8 00:04:32.081744 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Nov 8 00:04:32.186957 kernel: BTRFS: device fsid 55a292e1-3824-4229-a9ae-952140d2698c devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (498)
Nov 8 00:04:32.200652 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Nov 8 00:04:32.206142 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Nov 8 00:04:32.237259 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:04:32.256970 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:04:32.265948 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:04:33.277614 disk-uuid[605]: The operation has completed successfully.
Nov 8 00:04:33.282397 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:04:33.344485 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:04:33.349004 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:04:33.383268 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:04:33.395072 sh[718]: Success
Nov 8 00:04:33.443411 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Nov 8 00:04:34.126432 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:04:34.135077 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:04:34.142456 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:04:34.179273 kernel: BTRFS info (device dm-0): first mount of filesystem 55a292e1-3824-4229-a9ae-952140d2698c
Nov 8 00:04:34.179327 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:04:34.185124 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:04:34.189177 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:04:34.192534 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:04:35.034120 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:04:35.039832 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:04:35.060388 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:04:35.070322 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:04:35.094755 kernel: BTRFS info (device sda6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:04:35.094821 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:04:35.098497 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:04:35.177292 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:04:35.195113 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:04:35.224698 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:04:35.225965 systemd-networkd[892]: lo: Link UP
Nov 8 00:04:35.228962 systemd-networkd[892]: lo: Gained carrier
Nov 8 00:04:35.230608 systemd-networkd[892]: Enumeration completed
Nov 8 00:04:35.257259 kernel: BTRFS info (device sda6): last unmount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:04:35.231689 systemd-networkd[892]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:04:35.231692 systemd-networkd[892]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:04:35.232176 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:04:35.247313 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 8 00:04:35.247601 systemd[1]: Reached target network.target - Network.
Nov 8 00:04:35.273639 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:04:35.301337 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:04:35.369951 kernel: mlx5_core 176f:00:02.0 enP5999s1: Link up
Nov 8 00:04:35.409199 kernel: hv_netvsc 00224879-e20d-0022-4879-e20d00224879 eth0: Data path switched to VF: enP5999s1
Nov 8 00:04:35.409568 systemd-networkd[892]: enP5999s1: Link UP
Nov 8 00:04:35.409657 systemd-networkd[892]: eth0: Link UP
Nov 8 00:04:35.409790 systemd-networkd[892]: eth0: Gained carrier
Nov 8 00:04:35.409799 systemd-networkd[892]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:04:35.430221 systemd-networkd[892]: enP5999s1: Gained carrier
Nov 8 00:04:35.442986 systemd-networkd[892]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16
Nov 8 00:04:36.624180 systemd-networkd[892]: eth0: Gained IPv6LL
Nov 8 00:04:37.041924 ignition[903]: Ignition 2.19.0
Nov 8 00:04:37.041953 ignition[903]: Stage: fetch-offline
Nov 8 00:04:37.048070 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:04:37.041993 ignition[903]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:04:37.042001 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:04:37.042090 ignition[903]: parsed url from cmdline: ""
Nov 8 00:04:37.042093 ignition[903]: no config URL provided
Nov 8 00:04:37.042098 ignition[903]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:04:37.068235 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 8 00:04:37.042105 ignition[903]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:04:37.042110 ignition[903]: failed to fetch config: resource requires networking
Nov 8 00:04:37.042286 ignition[903]: Ignition finished successfully
Nov 8 00:04:37.097572 ignition[912]: Ignition 2.19.0
Nov 8 00:04:37.097579 ignition[912]: Stage: fetch
Nov 8 00:04:37.097805 ignition[912]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:04:37.097815 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:04:37.097926 ignition[912]: parsed url from cmdline: ""
Nov 8 00:04:37.097930 ignition[912]: no config URL provided
Nov 8 00:04:37.098013 ignition[912]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:04:37.098023 ignition[912]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:04:37.098050 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Nov 8 00:04:37.222483 ignition[912]: GET result: OK
Nov 8 00:04:37.225054 ignition[912]: config has been read from IMDS userdata
Nov 8 00:04:37.225170 ignition[912]: parsing config with SHA512: ba4bcccd80eca9d4aaead328674355a13df2e745c0a004d19a00da7edbbec535059b9eaa7cda125f447ddfc1e5417414ecbc0962421665a8dd114ffb71516889
Nov 8 00:04:37.229673 unknown[912]: fetched base config from "system"
Nov 8 00:04:37.229679 unknown[912]: fetched base config from "system"
Nov 8 00:04:37.230070 ignition[912]: fetch: fetch complete
Nov 8 00:04:37.229684 unknown[912]: fetched user config from "azure"
Nov 8 00:04:37.230076 ignition[912]: fetch: fetch passed
Nov 8 00:04:37.238111 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 8 00:04:37.230119 ignition[912]: Ignition finished successfully
Nov 8 00:04:37.259218 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:04:37.286376 ignition[918]: Ignition 2.19.0
Nov 8 00:04:37.286385 ignition[918]: Stage: kargs
Nov 8 00:04:37.286585 ignition[918]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:04:37.295195 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:04:37.286595 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:04:37.287487 ignition[918]: kargs: kargs passed
Nov 8 00:04:37.287537 ignition[918]: Ignition finished successfully
Nov 8 00:04:37.320275 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:04:37.340720 ignition[924]: Ignition 2.19.0
Nov 8 00:04:37.340730 ignition[924]: Stage: disks
Nov 8 00:04:37.345374 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:04:37.340917 ignition[924]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:04:37.351825 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:04:37.340928 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:04:37.361495 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:04:37.341849 ignition[924]: disks: disks passed
Nov 8 00:04:37.371327 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:04:37.341898 ignition[924]: Ignition finished successfully
Nov 8 00:04:37.383999 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:04:37.393653 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:04:37.414232 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:04:37.484352 systemd-fsck[932]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Nov 8 00:04:37.492574 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:04:37.506189 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:04:37.562962 kernel: EXT4-fs (sda9): mounted filesystem ba97f76e-2e9b-450a-8320-3c4b94a19632 r/w with ordered data mode. Quota mode: none.
Nov 8 00:04:37.563265 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:04:37.567351 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:04:37.644017 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:04:37.663950 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943)
Nov 8 00:04:37.676279 kernel: BTRFS info (device sda6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:04:37.676336 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:04:37.679832 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:04:37.687978 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:04:37.687142 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:04:37.694164 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 8 00:04:37.702204 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:04:37.702243 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:04:37.708667 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:04:37.716635 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:04:37.733164 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:04:38.786547 coreos-metadata[960]: Nov 08 00:04:38.786 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Nov 8 00:04:38.793223 coreos-metadata[960]: Nov 08 00:04:38.792 INFO Fetch successful
Nov 8 00:04:38.793223 coreos-metadata[960]: Nov 08 00:04:38.792 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Nov 8 00:04:38.807888 coreos-metadata[960]: Nov 08 00:04:38.807 INFO Fetch successful
Nov 8 00:04:38.813187 coreos-metadata[960]: Nov 08 00:04:38.813 INFO wrote hostname ci-4081.3.6-n-5561f33395 to /sysroot/etc/hostname
Nov 8 00:04:38.814576 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 8 00:04:39.394385 initrd-setup-root[973]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:04:39.493950 initrd-setup-root[980]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:04:39.527493 initrd-setup-root[987]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:04:39.563133 initrd-setup-root[994]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:04:41.371705 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:04:41.388173 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:04:41.399227 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:04:41.413249 kernel: BTRFS info (device sda6): last unmount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:04:41.411732 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 00:04:41.441830 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 00:04:41.447238 ignition[1063]: INFO : Ignition 2.19.0
Nov 8 00:04:41.447238 ignition[1063]: INFO : Stage: mount
Nov 8 00:04:41.447238 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:04:41.447238 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:04:41.447238 ignition[1063]: INFO : mount: mount passed
Nov 8 00:04:41.447238 ignition[1063]: INFO : Ignition finished successfully
Nov 8 00:04:41.452327 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 00:04:41.476195 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 00:04:41.504126 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:04:41.530543 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1074)
Nov 8 00:04:41.530599 kernel: BTRFS info (device sda6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:04:41.535603 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:04:41.539609 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:04:41.546946 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:04:41.548386 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:04:41.571129 ignition[1092]: INFO : Ignition 2.19.0
Nov 8 00:04:41.571129 ignition[1092]: INFO : Stage: files
Nov 8 00:04:41.577490 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:04:41.577490 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:04:41.577490 ignition[1092]: DEBUG : files: compiled without relabeling support, skipping
Nov 8 00:04:41.601713 ignition[1092]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 8 00:04:41.601713 ignition[1092]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 8 00:04:41.783087 ignition[1092]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 8 00:04:41.789270 ignition[1092]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 8 00:04:41.789270 ignition[1092]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 8 00:04:41.783457 unknown[1092]: wrote ssh authorized keys file for user: core
Nov 8 00:04:41.844840 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 8 00:04:41.853328 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Nov 8 00:04:41.884586 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 8 00:04:41.951979 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Nov 8 00:04:41.960986 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Nov 8 00:04:42.487123 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 8 00:04:42.751531 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Nov 8 00:04:42.751531 ignition[1092]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 8 00:04:42.965968 ignition[1092]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:04:42.975209 ignition[1092]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:04:42.975209 ignition[1092]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 8 00:04:42.975209 ignition[1092]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 8 00:04:42.975209 ignition[1092]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 8 00:04:42.975209 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:04:42.975209 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:04:42.975209 ignition[1092]: INFO : files: files passed
Nov 8 00:04:42.975209 ignition[1092]: INFO : Ignition finished successfully
Nov 8 00:04:42.975634 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 8 00:04:43.008275 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 8 00:04:43.023130 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 8 00:04:43.049437 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 8 00:04:43.049558 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 8 00:04:43.171663 initrd-setup-root-after-ignition[1119]: grep:
Nov 8 00:04:43.171663 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:04:43.182252 initrd-setup-root-after-ignition[1119]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:04:43.182252 initrd-setup-root-after-ignition[1119]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:04:43.182896 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:04:43.194419 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 8 00:04:43.217956 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 8 00:04:43.250557 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 8 00:04:43.252987 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 8 00:04:43.260944 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 8 00:04:43.271427 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 8 00:04:43.281411 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 8 00:04:43.294217 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 8 00:04:43.322685 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:04:43.339186 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 8 00:04:43.361344 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:04:43.367343 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:04:43.385351 systemd[1]: Stopped target timers.target - Timer Units.
Nov 8 00:04:43.394851 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 8 00:04:43.394998 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:04:43.409107 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 8 00:04:43.419516 systemd[1]: Stopped target basic.target - Basic System.
Nov 8 00:04:43.428153 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 8 00:04:43.436402 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:04:43.446854 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 8 00:04:43.457312 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 8 00:04:43.466895 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:04:43.478695 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 8 00:04:43.488046 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 8 00:04:43.497403 systemd[1]: Stopped target swap.target - Swaps.
Nov 8 00:04:43.505647 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 8 00:04:43.505783 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:04:43.519070 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:04:43.524696 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:04:43.534446 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 8 00:04:43.538756 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:04:43.544451 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 8 00:04:43.544576 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:04:43.557838 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:04:43.557959 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:04:43.563299 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:04:43.563389 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:04:43.573215 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 8 00:04:43.573308 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:04:43.602237 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:04:43.612247 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:04:43.612464 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:04:43.645828 ignition[1143]: INFO : Ignition 2.19.0 Nov 8 00:04:43.645828 ignition[1143]: INFO : Stage: umount Nov 8 00:04:43.645828 ignition[1143]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:04:43.645828 ignition[1143]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:04:43.645828 ignition[1143]: INFO : umount: umount passed Nov 8 00:04:43.645828 ignition[1143]: INFO : Ignition finished successfully Nov 8 00:04:43.646175 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:04:43.651120 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:04:43.651283 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:04:43.659878 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:04:43.660008 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:04:43.670288 systemd[1]: ignition-mount.service: Deactivated successfully. 
Nov 8 00:04:43.671975 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:04:43.680128 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:04:43.680395 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:04:43.688790 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:04:43.688883 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:04:43.697897 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 8 00:04:43.697961 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 8 00:04:43.705797 systemd[1]: Stopped target network.target - Network. Nov 8 00:04:43.709890 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:04:43.709977 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:04:43.722179 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:04:43.730491 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:04:43.739919 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:04:43.745696 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:04:43.749999 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:04:43.754125 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:04:43.754186 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:04:43.763056 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:04:43.763101 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:04:43.771249 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:04:43.771302 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:04:43.779818 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:04:43.779856 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Nov 8 00:04:43.788647 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:04:43.798706 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:04:43.808248 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:04:43.809048 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:04:43.809171 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:04:43.818876 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:04:43.819047 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:04:43.827615 systemd-networkd[892]: eth0: DHCPv6 lease lost Nov 8 00:04:44.021161 kernel: hv_netvsc 00224879-e20d-0022-4879-e20d00224879 eth0: Data path switched from VF: enP5999s1 Nov 8 00:04:43.829190 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:04:43.830964 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:04:43.840798 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:04:43.841976 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:04:43.850550 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:04:43.850660 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:04:43.857923 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:04:43.858065 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:04:43.883163 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:04:43.890831 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:04:43.890915 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:04:43.900598 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Nov 8 00:04:43.900649 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:04:43.909025 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:04:43.909067 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:04:43.918015 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:04:43.918058 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:04:43.926886 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:04:43.948763 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:04:43.949016 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:04:43.958520 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:04:43.958565 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:04:43.967309 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:04:43.967355 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:04:43.976154 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:04:43.976210 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:04:43.988867 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:04:43.988921 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:04:44.008971 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:04:44.009030 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:04:44.048213 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:04:44.053260 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Nov 8 00:04:44.053327 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:04:44.059541 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:04:44.059584 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:04:44.071798 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:04:44.072022 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:04:44.101207 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:04:44.101349 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:04:44.110177 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:04:44.132104 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:04:44.175018 systemd[1]: Switching root. Nov 8 00:04:44.413819 systemd-journald[217]: Journal stopped Nov 8 00:04:59.290163 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Nov 8 00:04:59.290190 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:04:59.290200 kernel: SELinux: policy capability open_perms=1 Nov 8 00:04:59.290211 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:04:59.290219 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:04:59.290227 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:04:59.290236 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:04:59.290244 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:04:59.290252 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:04:59.290260 kernel: audit: type=1403 audit(1762560287.261:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:04:59.290271 systemd[1]: Successfully loaded SELinux policy in 311.968ms. Nov 8 00:04:59.290282 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.645ms. 
Nov 8 00:04:59.290292 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:04:59.290302 systemd[1]: Detected virtualization microsoft. Nov 8 00:04:59.290312 systemd[1]: Detected architecture arm64. Nov 8 00:04:59.290325 systemd[1]: Detected first boot. Nov 8 00:04:59.290334 systemd[1]: Hostname set to . Nov 8 00:04:59.290344 systemd[1]: Initializing machine ID from random generator. Nov 8 00:04:59.290354 zram_generator::config[1188]: No configuration found. Nov 8 00:04:59.290364 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:04:59.290373 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 8 00:04:59.290384 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 8 00:04:59.290394 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 8 00:04:59.290405 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:04:59.290414 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:04:59.290424 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:04:59.290434 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:04:59.290443 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:04:59.290455 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:04:59.290465 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:04:59.290475 systemd[1]: Created slice user.slice - User and Session Slice. 
Nov 8 00:04:59.290484 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:04:59.290494 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:04:59.290504 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:04:59.290514 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:04:59.290523 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 8 00:04:59.290534 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:04:59.290545 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 8 00:04:59.290555 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:04:59.290565 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 8 00:04:59.290577 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 8 00:04:59.290586 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 8 00:04:59.290596 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:04:59.290606 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:04:59.290618 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:04:59.290628 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:04:59.290637 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:04:59.290647 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:04:59.290657 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:04:59.290667 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Nov 8 00:04:59.290677 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:04:59.290689 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:04:59.290699 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:04:59.290710 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:04:59.290720 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:04:59.290730 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:04:59.290739 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:04:59.290752 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:04:59.290762 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:04:59.290772 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:04:59.290783 systemd[1]: Reached target machines.target - Containers. Nov 8 00:04:59.290792 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:04:59.290803 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:04:59.290812 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:04:59.290822 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:04:59.290834 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:04:59.290844 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:04:59.290854 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:04:59.290864 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Nov 8 00:04:59.290874 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:04:59.290884 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:04:59.290894 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 8 00:04:59.290904 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 8 00:04:59.290914 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 8 00:04:59.290926 systemd[1]: Stopped systemd-fsck-usr.service. Nov 8 00:04:59.290945 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:04:59.290956 kernel: fuse: init (API version 7.39) Nov 8 00:04:59.290965 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:04:59.290975 kernel: loop: module loaded Nov 8 00:04:59.290985 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 8 00:04:59.290996 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:04:59.291006 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:04:59.291016 systemd[1]: verity-setup.service: Deactivated successfully. Nov 8 00:04:59.291028 systemd[1]: Stopped verity-setup.service. Nov 8 00:04:59.291054 systemd-journald[1267]: Collecting audit messages is disabled. Nov 8 00:04:59.291075 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:04:59.291085 systemd-journald[1267]: Journal started Nov 8 00:04:59.291108 systemd-journald[1267]: Runtime Journal (/run/log/journal/b0541560a87944f19d91996d2eed7fc8) is 8.0M, max 78.5M, 70.5M free. Nov 8 00:04:57.787301 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:04:58.292403 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. 
Nov 8 00:04:58.292792 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 8 00:04:58.293139 systemd[1]: systemd-journald.service: Consumed 2.615s CPU time. Nov 8 00:04:59.309549 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:04:59.310321 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:04:59.315526 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:04:59.319764 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:04:59.324872 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:04:59.331641 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:04:59.336495 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:04:59.342364 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:04:59.349313 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:04:59.349465 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:04:59.357991 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:04:59.358146 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:04:59.359958 kernel: ACPI: bus type drm_connector registered Nov 8 00:04:59.365219 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:04:59.365366 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:04:59.370980 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:04:59.371122 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:04:59.377502 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:04:59.377641 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:04:59.383351 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Nov 8 00:04:59.383489 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:04:59.388912 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:04:59.395196 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:04:59.400518 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:04:59.419663 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 8 00:04:59.430127 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 8 00:04:59.437117 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:04:59.442319 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:04:59.442363 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:04:59.447900 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:04:59.454421 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:04:59.461057 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:04:59.465624 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:04:59.494095 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 8 00:04:59.499895 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:04:59.505432 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:04:59.506523 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Nov 8 00:04:59.511413 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:04:59.513153 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:04:59.521164 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:04:59.530771 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 8 00:04:59.536792 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:04:59.542782 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:04:59.548477 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:04:59.555723 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:04:59.569128 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:04:59.593688 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:04:59.599856 udevadm[1329]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 8 00:04:59.601119 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:04:59.615350 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 8 00:04:59.649744 systemd-journald[1267]: Time spent on flushing to /var/log/journal/b0541560a87944f19d91996d2eed7fc8 is 15.418ms for 900 entries. Nov 8 00:04:59.649744 systemd-journald[1267]: System Journal (/var/log/journal/b0541560a87944f19d91996d2eed7fc8) is 8.0M, max 2.6G, 2.6G free. Nov 8 00:04:59.726223 systemd-journald[1267]: Received client request to flush runtime journal. 
Nov 8 00:04:59.726289 kernel: loop0: detected capacity change from 0 to 31320 Nov 8 00:04:59.728955 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:04:59.748838 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:04:59.749567 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 8 00:04:59.854983 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:05:00.628969 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:05:00.794960 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:05:00.804178 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:05:00.964081 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:05:00.987953 kernel: loop1: detected capacity change from 0 to 114432 Nov 8 00:05:01.188219 systemd-tmpfiles[1340]: ACLs are not supported, ignoring. Nov 8 00:05:01.188237 systemd-tmpfiles[1340]: ACLs are not supported, ignoring. Nov 8 00:05:01.192973 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:05:01.205154 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:05:01.226841 systemd-udevd[1345]: Using default interface naming scheme 'v255'. Nov 8 00:05:01.904952 kernel: loop2: detected capacity change from 0 to 200800 Nov 8 00:05:01.963958 kernel: loop3: detected capacity change from 0 to 114328 Nov 8 00:05:03.130412 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:05:03.149239 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Nov 8 00:05:03.184967 kernel: loop4: detected capacity change from 0 to 31320 Nov 8 00:05:03.204103 kernel: loop5: detected capacity change from 0 to 114432 Nov 8 00:05:03.220010 kernel: loop6: detected capacity change from 0 to 200800 Nov 8 00:05:03.236201 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:05:03.252772 kernel: loop7: detected capacity change from 0 to 114328 Nov 8 00:05:03.259352 (sd-merge)[1371]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Nov 8 00:05:03.259789 (sd-merge)[1371]: Merged extensions into '/usr'. Nov 8 00:05:03.261782 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Nov 8 00:05:03.265160 systemd[1]: Reloading requested from client PID 1321 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:05:03.265177 systemd[1]: Reloading... Nov 8 00:05:03.326016 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#242 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 8 00:05:03.381212 zram_generator::config[1416]: No configuration found. 
Nov 8 00:05:03.381321 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:05:03.430363 kernel: hv_vmbus: registering driver hv_balloon Nov 8 00:05:03.430484 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Nov 8 00:05:03.430508 kernel: hv_balloon: Memory hot add disabled on ARM64 Nov 8 00:05:03.456134 kernel: hv_vmbus: registering driver hyperv_fb Nov 8 00:05:03.456233 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Nov 8 00:05:03.464473 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Nov 8 00:05:03.469978 kernel: Console: switching to colour dummy device 80x25 Nov 8 00:05:03.479529 kernel: Console: switching to colour frame buffer device 128x48 Nov 8 00:05:03.591065 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:05:03.657405 systemd[1]: Reloading finished in 391 ms. Nov 8 00:05:03.680961 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1352) Nov 8 00:05:03.698727 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:05:03.708820 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:05:03.748337 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Nov 8 00:05:03.760206 systemd[1]: Starting ensure-sysext.service... Nov 8 00:05:03.765258 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:05:03.781949 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:05:03.795089 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 8 00:05:03.804742 systemd[1]: Reloading requested from client PID 1508 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:05:03.804758 systemd[1]: Reloading... Nov 8 00:05:03.877955 zram_generator::config[1547]: No configuration found. Nov 8 00:05:03.980804 systemd-tmpfiles[1510]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:05:03.981115 systemd-tmpfiles[1510]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:05:03.981766 systemd-tmpfiles[1510]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:05:03.982469 systemd-tmpfiles[1510]: ACLs are not supported, ignoring. Nov 8 00:05:03.982585 systemd-tmpfiles[1510]: ACLs are not supported, ignoring. Nov 8 00:05:03.989949 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:05:04.070633 systemd[1]: Reloading finished in 265 ms. Nov 8 00:05:04.098290 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:05:04.098479 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:05:04.115271 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:05:04.126309 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:05:04.139577 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:05:04.146228 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:05:04.155250 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:05:04.159713 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Nov 8 00:05:04.160600 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:05:04.166470 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:05:04.166613 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:05:04.172294 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:05:04.172438 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:05:04.178483 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:05:04.178608 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:05:04.188087 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:05:04.191275 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:05:04.197483 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:05:04.197718 systemd-tmpfiles[1510]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:05:04.197723 systemd-tmpfiles[1510]: Skipping /boot Nov 8 00:05:04.212431 systemd-tmpfiles[1510]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:05:04.214503 systemd-tmpfiles[1510]: Skipping /boot Nov 8 00:05:04.220259 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:05:04.225278 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:05:04.227981 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:05:04.243414 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:05:04.243583 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Nov 8 00:05:04.250459 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:05:04.250606 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:05:04.252237 systemd-networkd[1359]: lo: Link UP Nov 8 00:05:04.252676 systemd-networkd[1359]: lo: Gained carrier Nov 8 00:05:04.257086 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:05:04.259352 systemd-networkd[1359]: Enumeration completed Nov 8 00:05:04.260436 systemd-networkd[1359]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:05:04.260442 systemd-networkd[1359]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:05:04.263472 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:05:04.269332 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:05:04.269488 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:05:04.287320 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:05:04.323965 kernel: mlx5_core 176f:00:02.0 enP5999s1: Link up Nov 8 00:05:04.349964 kernel: hv_netvsc 00224879-e20d-0022-4879-e20d00224879 eth0: Data path switched to VF: enP5999s1 Nov 8 00:05:04.350714 systemd-networkd[1359]: enP5999s1: Link UP Nov 8 00:05:04.350808 systemd-networkd[1359]: eth0: Link UP Nov 8 00:05:04.350811 systemd-networkd[1359]: eth0: Gained carrier Nov 8 00:05:04.350826 systemd-networkd[1359]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 8 00:05:04.354207 systemd-networkd[1359]: enP5999s1: Gained carrier Nov 8 00:05:04.362999 systemd-networkd[1359]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16 Nov 8 00:05:04.504222 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:05:04.509868 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:05:04.511257 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:05:04.518388 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:05:04.525227 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:05:04.531209 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:05:04.538199 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:05:04.545226 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:05:04.551054 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:05:04.558391 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:05:04.565482 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:05:04.570683 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:05:04.577422 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:05:04.585000 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:05:04.585167 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:05:04.599355 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Nov 8 00:05:04.599501 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:05:04.608480 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:05:04.608632 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:05:04.615487 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:05:04.615646 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:05:04.625669 systemd[1]: Finished ensure-sysext.service. Nov 8 00:05:04.641744 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:05:04.641911 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:05:04.644222 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:05:04.679959 lvm[1634]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:05:04.699165 systemd-resolved[1641]: Positive Trust Anchors: Nov 8 00:05:04.699602 systemd-resolved[1641]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:05:04.699678 systemd-resolved[1641]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:05:04.760611 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Nov 8 00:05:04.767498 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:05:04.777139 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:05:04.790229 lvm[1656]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:05:04.814783 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 8 00:05:05.332214 augenrules[1661]: No rules Nov 8 00:05:05.333843 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:05:05.466474 systemd-resolved[1641]: Using system hostname 'ci-4081.3.6-n-5561f33395'. Nov 8 00:05:05.468331 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:05:05.473585 systemd[1]: Reached target network.target - Network. Nov 8 00:05:05.477437 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:05:05.709268 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:05:05.936198 systemd-networkd[1359]: eth0: Gained IPv6LL Nov 8 00:05:05.942778 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:05:05.948868 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:05:07.466672 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:05:10.353489 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:05:10.360223 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:05:18.245670 ldconfig[1316]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:05:18.257493 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Nov 8 00:05:18.267251 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:05:18.308411 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:05:18.313517 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:05:18.318123 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:05:18.323678 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:05:18.329247 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:05:18.333989 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:05:18.340093 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:05:18.345786 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:05:18.345838 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:05:18.349681 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:05:18.388020 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:05:18.394373 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:05:18.447862 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:05:18.453195 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:05:18.458297 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:05:18.462829 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:05:18.467145 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:05:18.467174 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Nov 8 00:05:18.506044 systemd[1]: Starting chronyd.service - NTP client/server... Nov 8 00:05:18.513132 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:05:18.528651 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 8 00:05:18.533832 (chronyd)[1678]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Nov 8 00:05:18.537249 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:05:18.542634 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:05:18.548466 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:05:18.552897 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:05:18.552961 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Nov 8 00:05:18.555273 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Nov 8 00:05:18.560459 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Nov 8 00:05:18.562097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:05:18.562332 KVP[1686]: KVP starting; pid is:1686 Nov 8 00:05:18.574767 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:05:18.581400 kernel: hv_utils: KVP IC version 4.0 Nov 8 00:05:18.581501 jq[1684]: false Nov 8 00:05:18.577402 KVP[1686]: KVP LIC Version: 3.1 Nov 8 00:05:18.583877 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Nov 8 00:05:18.597546 chronyd[1692]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Nov 8 00:05:18.600054 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:05:18.609132 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:05:18.623091 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:05:18.631528 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:05:18.641644 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 8 00:05:18.642199 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:05:18.649187 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:05:18.659183 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:05:18.670419 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:05:18.671038 jq[1702]: true Nov 8 00:05:18.670613 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:05:18.680499 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:05:18.680699 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 8 00:05:18.704107 jq[1708]: true Nov 8 00:05:18.715822 (ntainerd)[1717]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:05:18.722459 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:05:18.723030 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Nov 8 00:05:18.739811 systemd-logind[1698]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:05:18.740676 extend-filesystems[1685]: Found loop4 Nov 8 00:05:18.747531 extend-filesystems[1685]: Found loop5 Nov 8 00:05:18.747531 extend-filesystems[1685]: Found loop6 Nov 8 00:05:18.747531 extend-filesystems[1685]: Found loop7 Nov 8 00:05:18.747531 extend-filesystems[1685]: Found sda Nov 8 00:05:18.747531 extend-filesystems[1685]: Found sda1 Nov 8 00:05:18.747531 extend-filesystems[1685]: Found sda2 Nov 8 00:05:18.747531 extend-filesystems[1685]: Found sda3 Nov 8 00:05:18.747531 extend-filesystems[1685]: Found usr Nov 8 00:05:18.747531 extend-filesystems[1685]: Found sda4 Nov 8 00:05:18.747531 extend-filesystems[1685]: Found sda6 Nov 8 00:05:18.747531 extend-filesystems[1685]: Found sda7 Nov 8 00:05:18.747531 extend-filesystems[1685]: Found sda9 Nov 8 00:05:18.747531 extend-filesystems[1685]: Checking size of /dev/sda9 Nov 8 00:05:18.741254 systemd-logind[1698]: New seat seat0. Nov 8 00:05:18.740983 chronyd[1692]: Timezone right/UTC failed leap second check, ignoring Nov 8 00:05:18.844329 update_engine[1699]: I20251108 00:05:18.809639 1699 main.cc:92] Flatcar Update Engine starting Nov 8 00:05:18.745974 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:05:18.745465 chronyd[1692]: Loaded seccomp filter (level 2) Nov 8 00:05:18.849202 tar[1706]: linux-arm64/LICENSE Nov 8 00:05:18.849202 tar[1706]: linux-arm64/helm Nov 8 00:05:18.755593 systemd[1]: Started chronyd.service - NTP client/server. Nov 8 00:05:18.769570 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:05:18.908195 extend-filesystems[1685]: Old size kept for /dev/sda9 Nov 8 00:05:18.908195 extend-filesystems[1685]: Found sr0 Nov 8 00:05:18.907196 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:05:18.907378 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Nov 8 00:05:18.932973 bash[1753]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:05:18.935048 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:05:18.948526 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 8 00:05:19.040167 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1746) Nov 8 00:05:19.199681 dbus-daemon[1681]: [system] SELinux support is enabled Nov 8 00:05:19.199895 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:05:19.210096 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:05:19.210131 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:05:19.217905 update_engine[1699]: I20251108 00:05:19.217733 1699 update_check_scheduler.cc:74] Next update check in 7m42s Nov 8 00:05:19.219161 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:05:19.219187 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:05:19.227950 dbus-daemon[1681]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 8 00:05:19.228189 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:05:19.245014 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Nov 8 00:05:19.316729 coreos-metadata[1680]: Nov 08 00:05:19.316 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 8 00:05:19.319679 coreos-metadata[1680]: Nov 08 00:05:19.319 INFO Fetch successful Nov 8 00:05:19.319679 coreos-metadata[1680]: Nov 08 00:05:19.319 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Nov 8 00:05:19.324487 coreos-metadata[1680]: Nov 08 00:05:19.324 INFO Fetch successful Nov 8 00:05:19.326584 coreos-metadata[1680]: Nov 08 00:05:19.326 INFO Fetching http://168.63.129.16/machine/ea1f0547-24b0-4985-9107-086360d1c14e/cf24d341%2D41e5%2D4ee1%2Db43f%2Da3203916d845.%5Fci%2D4081.3.6%2Dn%2D5561f33395?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Nov 8 00:05:19.328418 coreos-metadata[1680]: Nov 08 00:05:19.328 INFO Fetch successful Nov 8 00:05:19.328418 coreos-metadata[1680]: Nov 08 00:05:19.328 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Nov 8 00:05:19.340842 coreos-metadata[1680]: Nov 08 00:05:19.340 INFO Fetch successful Nov 8 00:05:19.373831 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 8 00:05:19.382426 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:05:19.591184 tar[1706]: linux-arm64/README.md Nov 8 00:05:19.605562 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:05:19.611335 locksmithd[1789]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:05:19.684319 sshd_keygen[1724]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:05:19.708011 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:05:19.720409 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:05:19.732148 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... 
Nov 8 00:05:19.743705 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:05:19.743909 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:05:19.768239 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:05:19.774625 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Nov 8 00:05:19.781411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:05:19.794335 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:05:19.794608 (kubelet)[1831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:05:19.809723 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:05:19.823311 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 8 00:05:19.830240 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:05:19.984126 containerd[1717]: time="2025-11-08T00:05:19.982355840Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:05:20.023206 containerd[1717]: time="2025-11-08T00:05:20.023153760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:05:20.025874 containerd[1717]: time="2025-11-08T00:05:20.025827520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:05:20.025874 containerd[1717]: time="2025-11-08T00:05:20.025867840Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Nov 8 00:05:20.026003 containerd[1717]: time="2025-11-08T00:05:20.025887480Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:05:20.026650 containerd[1717]: time="2025-11-08T00:05:20.026084200Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:05:20.026650 containerd[1717]: time="2025-11-08T00:05:20.026109320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:05:20.026650 containerd[1717]: time="2025-11-08T00:05:20.026176200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:05:20.026650 containerd[1717]: time="2025-11-08T00:05:20.026190240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:05:20.026650 containerd[1717]: time="2025-11-08T00:05:20.026343840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:05:20.026650 containerd[1717]: time="2025-11-08T00:05:20.026357800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:05:20.026650 containerd[1717]: time="2025-11-08T00:05:20.026370320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:05:20.026650 containerd[1717]: time="2025-11-08T00:05:20.026379760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Nov 8 00:05:20.026650 containerd[1717]: time="2025-11-08T00:05:20.026445920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:05:20.026650 containerd[1717]: time="2025-11-08T00:05:20.026624920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:05:20.026923 containerd[1717]: time="2025-11-08T00:05:20.026715960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:05:20.026923 containerd[1717]: time="2025-11-08T00:05:20.026729120Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:05:20.026923 containerd[1717]: time="2025-11-08T00:05:20.026797960Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:05:20.026923 containerd[1717]: time="2025-11-08T00:05:20.026837320Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:05:20.042702 containerd[1717]: time="2025-11-08T00:05:20.042647760Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:05:20.042833 containerd[1717]: time="2025-11-08T00:05:20.042727040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:05:20.042833 containerd[1717]: time="2025-11-08T00:05:20.042745440Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:05:20.042833 containerd[1717]: time="2025-11-08T00:05:20.042765520Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Nov 8 00:05:20.042833 containerd[1717]: time="2025-11-08T00:05:20.042785680Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:05:20.043617 containerd[1717]: time="2025-11-08T00:05:20.042978640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:05:20.043617 containerd[1717]: time="2025-11-08T00:05:20.043214280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:05:20.043617 containerd[1717]: time="2025-11-08T00:05:20.043311760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:05:20.043617 containerd[1717]: time="2025-11-08T00:05:20.043330640Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:05:20.043617 containerd[1717]: time="2025-11-08T00:05:20.043343640Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:05:20.043617 containerd[1717]: time="2025-11-08T00:05:20.043356920Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:05:20.043617 containerd[1717]: time="2025-11-08T00:05:20.043369440Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:05:20.043617 containerd[1717]: time="2025-11-08T00:05:20.043382960Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:05:20.043617 containerd[1717]: time="2025-11-08T00:05:20.043396840Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Nov 8 00:05:20.043617 containerd[1717]: time="2025-11-08T00:05:20.043411040Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:05:20.043617 containerd[1717]: time="2025-11-08T00:05:20.043423960Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:05:20.043617 containerd[1717]: time="2025-11-08T00:05:20.043435760Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:05:20.043617 containerd[1717]: time="2025-11-08T00:05:20.043449000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:05:20.043617 containerd[1717]: time="2025-11-08T00:05:20.043468880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:05:20.043901 containerd[1717]: time="2025-11-08T00:05:20.043483400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:05:20.043901 containerd[1717]: time="2025-11-08T00:05:20.043495880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:05:20.043901 containerd[1717]: time="2025-11-08T00:05:20.043509200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:05:20.043901 containerd[1717]: time="2025-11-08T00:05:20.043523720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:05:20.043901 containerd[1717]: time="2025-11-08T00:05:20.043537440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:05:20.043901 containerd[1717]: time="2025-11-08T00:05:20.043549600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Nov 8 00:05:20.043901 containerd[1717]: time="2025-11-08T00:05:20.043565120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:05:20.043901 containerd[1717]: time="2025-11-08T00:05:20.043578520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:05:20.043901 containerd[1717]: time="2025-11-08T00:05:20.043593480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:05:20.043901 containerd[1717]: time="2025-11-08T00:05:20.043605200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:05:20.043901 containerd[1717]: time="2025-11-08T00:05:20.043616800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:05:20.043901 containerd[1717]: time="2025-11-08T00:05:20.043629280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:05:20.043901 containerd[1717]: time="2025-11-08T00:05:20.043648600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:05:20.043901 containerd[1717]: time="2025-11-08T00:05:20.043668920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:05:20.043901 containerd[1717]: time="2025-11-08T00:05:20.043681480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:05:20.044187 containerd[1717]: time="2025-11-08T00:05:20.043696160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:05:20.044187 containerd[1717]: time="2025-11-08T00:05:20.043746480Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1
Nov 8 00:05:20.044187 containerd[1717]: time="2025-11-08T00:05:20.043765280Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 8 00:05:20.044187 containerd[1717]: time="2025-11-08T00:05:20.043776040Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 8 00:05:20.044187 containerd[1717]: time="2025-11-08T00:05:20.043788200Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 8 00:05:20.044187 containerd[1717]: time="2025-11-08T00:05:20.043800840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 8 00:05:20.044187 containerd[1717]: time="2025-11-08T00:05:20.043817120Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 8 00:05:20.044187 containerd[1717]: time="2025-11-08T00:05:20.043826760Z" level=info msg="NRI interface is disabled by configuration."
Nov 8 00:05:20.044187 containerd[1717]: time="2025-11-08T00:05:20.043837000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 8 00:05:20.044365 containerd[1717]: time="2025-11-08T00:05:20.044128080Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 8 00:05:20.044365 containerd[1717]: time="2025-11-08T00:05:20.044187200Z" level=info msg="Connect containerd service"
Nov 8 00:05:20.044365 containerd[1717]: time="2025-11-08T00:05:20.044228280Z" level=info msg="using legacy CRI server"
Nov 8 00:05:20.044365 containerd[1717]: time="2025-11-08T00:05:20.044237000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 8 00:05:20.044365 containerd[1717]: time="2025-11-08T00:05:20.044321120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 8 00:05:20.045017 containerd[1717]: time="2025-11-08T00:05:20.044901920Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 8 00:05:20.045950 containerd[1717]: time="2025-11-08T00:05:20.045101680Z" level=info msg="Start subscribing containerd event"
Nov 8 00:05:20.045950 containerd[1717]: time="2025-11-08T00:05:20.045443280Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 8 00:05:20.045950 containerd[1717]: time="2025-11-08T00:05:20.045560520Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 8 00:05:20.048372 containerd[1717]: time="2025-11-08T00:05:20.047404280Z" level=info msg="Start recovering state"
Nov 8 00:05:20.048372 containerd[1717]: time="2025-11-08T00:05:20.047518200Z" level=info msg="Start event monitor"
Nov 8 00:05:20.048372 containerd[1717]: time="2025-11-08T00:05:20.047533920Z" level=info msg="Start snapshots syncer"
Nov 8 00:05:20.048372 containerd[1717]: time="2025-11-08T00:05:20.047549960Z" level=info msg="Start cni network conf syncer for default"
Nov 8 00:05:20.048372 containerd[1717]: time="2025-11-08T00:05:20.047560600Z" level=info msg="Start streaming server"
Nov 8 00:05:20.052952 containerd[1717]: time="2025-11-08T00:05:20.048725920Z" level=info msg="containerd successfully booted in 0.067190s"
Nov 8 00:05:20.049160 systemd[1]: Started containerd.service - containerd container runtime.
Nov 8 00:05:20.055351 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 8 00:05:20.062624 systemd[1]: Startup finished in 628ms (kernel) + 17.857s (initrd) + 33.111s (userspace) = 51.597s.
Nov 8 00:05:20.192455 kubelet[1831]: E1108 00:05:20.192382 1831 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:05:20.195122 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:05:20.195398 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:05:21.422107 login[1837]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Nov 8 00:05:21.422757 login[1836]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:05:21.430834 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 8 00:05:21.435202 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 8 00:05:21.438669 systemd-logind[1698]: New session 1 of user core.
Nov 8 00:05:21.476990 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 8 00:05:21.483216 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 8 00:05:21.519787 (systemd)[1854]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 8 00:05:21.960246 systemd[1854]: Queued start job for default target default.target.
Nov 8 00:05:21.972851 systemd[1854]: Created slice app.slice - User Application Slice.
Nov 8 00:05:21.973055 systemd[1854]: Reached target paths.target - Paths.
Nov 8 00:05:21.973131 systemd[1854]: Reached target timers.target - Timers.
Nov 8 00:05:21.974471 systemd[1854]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 8 00:05:21.985713 systemd[1854]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 8 00:05:21.985844 systemd[1854]: Reached target sockets.target - Sockets.
Nov 8 00:05:21.985858 systemd[1854]: Reached target basic.target - Basic System.
Nov 8 00:05:21.985904 systemd[1854]: Reached target default.target - Main User Target.
Nov 8 00:05:21.985930 systemd[1854]: Startup finished in 459ms.
Nov 8 00:05:21.986137 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 8 00:05:21.992150 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 8 00:05:22.423646 login[1837]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:05:22.427995 systemd-logind[1698]: New session 2 of user core.
Nov 8 00:05:22.435122 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 8 00:05:23.008855 waagent[1830]: 2025-11-08T00:05:23.008699Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Nov 8 00:05:23.013357 waagent[1830]: 2025-11-08T00:05:23.013278Z INFO Daemon Daemon OS: flatcar 4081.3.6
Nov 8 00:05:23.016832 waagent[1830]: 2025-11-08T00:05:23.016779Z INFO Daemon Daemon Python: 3.11.9
Nov 8 00:05:23.020260 waagent[1830]: 2025-11-08T00:05:23.020200Z INFO Daemon Daemon Run daemon
Nov 8 00:05:23.023571 waagent[1830]: 2025-11-08T00:05:23.023518Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6'
Nov 8 00:05:23.030426 waagent[1830]: 2025-11-08T00:05:23.030365Z INFO Daemon Daemon Using waagent for provisioning
Nov 8 00:05:23.034569 waagent[1830]: 2025-11-08T00:05:23.034519Z INFO Daemon Daemon Activate resource disk
Nov 8 00:05:23.038363 waagent[1830]: 2025-11-08T00:05:23.038311Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Nov 8 00:05:23.047488 waagent[1830]: 2025-11-08T00:05:23.047423Z INFO Daemon Daemon Found device: None
Nov 8 00:05:23.051007 waagent[1830]: 2025-11-08T00:05:23.050957Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Nov 8 00:05:23.057993 waagent[1830]: 2025-11-08T00:05:23.057944Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Nov 8 00:05:23.068278 waagent[1830]: 2025-11-08T00:05:23.068216Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Nov 8 00:05:23.073070 waagent[1830]: 2025-11-08T00:05:23.073023Z INFO Daemon Daemon Running default provisioning handler
Nov 8 00:05:23.087514 waagent[1830]: 2025-11-08T00:05:23.087424Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Nov 8 00:05:23.098132 waagent[1830]: 2025-11-08T00:05:23.098067Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Nov 8 00:05:23.105823 waagent[1830]: 2025-11-08T00:05:23.105763Z INFO Daemon Daemon cloud-init is enabled: False
Nov 8 00:05:23.109651 waagent[1830]: 2025-11-08T00:05:23.109603Z INFO Daemon Daemon Copying ovf-env.xml
Nov 8 00:05:23.292750 waagent[1830]: 2025-11-08T00:05:23.292600Z INFO Daemon Daemon Successfully mounted dvd
Nov 8 00:05:23.339562 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Nov 8 00:05:23.341248 waagent[1830]: 2025-11-08T00:05:23.341166Z INFO Daemon Daemon Detect protocol endpoint
Nov 8 00:05:23.345041 waagent[1830]: 2025-11-08T00:05:23.344992Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Nov 8 00:05:23.349523 waagent[1830]: 2025-11-08T00:05:23.349473Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Nov 8 00:05:23.354638 waagent[1830]: 2025-11-08T00:05:23.354590Z INFO Daemon Daemon Test for route to 168.63.129.16
Nov 8 00:05:23.358752 waagent[1830]: 2025-11-08T00:05:23.358706Z INFO Daemon Daemon Route to 168.63.129.16 exists
Nov 8 00:05:23.362602 waagent[1830]: 2025-11-08T00:05:23.362559Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Nov 8 00:05:23.438708 waagent[1830]: 2025-11-08T00:05:23.438638Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Nov 8 00:05:23.444295 waagent[1830]: 2025-11-08T00:05:23.444266Z INFO Daemon Daemon Wire protocol version:2012-11-30
Nov 8 00:05:23.453966 waagent[1830]: 2025-11-08T00:05:23.448684Z INFO Daemon Daemon Server preferred version:2015-04-05
Nov 8 00:05:23.660977 waagent[1830]: 2025-11-08T00:05:23.660300Z INFO Daemon Daemon Initializing goal state during protocol detection
Nov 8 00:05:23.665690 waagent[1830]: 2025-11-08T00:05:23.665626Z INFO Daemon Daemon Forcing an update of the goal state.
Nov 8 00:05:23.674025 waagent[1830]: 2025-11-08T00:05:23.673967Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Nov 8 00:05:23.718617 waagent[1830]: 2025-11-08T00:05:23.718568Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177
Nov 8 00:05:23.723632 waagent[1830]: 2025-11-08T00:05:23.723578Z INFO Daemon
Nov 8 00:05:23.725865 waagent[1830]: 2025-11-08T00:05:23.725813Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 2cdace0b-63d1-45cf-b062-9a442c7f1440 eTag: 788691208721385988 source: Fabric]
Nov 8 00:05:23.736382 waagent[1830]: 2025-11-08T00:05:23.735658Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Nov 8 00:05:23.741472 waagent[1830]: 2025-11-08T00:05:23.741420Z INFO Daemon
Nov 8 00:05:23.743780 waagent[1830]: 2025-11-08T00:05:23.743738Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Nov 8 00:05:23.753694 waagent[1830]: 2025-11-08T00:05:23.753655Z INFO Daemon Daemon Downloading artifacts profile blob
Nov 8 00:05:23.834504 waagent[1830]: 2025-11-08T00:05:23.834411Z INFO Daemon Downloaded certificate {'thumbprint': 'FAAF714F939A47D00D41E47D6F6006F79A4774BE', 'hasPrivateKey': True}
Nov 8 00:05:23.843988 waagent[1830]: 2025-11-08T00:05:23.843920Z INFO Daemon Fetch goal state completed
Nov 8 00:05:23.856135 waagent[1830]: 2025-11-08T00:05:23.855971Z INFO Daemon Daemon Starting provisioning
Nov 8 00:05:23.861126 waagent[1830]: 2025-11-08T00:05:23.861063Z INFO Daemon Daemon Handle ovf-env.xml.
Nov 8 00:05:23.864808 waagent[1830]: 2025-11-08T00:05:23.864762Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-5561f33395]
Nov 8 00:05:24.287518 waagent[1830]: 2025-11-08T00:05:24.287435Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-5561f33395]
Nov 8 00:05:24.292925 waagent[1830]: 2025-11-08T00:05:24.292848Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Nov 8 00:05:24.298436 waagent[1830]: 2025-11-08T00:05:24.298344Z INFO Daemon Daemon Primary interface is [eth0]
Nov 8 00:05:24.408127 systemd-networkd[1359]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:05:24.408134 systemd-networkd[1359]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:05:24.408163 systemd-networkd[1359]: eth0: DHCP lease lost
Nov 8 00:05:24.409507 waagent[1830]: 2025-11-08T00:05:24.409412Z INFO Daemon Daemon Create user account if not exists
Nov 8 00:05:24.414196 systemd-networkd[1359]: eth0: DHCPv6 lease lost
Nov 8 00:05:24.414632 waagent[1830]: 2025-11-08T00:05:24.414303Z INFO Daemon Daemon User core already exists, skip useradd
Nov 8 00:05:24.419155 waagent[1830]: 2025-11-08T00:05:24.419066Z INFO Daemon Daemon Configure sudoer
Nov 8 00:05:24.423311 waagent[1830]: 2025-11-08T00:05:24.423051Z INFO Daemon Daemon Configure sshd
Nov 8 00:05:24.427959 waagent[1830]: 2025-11-08T00:05:24.426771Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Nov 8 00:05:24.437262 waagent[1830]: 2025-11-08T00:05:24.436533Z INFO Daemon Daemon Deploy ssh public key.
Nov 8 00:05:24.442009 systemd-networkd[1359]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16
Nov 8 00:05:25.792153 waagent[1830]: 2025-11-08T00:05:25.790268Z INFO Daemon Daemon Provisioning complete
Nov 8 00:05:25.807182 waagent[1830]: 2025-11-08T00:05:25.807131Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Nov 8 00:05:25.812527 waagent[1830]: 2025-11-08T00:05:25.812456Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Nov 8 00:05:25.820415 waagent[1830]: 2025-11-08T00:05:25.820352Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Nov 8 00:05:25.959808 waagent[1906]: 2025-11-08T00:05:25.959111Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Nov 8 00:05:25.959808 waagent[1906]: 2025-11-08T00:05:25.959275Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6
Nov 8 00:05:25.959808 waagent[1906]: 2025-11-08T00:05:25.959329Z INFO ExtHandler ExtHandler Python: 3.11.9
Nov 8 00:05:26.120974 waagent[1906]: 2025-11-08T00:05:26.120618Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Nov 8 00:05:26.120974 waagent[1906]: 2025-11-08T00:05:26.120892Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Nov 8 00:05:26.121129 waagent[1906]: 2025-11-08T00:05:26.120991Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Nov 8 00:05:26.133013 waagent[1906]: 2025-11-08T00:05:26.132902Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Nov 8 00:05:26.140184 waagent[1906]: 2025-11-08T00:05:26.140130Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177
Nov 8 00:05:26.140753 waagent[1906]: 2025-11-08T00:05:26.140706Z INFO ExtHandler
Nov 8 00:05:26.140827 waagent[1906]: 2025-11-08T00:05:26.140797Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 0d1d2881-6005-4175-abc1-4146102e9070 eTag: 788691208721385988 source: Fabric]
Nov 8 00:05:26.141156 waagent[1906]: 2025-11-08T00:05:26.141113Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Nov 8 00:05:26.141761 waagent[1906]: 2025-11-08T00:05:26.141715Z INFO ExtHandler
Nov 8 00:05:26.141825 waagent[1906]: 2025-11-08T00:05:26.141797Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Nov 8 00:05:26.146446 waagent[1906]: 2025-11-08T00:05:26.146401Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Nov 8 00:05:26.218617 waagent[1906]: 2025-11-08T00:05:26.218512Z INFO ExtHandler Downloaded certificate {'thumbprint': 'FAAF714F939A47D00D41E47D6F6006F79A4774BE', 'hasPrivateKey': True}
Nov 8 00:05:26.219227 waagent[1906]: 2025-11-08T00:05:26.219176Z INFO ExtHandler Fetch goal state completed
Nov 8 00:05:26.235237 waagent[1906]: 2025-11-08T00:05:26.235168Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1906
Nov 8 00:05:26.235406 waagent[1906]: 2025-11-08T00:05:26.235369Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Nov 8 00:05:26.237194 waagent[1906]: 2025-11-08T00:05:26.237137Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk']
Nov 8 00:05:26.237580 waagent[1906]: 2025-11-08T00:05:26.237541Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Nov 8 00:05:26.336955 waagent[1906]: 2025-11-08T00:05:26.336901Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Nov 8 00:05:26.337170 waagent[1906]: 2025-11-08T00:05:26.337129Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Nov 8 00:05:26.343854 waagent[1906]: 2025-11-08T00:05:26.343312Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Nov 8 00:05:26.350774 systemd[1]: Reloading requested from client PID 1919 ('systemctl') (unit waagent.service)...
Nov 8 00:05:26.350853 systemd[1]: Reloading...
Nov 8 00:05:26.457967 zram_generator::config[1964]: No configuration found.
Nov 8 00:05:26.553672 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:05:26.632711 systemd[1]: Reloading finished in 281 ms.
Nov 8 00:05:26.652679 waagent[1906]: 2025-11-08T00:05:26.652293Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Nov 8 00:05:26.658618 systemd[1]: Reloading requested from client PID 2009 ('systemctl') (unit waagent.service)...
Nov 8 00:05:26.658633 systemd[1]: Reloading...
Nov 8 00:05:26.738963 zram_generator::config[2043]: No configuration found.
Nov 8 00:05:26.848413 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:05:26.924095 systemd[1]: Reloading finished in 265 ms.
Nov 8 00:05:26.943886 waagent[1906]: 2025-11-08T00:05:26.943102Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Nov 8 00:05:26.943886 waagent[1906]: 2025-11-08T00:05:26.943266Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Nov 8 00:05:28.215982 waagent[1906]: 2025-11-08T00:05:28.215336Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Nov 8 00:05:28.216297 waagent[1906]: 2025-11-08T00:05:28.215981Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Nov 8 00:05:28.216878 waagent[1906]: 2025-11-08T00:05:28.216814Z INFO ExtHandler ExtHandler Starting env monitor service.
Nov 8 00:05:28.217339 waagent[1906]: 2025-11-08T00:05:28.217226Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Nov 8 00:05:28.218595 waagent[1906]: 2025-11-08T00:05:28.217683Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Nov 8 00:05:28.218595 waagent[1906]: 2025-11-08T00:05:28.217783Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Nov 8 00:05:28.218595 waagent[1906]: 2025-11-08T00:05:28.217925Z INFO EnvHandler ExtHandler Configure routes
Nov 8 00:05:28.218595 waagent[1906]: 2025-11-08T00:05:28.218024Z INFO EnvHandler ExtHandler Gateway:None
Nov 8 00:05:28.218595 waagent[1906]: 2025-11-08T00:05:28.218070Z INFO EnvHandler ExtHandler Routes:None
Nov 8 00:05:28.218903 waagent[1906]: 2025-11-08T00:05:28.218844Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Nov 8 00:05:28.219163 waagent[1906]: 2025-11-08T00:05:28.219124Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Nov 8 00:05:28.219310 waagent[1906]: 2025-11-08T00:05:28.219276Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Nov 8 00:05:28.219629 waagent[1906]: 2025-11-08T00:05:28.219583Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Nov 8 00:05:28.219919 waagent[1906]: 2025-11-08T00:05:28.219872Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Nov 8 00:05:28.219919 waagent[1906]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Nov 8 00:05:28.219919 waagent[1906]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Nov 8 00:05:28.219919 waagent[1906]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Nov 8 00:05:28.219919 waagent[1906]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Nov 8 00:05:28.219919 waagent[1906]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Nov 8 00:05:28.219919 waagent[1906]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Nov 8 00:05:28.220244 waagent[1906]: 2025-11-08T00:05:28.220168Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Nov 8 00:05:28.221242 waagent[1906]: 2025-11-08T00:05:28.221196Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Nov 8 00:05:28.221722 waagent[1906]: 2025-11-08T00:05:28.221133Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Nov 8 00:05:28.221846 waagent[1906]: 2025-11-08T00:05:28.221802Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Nov 8 00:05:28.233003 waagent[1906]: 2025-11-08T00:05:28.232929Z INFO ExtHandler ExtHandler
Nov 8 00:05:28.233123 waagent[1906]: 2025-11-08T00:05:28.233085Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: d05c5db9-1f32-48b2-9a7d-00ca7cc8b15b correlation 9c53fedc-5c40-4301-aec8-92f3692eac65 created: 2025-11-08T00:03:04.329384Z]
Nov 8 00:05:28.233545 waagent[1906]: 2025-11-08T00:05:28.233494Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Nov 8 00:05:28.234156 waagent[1906]: 2025-11-08T00:05:28.234113Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Nov 8 00:05:28.293220 waagent[1906]: 2025-11-08T00:05:28.293147Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 23387094-9CC5-4B45-89A5-A9972E27D644;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Nov 8 00:05:28.424974 waagent[1906]: 2025-11-08T00:05:28.424462Z INFO MonitorHandler ExtHandler Network interfaces:
Nov 8 00:05:28.424974 waagent[1906]: Executing ['ip', '-a', '-o', 'link']:
Nov 8 00:05:28.424974 waagent[1906]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Nov 8 00:05:28.424974 waagent[1906]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:79:e2:0d brd ff:ff:ff:ff:ff:ff
Nov 8 00:05:28.424974 waagent[1906]: 3: enP5999s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:79:e2:0d brd ff:ff:ff:ff:ff:ff\ altname enP5999p0s2
Nov 8 00:05:28.424974 waagent[1906]: Executing ['ip', '-4', '-a', '-o', 'address']:
Nov 8 00:05:28.424974 waagent[1906]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Nov 8 00:05:28.424974 waagent[1906]: 2: eth0 inet 10.200.20.15/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Nov 8 00:05:28.424974 waagent[1906]: Executing ['ip', '-6', '-a', '-o', 'address']:
Nov 8 00:05:28.424974 waagent[1906]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Nov 8 00:05:28.424974 waagent[1906]: 2: eth0 inet6 fe80::222:48ff:fe79:e20d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Nov 8 00:05:28.607647 waagent[1906]: 2025-11-08T00:05:28.607562Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Nov 8 00:05:28.607647 waagent[1906]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 8 00:05:28.607647 waagent[1906]: pkts bytes target prot opt in out source destination
Nov 8 00:05:28.607647 waagent[1906]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Nov 8 00:05:28.607647 waagent[1906]: pkts bytes target prot opt in out source destination
Nov 8 00:05:28.607647 waagent[1906]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 8 00:05:28.607647 waagent[1906]: pkts bytes target prot opt in out source destination
Nov 8 00:05:28.607647 waagent[1906]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Nov 8 00:05:28.607647 waagent[1906]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Nov 8 00:05:28.607647 waagent[1906]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Nov 8 00:05:28.610720 waagent[1906]: 2025-11-08T00:05:28.610650Z INFO EnvHandler ExtHandler Current Firewall rules:
Nov 8 00:05:28.610720 waagent[1906]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 8 00:05:28.610720 waagent[1906]: pkts bytes target prot opt in out source destination
Nov 8 00:05:28.610720 waagent[1906]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Nov 8 00:05:28.610720 waagent[1906]: pkts bytes target prot opt in out source destination
Nov 8 00:05:28.610720 waagent[1906]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 8 00:05:28.610720 waagent[1906]: pkts bytes target prot opt in out source destination
Nov 8 00:05:28.610720 waagent[1906]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Nov 8 00:05:28.610720 waagent[1906]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Nov 8 00:05:28.610720 waagent[1906]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Nov 8 00:05:28.611011 waagent[1906]: 2025-11-08T00:05:28.610972Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Nov 8 00:05:30.325540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 8 00:05:30.333149 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:05:30.549076 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:05:30.553727 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 00:05:30.596182 kubelet[2136]: E1108 00:05:30.596070 2136 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:05:30.598860 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:05:30.599009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:05:40.825778 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 8 00:05:40.834147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:05:40.934354 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:05:40.939125 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 00:05:41.070708 kubelet[2151]: E1108 00:05:41.070634 2151 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:05:41.074148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:05:41.074436 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:05:42.527473 chronyd[1692]: Selected source PHC0
Nov 8 00:05:51.075630 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 8 00:05:51.084159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:05:51.195445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:05:51.204217 (kubelet)[2166]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 00:05:51.221927 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 8 00:05:51.228421 systemd[1]: Started sshd@0-10.200.20.15:22-10.200.16.10:51652.service - OpenSSH per-connection server daemon (10.200.16.10:51652).
Nov 8 00:05:51.284640 kubelet[2166]: E1108 00:05:51.284566 2166 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:05:51.287621 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:05:51.287899 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:05:51.552417 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Nov 8 00:05:52.127835 sshd[2172]: Accepted publickey for core from 10.200.16.10 port 51652 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:05:52.129241 sshd[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:05:52.133851 systemd-logind[1698]: New session 3 of user core.
Nov 8 00:05:52.140139 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 8 00:05:52.528781 systemd[1]: Started sshd@1-10.200.20.15:22-10.200.16.10:51664.service - OpenSSH per-connection server daemon (10.200.16.10:51664).
Nov 8 00:05:52.978772 sshd[2179]: Accepted publickey for core from 10.200.16.10 port 51664 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:05:52.980198 sshd[2179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:05:52.984315 systemd-logind[1698]: New session 4 of user core.
Nov 8 00:05:52.991227 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 8 00:05:53.317597 sshd[2179]: pam_unix(sshd:session): session closed for user core
Nov 8 00:05:53.321263 systemd[1]: sshd@1-10.200.20.15:22-10.200.16.10:51664.service: Deactivated successfully.
Nov 8 00:05:53.322815 systemd[1]: session-4.scope: Deactivated successfully.
Nov 8 00:05:53.324837 systemd-logind[1698]: Session 4 logged out. Waiting for processes to exit.
Nov 8 00:05:53.326435 systemd-logind[1698]: Removed session 4.
Nov 8 00:05:53.403227 systemd[1]: Started sshd@2-10.200.20.15:22-10.200.16.10:51672.service - OpenSSH per-connection server daemon (10.200.16.10:51672).
Nov 8 00:05:53.848597 sshd[2186]: Accepted publickey for core from 10.200.16.10 port 51672 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:05:53.850000 sshd[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:05:53.853882 systemd-logind[1698]: New session 5 of user core.
Nov 8 00:05:53.861169 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 8 00:05:54.194862 sshd[2186]: pam_unix(sshd:session): session closed for user core
Nov 8 00:05:54.198831 systemd[1]: sshd@2-10.200.20.15:22-10.200.16.10:51672.service: Deactivated successfully.
Nov 8 00:05:54.200544 systemd[1]: session-5.scope: Deactivated successfully.
Nov 8 00:05:54.202583 systemd-logind[1698]: Session 5 logged out. Waiting for processes to exit.
Nov 8 00:05:54.203679 systemd-logind[1698]: Removed session 5.
Nov 8 00:05:54.282192 systemd[1]: Started sshd@3-10.200.20.15:22-10.200.16.10:51686.service - OpenSSH per-connection server daemon (10.200.16.10:51686).
Nov 8 00:05:54.733828 sshd[2193]: Accepted publickey for core from 10.200.16.10 port 51686 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:05:54.735255 sshd[2193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:05:54.740373 systemd-logind[1698]: New session 6 of user core.
Nov 8 00:05:54.747152 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 8 00:05:55.074309 sshd[2193]: pam_unix(sshd:session): session closed for user core
Nov 8 00:05:55.077952 systemd[1]: sshd@3-10.200.20.15:22-10.200.16.10:51686.service: Deactivated successfully.
Nov 8 00:05:55.079672 systemd[1]: session-6.scope: Deactivated successfully.
Nov 8 00:05:55.081812 systemd-logind[1698]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:05:55.083208 systemd-logind[1698]: Removed session 6. Nov 8 00:05:55.155959 systemd[1]: Started sshd@4-10.200.20.15:22-10.200.16.10:51692.service - OpenSSH per-connection server daemon (10.200.16.10:51692). Nov 8 00:05:55.609012 sshd[2200]: Accepted publickey for core from 10.200.16.10 port 51692 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:05:55.610356 sshd[2200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:05:55.615405 systemd-logind[1698]: New session 7 of user core. Nov 8 00:05:55.621148 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:05:55.916748 sudo[2203]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:05:55.917052 sudo[2203]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:05:55.934912 sudo[2203]: pam_unix(sudo:session): session closed for user root Nov 8 00:05:56.018896 sshd[2200]: pam_unix(sshd:session): session closed for user core Nov 8 00:05:56.022714 systemd-logind[1698]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:05:56.023496 systemd[1]: sshd@4-10.200.20.15:22-10.200.16.10:51692.service: Deactivated successfully. Nov 8 00:05:56.026225 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:05:56.029028 systemd-logind[1698]: Removed session 7. Nov 8 00:05:56.106179 systemd[1]: Started sshd@5-10.200.20.15:22-10.200.16.10:51698.service - OpenSSH per-connection server daemon (10.200.16.10:51698). Nov 8 00:05:56.558714 sshd[2208]: Accepted publickey for core from 10.200.16.10 port 51698 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:05:56.560435 sshd[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:05:56.565183 systemd-logind[1698]: New session 8 of user core. 
Nov 8 00:05:56.570148 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:05:56.816539 sudo[2212]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:05:56.816841 sudo[2212]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:05:56.830169 sudo[2212]: pam_unix(sudo:session): session closed for user root Nov 8 00:05:56.835430 sudo[2211]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:05:56.835715 sudo[2211]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:05:56.852377 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:05:56.853579 auditctl[2215]: No rules Nov 8 00:05:56.854073 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:05:56.854284 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:05:56.856891 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:05:56.891567 augenrules[2233]: No rules Nov 8 00:05:56.893069 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:05:56.896664 sudo[2211]: pam_unix(sudo:session): session closed for user root Nov 8 00:05:56.980514 sshd[2208]: pam_unix(sshd:session): session closed for user core Nov 8 00:05:56.983491 systemd[1]: sshd@5-10.200.20.15:22-10.200.16.10:51698.service: Deactivated successfully. Nov 8 00:05:56.985417 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:05:56.987070 systemd-logind[1698]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:05:56.988541 systemd-logind[1698]: Removed session 8. Nov 8 00:05:57.080013 systemd[1]: Started sshd@6-10.200.20.15:22-10.200.16.10:51708.service - OpenSSH per-connection server daemon (10.200.16.10:51708). 
Nov 8 00:05:57.529263 sshd[2241]: Accepted publickey for core from 10.200.16.10 port 51708 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:05:57.530596 sshd[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:05:57.534971 systemd-logind[1698]: New session 9 of user core. Nov 8 00:05:57.543143 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:05:57.786909 sudo[2244]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:05:57.787251 sudo[2244]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:05:58.159234 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:05:58.159368 (dockerd)[2259]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:05:58.427565 dockerd[2259]: time="2025-11-08T00:05:58.427433839Z" level=info msg="Starting up" Nov 8 00:05:58.549438 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1867394593-merged.mount: Deactivated successfully. Nov 8 00:05:58.619035 dockerd[2259]: time="2025-11-08T00:05:58.618785351Z" level=info msg="Loading containers: start." Nov 8 00:05:58.736971 kernel: Initializing XFRM netlink socket Nov 8 00:05:58.810977 systemd-networkd[1359]: docker0: Link UP Nov 8 00:05:58.841311 dockerd[2259]: time="2025-11-08T00:05:58.841268429Z" level=info msg="Loading containers: done." 
Nov 8 00:05:58.870344 dockerd[2259]: time="2025-11-08T00:05:58.870291094Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:05:58.870506 dockerd[2259]: time="2025-11-08T00:05:58.870409374Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:05:58.870554 dockerd[2259]: time="2025-11-08T00:05:58.870529533Z" level=info msg="Daemon has completed initialization" Nov 8 00:05:58.934718 dockerd[2259]: time="2025-11-08T00:05:58.934645325Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:05:58.935190 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:05:59.546883 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2707726437-merged.mount: Deactivated successfully. Nov 8 00:05:59.568797 containerd[1717]: time="2025-11-08T00:05:59.568759225Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 8 00:06:00.666880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3564515949.mount: Deactivated successfully. Nov 8 00:06:01.325589 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 8 00:06:01.331154 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:06:01.734483 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 8 00:06:01.739176 (kubelet)[2448]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:06:01.779168 kubelet[2448]: E1108 00:06:01.779096 2448 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:06:01.782027 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:06:01.782321 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:06:02.488419 containerd[1717]: time="2025-11-08T00:06:02.488362010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:02.491258 containerd[1717]: time="2025-11-08T00:06:02.491218165Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=24574510" Nov 8 00:06:02.495231 containerd[1717]: time="2025-11-08T00:06:02.495155917Z" level=info msg="ImageCreate event name:\"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:02.501169 containerd[1717]: time="2025-11-08T00:06:02.501093585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:02.502141 containerd[1717]: time="2025-11-08T00:06:02.501949584Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest 
\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"24571109\" in 2.93297304s" Nov 8 00:06:02.502141 containerd[1717]: time="2025-11-08T00:06:02.501991424Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\"" Nov 8 00:06:02.502958 containerd[1717]: time="2025-11-08T00:06:02.502762982Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 8 00:06:03.673980 containerd[1717]: time="2025-11-08T00:06:03.673227165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:03.676891 containerd[1717]: time="2025-11-08T00:06:03.676649559Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=19132143" Nov 8 00:06:03.680304 containerd[1717]: time="2025-11-08T00:06:03.680250552Z" level=info msg="ImageCreate event name:\"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:03.686508 containerd[1717]: time="2025-11-08T00:06:03.686443219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:03.687965 containerd[1717]: time="2025-11-08T00:06:03.687476897Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"20720058\" in 1.184683395s" 
Nov 8 00:06:03.687965 containerd[1717]: time="2025-11-08T00:06:03.687516257Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\"" Nov 8 00:06:03.688480 containerd[1717]: time="2025-11-08T00:06:03.688247336Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 8 00:06:04.466885 update_engine[1699]: I20251108 00:06:04.466782 1699 update_attempter.cc:509] Updating boot flags... Nov 8 00:06:04.606370 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2487) Nov 8 00:06:04.766952 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2487) Nov 8 00:06:05.311972 containerd[1717]: time="2025-11-08T00:06:05.311773030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:05.317824 containerd[1717]: time="2025-11-08T00:06:05.317759298Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=14191884" Nov 8 00:06:05.321778 containerd[1717]: time="2025-11-08T00:06:05.321699730Z" level=info msg="ImageCreate event name:\"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:05.331301 containerd[1717]: time="2025-11-08T00:06:05.331245352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:05.332481 containerd[1717]: time="2025-11-08T00:06:05.332322790Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\", repo tag 
\"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"15779817\" in 1.644040454s" Nov 8 00:06:05.332481 containerd[1717]: time="2025-11-08T00:06:05.332368110Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\"" Nov 8 00:06:05.333216 containerd[1717]: time="2025-11-08T00:06:05.333168548Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 8 00:06:06.508009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1457771563.mount: Deactivated successfully. Nov 8 00:06:06.787041 containerd[1717]: time="2025-11-08T00:06:06.786552234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:06.789662 containerd[1717]: time="2025-11-08T00:06:06.789616669Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=22789028" Nov 8 00:06:06.793429 containerd[1717]: time="2025-11-08T00:06:06.793378583Z" level=info msg="ImageCreate event name:\"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:06.799105 containerd[1717]: time="2025-11-08T00:06:06.799021855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:06.799851 containerd[1717]: time="2025-11-08T00:06:06.799717414Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"22788047\" in 1.466456346s" Nov 8 00:06:06.799851 containerd[1717]: time="2025-11-08T00:06:06.799755174Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\"" Nov 8 00:06:06.800604 containerd[1717]: time="2025-11-08T00:06:06.800578532Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 8 00:06:07.463702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1884704479.mount: Deactivated successfully. Nov 8 00:06:08.675378 containerd[1717]: time="2025-11-08T00:06:08.675306104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:08.678732 containerd[1717]: time="2025-11-08T00:06:08.678691579Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406" Nov 8 00:06:08.682777 containerd[1717]: time="2025-11-08T00:06:08.682741093Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:08.691143 containerd[1717]: time="2025-11-08T00:06:08.691078520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:08.692477 containerd[1717]: time="2025-11-08T00:06:08.692326838Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.891712586s" Nov 8 00:06:08.692477 containerd[1717]: time="2025-11-08T00:06:08.692366798Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Nov 8 00:06:08.693543 containerd[1717]: time="2025-11-08T00:06:08.693456317Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 8 00:06:09.295477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1302869019.mount: Deactivated successfully. Nov 8 00:06:09.329036 containerd[1717]: time="2025-11-08T00:06:09.328985624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:09.332209 containerd[1717]: time="2025-11-08T00:06:09.332172460Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Nov 8 00:06:09.335916 containerd[1717]: time="2025-11-08T00:06:09.335856054Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:09.342053 containerd[1717]: time="2025-11-08T00:06:09.341988685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:09.342866 containerd[1717]: time="2025-11-08T00:06:09.342711283Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 649.220806ms" Nov 8 
00:06:09.342866 containerd[1717]: time="2025-11-08T00:06:09.342749723Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Nov 8 00:06:09.343245 containerd[1717]: time="2025-11-08T00:06:09.343160963Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 8 00:06:11.825505 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 8 00:06:11.835179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:06:11.959392 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:06:11.967493 (kubelet)[2654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:06:12.018967 kubelet[2654]: E1108 00:06:12.017868 2654 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:06:12.020772 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:06:12.020921 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 8 00:06:12.574981 containerd[1717]: time="2025-11-08T00:06:12.574529596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:12.577890 containerd[1717]: time="2025-11-08T00:06:12.577846950Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=97410766" Nov 8 00:06:12.581942 containerd[1717]: time="2025-11-08T00:06:12.581901382Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:12.587254 containerd[1717]: time="2025-11-08T00:06:12.587172692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:12.588929 containerd[1717]: time="2025-11-08T00:06:12.588438330Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.245245367s" Nov 8 00:06:12.588929 containerd[1717]: time="2025-11-08T00:06:12.588485210Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Nov 8 00:06:19.301867 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:06:19.311982 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:06:19.354219 systemd[1]: Reloading requested from client PID 2690 ('systemctl') (unit session-9.scope)... Nov 8 00:06:19.354240 systemd[1]: Reloading... 
Nov 8 00:06:19.469961 zram_generator::config[2731]: No configuration found. Nov 8 00:06:19.577293 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:06:19.656956 systemd[1]: Reloading finished in 302 ms. Nov 8 00:06:19.702175 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:06:19.702266 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:06:19.702543 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:06:19.704326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:06:19.902778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:06:19.913264 (kubelet)[2797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:06:20.014108 kubelet[2797]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:06:20.014108 kubelet[2797]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 8 00:06:20.015126 kubelet[2797]: I1108 00:06:20.014616 2797 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:06:20.942990 kubelet[2797]: I1108 00:06:20.942920 2797 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 8 00:06:20.943285 kubelet[2797]: I1108 00:06:20.943159 2797 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:06:20.945049 kubelet[2797]: I1108 00:06:20.944466 2797 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 8 00:06:20.945049 kubelet[2797]: I1108 00:06:20.944493 2797 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:06:20.945049 kubelet[2797]: I1108 00:06:20.944749 2797 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:06:20.954193 kubelet[2797]: E1108 00:06:20.954156 2797 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:06:20.956307 kubelet[2797]: I1108 00:06:20.956139 2797 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:06:20.963145 kubelet[2797]: E1108 00:06:20.963082 2797 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:06:20.963967 kubelet[2797]: I1108 00:06:20.963359 2797 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Nov 8 00:06:20.966324 kubelet[2797]: I1108 00:06:20.966294 2797 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 8 00:06:20.966650 kubelet[2797]: I1108 00:06:20.966619 2797 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:06:20.966882 kubelet[2797]: I1108 00:06:20.966713 2797 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-5561f33395","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} 
Nov 8 00:06:20.967047 kubelet[2797]: I1108 00:06:20.967033 2797 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:06:20.967102 kubelet[2797]: I1108 00:06:20.967094 2797 container_manager_linux.go:306] "Creating device plugin manager" Nov 8 00:06:20.967261 kubelet[2797]: I1108 00:06:20.967247 2797 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 8 00:06:20.973537 kubelet[2797]: I1108 00:06:20.973503 2797 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:06:20.974945 kubelet[2797]: I1108 00:06:20.974912 2797 kubelet.go:475] "Attempting to sync node with API server" Nov 8 00:06:20.975063 kubelet[2797]: I1108 00:06:20.975053 2797 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:06:20.975139 kubelet[2797]: I1108 00:06:20.975130 2797 kubelet.go:387] "Adding apiserver pod source" Nov 8 00:06:20.975202 kubelet[2797]: I1108 00:06:20.975193 2797 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:06:20.976301 kubelet[2797]: E1108 00:06:20.976127 2797 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:06:20.976301 kubelet[2797]: E1108 00:06:20.976218 2797 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-5561f33395&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:06:20.976962 kubelet[2797]: I1108 00:06:20.976726 2797 kuberuntime_manager.go:291] "Container runtime initialized" 
containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:06:20.977427 kubelet[2797]: I1108 00:06:20.977411 2797 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:06:20.977502 kubelet[2797]: I1108 00:06:20.977493 2797 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 8 00:06:20.977587 kubelet[2797]: W1108 00:06:20.977576 2797 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:06:20.981694 kubelet[2797]: I1108 00:06:20.981405 2797 server.go:1262] "Started kubelet" Nov 8 00:06:20.982308 kubelet[2797]: I1108 00:06:20.982277 2797 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:06:20.983174 kubelet[2797]: I1108 00:06:20.983146 2797 server.go:310] "Adding debug handlers to kubelet server" Nov 8 00:06:20.984334 kubelet[2797]: I1108 00:06:20.984264 2797 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:06:20.984463 kubelet[2797]: I1108 00:06:20.984450 2797 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 8 00:06:20.985196 kubelet[2797]: I1108 00:06:20.984869 2797 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:06:20.986100 kubelet[2797]: E1108 00:06:20.985042 2797 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.15:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.15:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-5561f33395.1875df53fb925be2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-5561f33395,UID:ci-4081.3.6-n-5561f33395,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-5561f33395,},FirstTimestamp:2025-11-08 00:06:20.981369826 +0000 UTC m=+1.065079139,LastTimestamp:2025-11-08 00:06:20.981369826 +0000 UTC m=+1.065079139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-5561f33395,}" Nov 8 00:06:20.987533 kubelet[2797]: I1108 00:06:20.987507 2797 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:06:20.989240 kubelet[2797]: I1108 00:06:20.988446 2797 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:06:20.991590 kubelet[2797]: E1108 00:06:20.991557 2797 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-5561f33395\" not found" Nov 8 00:06:20.991739 kubelet[2797]: I1108 00:06:20.991726 2797 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 8 00:06:20.992079 kubelet[2797]: I1108 00:06:20.992061 2797 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 8 00:06:20.992217 kubelet[2797]: I1108 00:06:20.992206 2797 reconciler.go:29] "Reconciler: start to sync state" Nov 8 00:06:20.992770 kubelet[2797]: E1108 00:06:20.992729 2797 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:06:20.993176 kubelet[2797]: I1108 00:06:20.993155 2797 factory.go:223] Registration of the systemd container factory 
successfully Nov 8 00:06:20.993421 kubelet[2797]: I1108 00:06:20.993402 2797 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:06:20.993842 kubelet[2797]: E1108 00:06:20.993821 2797 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:06:20.995194 kubelet[2797]: I1108 00:06:20.995173 2797 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:06:21.015336 kubelet[2797]: E1108 00:06:21.015249 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-5561f33395?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="200ms" Nov 8 00:06:21.045804 kubelet[2797]: I1108 00:06:21.045735 2797 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:06:21.045804 kubelet[2797]: I1108 00:06:21.045755 2797 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:06:21.045804 kubelet[2797]: I1108 00:06:21.045815 2797 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:06:21.052116 kubelet[2797]: I1108 00:06:21.052072 2797 policy_none.go:49] "None policy: Start" Nov 8 00:06:21.052116 kubelet[2797]: I1108 00:06:21.052112 2797 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 8 00:06:21.052116 kubelet[2797]: I1108 00:06:21.052127 2797 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 8 00:06:21.058357 kubelet[2797]: I1108 00:06:21.058320 2797 policy_none.go:47] "Start" Nov 8 00:06:21.063930 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Nov 8 00:06:21.077247 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 8 00:06:21.083187 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 8 00:06:21.092927 kubelet[2797]: E1108 00:06:21.092862 2797 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-5561f33395\" not found" Nov 8 00:06:21.093186 kubelet[2797]: E1108 00:06:21.093062 2797 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:06:21.093557 kubelet[2797]: I1108 00:06:21.093538 2797 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:06:21.093852 kubelet[2797]: I1108 00:06:21.093762 2797 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:06:21.094858 kubelet[2797]: I1108 00:06:21.094764 2797 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:06:21.095657 kubelet[2797]: E1108 00:06:21.095560 2797 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:06:21.095657 kubelet[2797]: E1108 00:06:21.095611 2797 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-5561f33395\" not found" Nov 8 00:06:21.197501 kubelet[2797]: I1108 00:06:21.197389 2797 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-5561f33395" Nov 8 00:06:21.197860 kubelet[2797]: E1108 00:06:21.197831 2797 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-4081.3.6-n-5561f33395" Nov 8 00:06:21.216467 kubelet[2797]: E1108 00:06:21.216415 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-5561f33395?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="400ms" Nov 8 00:06:21.404313 kubelet[2797]: I1108 00:06:21.403846 2797 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-5561f33395" Nov 8 00:06:21.404313 kubelet[2797]: E1108 00:06:21.404196 2797 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-4081.3.6-n-5561f33395" Nov 8 00:06:21.405208 kubelet[2797]: I1108 00:06:21.405160 2797 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 8 00:06:21.406417 kubelet[2797]: I1108 00:06:21.406357 2797 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 8 00:06:21.406417 kubelet[2797]: I1108 00:06:21.406383 2797 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 8 00:06:21.406417 kubelet[2797]: I1108 00:06:21.406420 2797 kubelet.go:2427] "Starting kubelet main sync loop" Nov 8 00:06:21.406544 kubelet[2797]: E1108 00:06:21.406465 2797 kubelet.go:2451] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Nov 8 00:06:21.410009 kubelet[2797]: E1108 00:06:21.409524 2797 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:06:21.595756 kubelet[2797]: I1108 00:06:21.595711 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e85a485aa0892d8fb9f782eca6c56311-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-5561f33395\" (UID: \"e85a485aa0892d8fb9f782eca6c56311\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-5561f33395" Nov 8 00:06:21.595756 kubelet[2797]: I1108 00:06:21.595752 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e85a485aa0892d8fb9f782eca6c56311-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-5561f33395\" (UID: \"e85a485aa0892d8fb9f782eca6c56311\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-5561f33395" Nov 8 00:06:21.595930 kubelet[2797]: I1108 00:06:21.595770 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e85a485aa0892d8fb9f782eca6c56311-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-5561f33395\" (UID: 
\"e85a485aa0892d8fb9f782eca6c56311\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-5561f33395" Nov 8 00:06:21.617296 kubelet[2797]: E1108 00:06:21.617249 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-5561f33395?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="800ms" Nov 8 00:06:21.806720 kubelet[2797]: I1108 00:06:21.806655 2797 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-5561f33395" Nov 8 00:06:21.807115 kubelet[2797]: E1108 00:06:21.807082 2797 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-4081.3.6-n-5561f33395" Nov 8 00:06:21.990781 systemd[1]: Created slice kubepods-burstable-pode85a485aa0892d8fb9f782eca6c56311.slice - libcontainer container kubepods-burstable-pode85a485aa0892d8fb9f782eca6c56311.slice. 
Nov 8 00:06:21.996740 kubelet[2797]: E1108 00:06:21.996710 2797 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-5561f33395\" not found" node="ci-4081.3.6-n-5561f33395" Nov 8 00:06:21.999875 kubelet[2797]: I1108 00:06:21.999364 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cd9652fe24435698a154448ce12019a6-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-5561f33395\" (UID: \"cd9652fe24435698a154448ce12019a6\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-5561f33395" Nov 8 00:06:21.999875 kubelet[2797]: I1108 00:06:21.999411 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c4ce5643d9c008d42d0e7e5790365597-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-5561f33395\" (UID: \"c4ce5643d9c008d42d0e7e5790365597\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-5561f33395" Nov 8 00:06:21.999875 kubelet[2797]: I1108 00:06:21.999433 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c4ce5643d9c008d42d0e7e5790365597-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-5561f33395\" (UID: \"c4ce5643d9c008d42d0e7e5790365597\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-5561f33395" Nov 8 00:06:21.999875 kubelet[2797]: I1108 00:06:21.999449 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c4ce5643d9c008d42d0e7e5790365597-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-5561f33395\" (UID: \"c4ce5643d9c008d42d0e7e5790365597\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-5561f33395" Nov 8 00:06:21.999875 kubelet[2797]: I1108 
00:06:21.999466 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4ce5643d9c008d42d0e7e5790365597-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-5561f33395\" (UID: \"c4ce5643d9c008d42d0e7e5790365597\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-5561f33395" Nov 8 00:06:22.001881 kubelet[2797]: I1108 00:06:21.999483 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c4ce5643d9c008d42d0e7e5790365597-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-5561f33395\" (UID: \"c4ce5643d9c008d42d0e7e5790365597\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-5561f33395" Nov 8 00:06:22.005318 containerd[1717]: time="2025-11-08T00:06:22.005272481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-5561f33395,Uid:e85a485aa0892d8fb9f782eca6c56311,Namespace:kube-system,Attempt:0,}" Nov 8 00:06:22.006885 systemd[1]: Created slice kubepods-burstable-podc4ce5643d9c008d42d0e7e5790365597.slice - libcontainer container kubepods-burstable-podc4ce5643d9c008d42d0e7e5790365597.slice. Nov 8 00:06:22.013549 kubelet[2797]: E1108 00:06:22.013510 2797 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-5561f33395\" not found" node="ci-4081.3.6-n-5561f33395" Nov 8 00:06:22.016720 systemd[1]: Created slice kubepods-burstable-podcd9652fe24435698a154448ce12019a6.slice - libcontainer container kubepods-burstable-podcd9652fe24435698a154448ce12019a6.slice. 
Nov 8 00:06:22.019410 kubelet[2797]: E1108 00:06:22.019207 2797 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-5561f33395\" not found" node="ci-4081.3.6-n-5561f33395" Nov 8 00:06:22.269581 kubelet[2797]: E1108 00:06:22.269432 2797 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-5561f33395&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:06:22.320570 containerd[1717]: time="2025-11-08T00:06:22.320523215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-5561f33395,Uid:c4ce5643d9c008d42d0e7e5790365597,Namespace:kube-system,Attempt:0,}" Nov 8 00:06:22.327801 containerd[1717]: time="2025-11-08T00:06:22.327756081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-5561f33395,Uid:cd9652fe24435698a154448ce12019a6,Namespace:kube-system,Attempt:0,}" Nov 8 00:06:22.346694 kubelet[2797]: E1108 00:06:22.346654 2797 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:06:22.418514 kubelet[2797]: E1108 00:06:22.418464 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-5561f33395?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="1.6s" Nov 8 00:06:22.426665 kubelet[2797]: E1108 00:06:22.426062 2797 reflector.go:205] "Failed to 
watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:06:22.608907 kubelet[2797]: I1108 00:06:22.608871 2797 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-5561f33395" Nov 8 00:06:22.609318 kubelet[2797]: E1108 00:06:22.609285 2797 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-4081.3.6-n-5561f33395" Nov 8 00:06:22.661980 kubelet[2797]: E1108 00:06:22.661826 2797 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:06:22.716061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4137685966.mount: Deactivated successfully. 
Nov 8 00:06:22.743989 containerd[1717]: time="2025-11-08T00:06:22.743535612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:06:22.746952 containerd[1717]: time="2025-11-08T00:06:22.746877927Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Nov 8 00:06:22.750719 containerd[1717]: time="2025-11-08T00:06:22.750654400Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:06:22.755445 containerd[1717]: time="2025-11-08T00:06:22.754252314Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:06:22.757622 containerd[1717]: time="2025-11-08T00:06:22.757568708Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:06:22.762772 containerd[1717]: time="2025-11-08T00:06:22.761772141Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:06:22.764476 containerd[1717]: time="2025-11-08T00:06:22.764409457Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:06:22.769399 containerd[1717]: time="2025-11-08T00:06:22.769335568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:06:22.770496 
containerd[1717]: time="2025-11-08T00:06:22.770219447Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 449.613192ms" Nov 8 00:06:22.773148 containerd[1717]: time="2025-11-08T00:06:22.773102602Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 445.262001ms" Nov 8 00:06:22.773264 containerd[1717]: time="2025-11-08T00:06:22.773190482Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 767.837641ms" Nov 8 00:06:23.135927 kubelet[2797]: E1108 00:06:23.135874 2797 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:06:23.917605 kubelet[2797]: E1108 00:06:23.917552 2797 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-5561f33395&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:06:24.019753 kubelet[2797]: E1108 00:06:24.019686 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-5561f33395?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="3.2s" Nov 8 00:06:24.111689 containerd[1717]: time="2025-11-08T00:06:24.111459956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:24.111689 containerd[1717]: time="2025-11-08T00:06:24.111518156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:24.111689 containerd[1717]: time="2025-11-08T00:06:24.111534236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:24.114144 containerd[1717]: time="2025-11-08T00:06:24.113134073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:24.115092 containerd[1717]: time="2025-11-08T00:06:24.114917990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:24.115176 containerd[1717]: time="2025-11-08T00:06:24.115136190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:24.115202 containerd[1717]: time="2025-11-08T00:06:24.115175030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:24.115556 containerd[1717]: time="2025-11-08T00:06:24.115435629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:24.117323 containerd[1717]: time="2025-11-08T00:06:24.117198786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:24.117544 containerd[1717]: time="2025-11-08T00:06:24.117427826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:24.118845 containerd[1717]: time="2025-11-08T00:06:24.118618104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:24.118845 containerd[1717]: time="2025-11-08T00:06:24.118756664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:24.198479 systemd[1]: Started cri-containerd-080affb0782740bf1fa278653d35f0a3f94e872aadcda7537c795532773bd4d3.scope - libcontainer container 080affb0782740bf1fa278653d35f0a3f94e872aadcda7537c795532773bd4d3. Nov 8 00:06:24.201281 systemd[1]: Started cri-containerd-4fdef219f0160188fa23ea6e592ff541ad682ff639435d051faa2f1878ee684e.scope - libcontainer container 4fdef219f0160188fa23ea6e592ff541ad682ff639435d051faa2f1878ee684e. Nov 8 00:06:24.204734 systemd[1]: Started cri-containerd-62267ef97e43311cb90e433fbb269a592c566b2b24ca0823809f9ac24ff9f0c9.scope - libcontainer container 62267ef97e43311cb90e433fbb269a592c566b2b24ca0823809f9ac24ff9f0c9. 
Nov 8 00:06:24.211397 kubelet[2797]: E1108 00:06:24.210038 2797 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.15:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.15:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-5561f33395.1875df53fb925be2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-5561f33395,UID:ci-4081.3.6-n-5561f33395,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-5561f33395,},FirstTimestamp:2025-11-08 00:06:20.981369826 +0000 UTC m=+1.065079139,LastTimestamp:2025-11-08 00:06:20.981369826 +0000 UTC m=+1.065079139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-5561f33395,}" Nov 8 00:06:24.212564 kubelet[2797]: I1108 00:06:24.212534 2797 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-5561f33395" Nov 8 00:06:24.212909 kubelet[2797]: E1108 00:06:24.212883 2797 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-4081.3.6-n-5561f33395" Nov 8 00:06:24.257696 containerd[1717]: time="2025-11-08T00:06:24.257552667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-5561f33395,Uid:c4ce5643d9c008d42d0e7e5790365597,Namespace:kube-system,Attempt:0,} returns sandbox id \"080affb0782740bf1fa278653d35f0a3f94e872aadcda7537c795532773bd4d3\"" Nov 8 00:06:24.261816 containerd[1717]: time="2025-11-08T00:06:24.261716060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-5561f33395,Uid:e85a485aa0892d8fb9f782eca6c56311,Namespace:kube-system,Attempt:0,} returns sandbox 
id \"62267ef97e43311cb90e433fbb269a592c566b2b24ca0823809f9ac24ff9f0c9\"" Nov 8 00:06:24.271212 containerd[1717]: time="2025-11-08T00:06:24.271007284Z" level=info msg="CreateContainer within sandbox \"080affb0782740bf1fa278653d35f0a3f94e872aadcda7537c795532773bd4d3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:06:24.278579 containerd[1717]: time="2025-11-08T00:06:24.278464871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-5561f33395,Uid:cd9652fe24435698a154448ce12019a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fdef219f0160188fa23ea6e592ff541ad682ff639435d051faa2f1878ee684e\"" Nov 8 00:06:24.289140 containerd[1717]: time="2025-11-08T00:06:24.288726453Z" level=info msg="CreateContainer within sandbox \"62267ef97e43311cb90e433fbb269a592c566b2b24ca0823809f9ac24ff9f0c9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:06:24.295587 containerd[1717]: time="2025-11-08T00:06:24.295544802Z" level=info msg="CreateContainer within sandbox \"4fdef219f0160188fa23ea6e592ff541ad682ff639435d051faa2f1878ee684e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:06:24.353159 containerd[1717]: time="2025-11-08T00:06:24.353100703Z" level=info msg="CreateContainer within sandbox \"080affb0782740bf1fa278653d35f0a3f94e872aadcda7537c795532773bd4d3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"627aa83c1c508eb2cdf51292dd132f8e692d879a0372ecbad508820711a700c2\"" Nov 8 00:06:24.354990 containerd[1717]: time="2025-11-08T00:06:24.353802542Z" level=info msg="StartContainer for \"627aa83c1c508eb2cdf51292dd132f8e692d879a0372ecbad508820711a700c2\"" Nov 8 00:06:24.373704 containerd[1717]: time="2025-11-08T00:06:24.373614428Z" level=info msg="CreateContainer within sandbox \"4fdef219f0160188fa23ea6e592ff541ad682ff639435d051faa2f1878ee684e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container 
id \"2c605d99ead35b0cd2dddabed806003d58ee00017455ec818a1391057f5bdbfc\"" Nov 8 00:06:24.375028 containerd[1717]: time="2025-11-08T00:06:24.374994386Z" level=info msg="StartContainer for \"2c605d99ead35b0cd2dddabed806003d58ee00017455ec818a1391057f5bdbfc\"" Nov 8 00:06:24.375776 containerd[1717]: time="2025-11-08T00:06:24.375733225Z" level=info msg="CreateContainer within sandbox \"62267ef97e43311cb90e433fbb269a592c566b2b24ca0823809f9ac24ff9f0c9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1ff66c8740c0a4d67772570948107be83f0993b141f89521ccb8dc8da3bad5f6\"" Nov 8 00:06:24.377359 containerd[1717]: time="2025-11-08T00:06:24.377314422Z" level=info msg="StartContainer for \"1ff66c8740c0a4d67772570948107be83f0993b141f89521ccb8dc8da3bad5f6\"" Nov 8 00:06:24.381157 systemd[1]: Started cri-containerd-627aa83c1c508eb2cdf51292dd132f8e692d879a0372ecbad508820711a700c2.scope - libcontainer container 627aa83c1c508eb2cdf51292dd132f8e692d879a0372ecbad508820711a700c2. Nov 8 00:06:24.414184 systemd[1]: Started cri-containerd-2c605d99ead35b0cd2dddabed806003d58ee00017455ec818a1391057f5bdbfc.scope - libcontainer container 2c605d99ead35b0cd2dddabed806003d58ee00017455ec818a1391057f5bdbfc. Nov 8 00:06:24.434139 systemd[1]: Started cri-containerd-1ff66c8740c0a4d67772570948107be83f0993b141f89521ccb8dc8da3bad5f6.scope - libcontainer container 1ff66c8740c0a4d67772570948107be83f0993b141f89521ccb8dc8da3bad5f6. 
Nov 8 00:06:24.446964 containerd[1717]: time="2025-11-08T00:06:24.446116945Z" level=info msg="StartContainer for \"627aa83c1c508eb2cdf51292dd132f8e692d879a0372ecbad508820711a700c2\" returns successfully" Nov 8 00:06:24.477979 containerd[1717]: time="2025-11-08T00:06:24.477846610Z" level=info msg="StartContainer for \"2c605d99ead35b0cd2dddabed806003d58ee00017455ec818a1391057f5bdbfc\" returns successfully" Nov 8 00:06:24.512802 containerd[1717]: time="2025-11-08T00:06:24.512745991Z" level=info msg="StartContainer for \"1ff66c8740c0a4d67772570948107be83f0993b141f89521ccb8dc8da3bad5f6\" returns successfully" Nov 8 00:06:25.438396 kubelet[2797]: E1108 00:06:25.438357 2797 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-5561f33395\" not found" node="ci-4081.3.6-n-5561f33395" Nov 8 00:06:25.438807 kubelet[2797]: E1108 00:06:25.438689 2797 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-5561f33395\" not found" node="ci-4081.3.6-n-5561f33395" Nov 8 00:06:25.441231 kubelet[2797]: E1108 00:06:25.441207 2797 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-5561f33395\" not found" node="ci-4081.3.6-n-5561f33395" Nov 8 00:06:26.443197 kubelet[2797]: E1108 00:06:26.443164 2797 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-5561f33395\" not found" node="ci-4081.3.6-n-5561f33395" Nov 8 00:06:26.443533 kubelet[2797]: E1108 00:06:26.443494 2797 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-5561f33395\" not found" node="ci-4081.3.6-n-5561f33395" Nov 8 00:06:26.443763 kubelet[2797]: E1108 00:06:26.443745 2797 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"ci-4081.3.6-n-5561f33395\" not found" node="ci-4081.3.6-n-5561f33395"
Nov 8 00:06:27.393298 kubelet[2797]: E1108 00:06:27.393251 2797 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-5561f33395\" not found" node="ci-4081.3.6-n-5561f33395"
Nov 8 00:06:27.417178 kubelet[2797]: I1108 00:06:27.417141 2797 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-5561f33395"
Nov 8 00:06:27.449442 kubelet[2797]: E1108 00:06:27.449404 2797 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-5561f33395\" not found" node="ci-4081.3.6-n-5561f33395"
Nov 8 00:06:27.449779 kubelet[2797]: E1108 00:06:27.449752 2797 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-5561f33395\" not found" node="ci-4081.3.6-n-5561f33395"
Nov 8 00:06:27.660968 kubelet[2797]: I1108 00:06:27.660565 2797 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-5561f33395"
Nov 8 00:06:27.660968 kubelet[2797]: E1108 00:06:27.660607 2797 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-5561f33395\": node \"ci-4081.3.6-n-5561f33395\" not found"
Nov 8 00:06:27.671652 kubelet[2797]: E1108 00:06:27.671605 2797 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-5561f33395\" not found"
Nov 8 00:06:27.772171 kubelet[2797]: E1108 00:06:27.772125 2797 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-5561f33395\" not found"
Nov 8 00:06:27.872690 kubelet[2797]: E1108 00:06:27.872649 2797 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-5561f33395\" not found"
Nov 8 00:06:27.973702 kubelet[2797]: E1108 00:06:27.973588 2797 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-5561f33395\" not found"
Nov 8 00:06:28.073750 kubelet[2797]: E1108 00:06:28.073694 2797 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-5561f33395\" not found"
Nov 8 00:06:28.154175 kubelet[2797]: E1108 00:06:28.154139 2797 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-5561f33395\" not found" node="ci-4081.3.6-n-5561f33395"
Nov 8 00:06:28.174599 kubelet[2797]: E1108 00:06:28.174563 2797 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-5561f33395\" not found"
Nov 8 00:06:28.298289 kubelet[2797]: I1108 00:06:28.298151 2797 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-5561f33395"
Nov 8 00:06:28.303601 kubelet[2797]: E1108 00:06:28.303051 2797 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-5561f33395\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-5561f33395"
Nov 8 00:06:28.303601 kubelet[2797]: I1108 00:06:28.303085 2797 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-5561f33395"
Nov 8 00:06:28.305586 kubelet[2797]: E1108 00:06:28.305548 2797 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-5561f33395\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-5561f33395"
Nov 8 00:06:28.305586 kubelet[2797]: I1108 00:06:28.305577 2797 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-5561f33395"
Nov 8 00:06:28.307565 kubelet[2797]: E1108 00:06:28.307531 2797 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-5561f33395\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-5561f33395"
Nov 8 00:06:28.979774 kubelet[2797]: I1108 00:06:28.979469 2797 apiserver.go:52] "Watching apiserver"
Nov 8 00:06:28.993065 kubelet[2797]: I1108 00:06:28.993016 2797 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 8 00:06:29.957751 systemd[1]: Reloading requested from client PID 3077 ('systemctl') (unit session-9.scope)...
Nov 8 00:06:29.957768 systemd[1]: Reloading...
Nov 8 00:06:30.073057 zram_generator::config[3117]: No configuration found.
Nov 8 00:06:30.186326 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:06:30.278783 systemd[1]: Reloading finished in 320 ms.
Nov 8 00:06:30.311842 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:06:30.326267 systemd[1]: kubelet.service: Deactivated successfully.
Nov 8 00:06:30.326534 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:06:30.326591 systemd[1]: kubelet.service: Consumed 1.369s CPU time, 123.3M memory peak, 0B memory swap peak.
Nov 8 00:06:30.331444 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:06:30.443830 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:06:30.451909 (kubelet)[3181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 8 00:06:30.497635 kubelet[3181]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 8 00:06:30.497635 kubelet[3181]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 8 00:06:30.497635 kubelet[3181]: I1108 00:06:30.497608 3181 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 8 00:06:30.503504 kubelet[3181]: I1108 00:06:30.503459 3181 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Nov 8 00:06:30.503504 kubelet[3181]: I1108 00:06:30.503493 3181 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 8 00:06:30.503701 kubelet[3181]: I1108 00:06:30.503524 3181 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Nov 8 00:06:30.503701 kubelet[3181]: I1108 00:06:30.503530 3181 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 8 00:06:30.503769 kubelet[3181]: I1108 00:06:30.503746 3181 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 8 00:06:30.505081 kubelet[3181]: I1108 00:06:30.505057 3181 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Nov 8 00:06:30.507624 kubelet[3181]: I1108 00:06:30.507471 3181 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 8 00:06:30.511735 kubelet[3181]: E1108 00:06:30.511680 3181 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 8 00:06:30.512131 kubelet[3181]: I1108 00:06:30.512099 3181 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Nov 8 00:06:30.515783 kubelet[3181]: I1108 00:06:30.515752 3181 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Nov 8 00:06:30.515974 kubelet[3181]: I1108 00:06:30.515927 3181 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 8 00:06:30.516119 kubelet[3181]: I1108 00:06:30.515970 3181 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-5561f33395","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 8 00:06:30.516210 kubelet[3181]: I1108 00:06:30.516119 3181 topology_manager.go:138] "Creating topology manager with none policy"
Nov 8 00:06:30.516210 kubelet[3181]: I1108 00:06:30.516130 3181 container_manager_linux.go:306] "Creating device plugin manager"
Nov 8 00:06:30.516210 kubelet[3181]: I1108 00:06:30.516160 3181 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Nov 8 00:06:30.516911 kubelet[3181]: I1108 00:06:30.516892 3181 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:06:30.517065 kubelet[3181]: I1108 00:06:30.517051 3181 kubelet.go:475] "Attempting to sync node with API server"
Nov 8 00:06:30.517106 kubelet[3181]: I1108 00:06:30.517074 3181 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 8 00:06:30.517106 kubelet[3181]: I1108 00:06:30.517099 3181 kubelet.go:387] "Adding apiserver pod source"
Nov 8 00:06:30.518795 kubelet[3181]: I1108 00:06:30.517116 3181 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 8 00:06:30.519074 kubelet[3181]: I1108 00:06:30.519047 3181 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 8 00:06:30.519734 kubelet[3181]: I1108 00:06:30.519714 3181 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 8 00:06:30.519827 kubelet[3181]: I1108 00:06:30.519816 3181 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Nov 8 00:06:30.524241 kubelet[3181]: I1108 00:06:30.524215 3181 server.go:1262] "Started kubelet"
Nov 8 00:06:30.528604 kubelet[3181]: I1108 00:06:30.528574 3181 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 8 00:06:30.543132 kubelet[3181]: I1108 00:06:30.543010 3181 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 8 00:06:30.544047 kubelet[3181]: I1108 00:06:30.543927 3181 server.go:310] "Adding debug handlers to kubelet server"
Nov 8 00:06:30.547070 kubelet[3181]: I1108 00:06:30.547014 3181 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 8 00:06:30.547175 kubelet[3181]: I1108 00:06:30.547085 3181 server_v1.go:49] "podresources" method="list" useActivePods=true
Nov 8 00:06:30.547247 kubelet[3181]: I1108 00:06:30.547230 3181 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 8 00:06:30.553477 kubelet[3181]: I1108 00:06:30.553441 3181 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 8 00:06:30.557639 kubelet[3181]: I1108 00:06:30.555871 3181 volume_manager.go:313] "Starting Kubelet Volume Manager"
Nov 8 00:06:30.557639 kubelet[3181]: E1108 00:06:30.556116 3181 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-5561f33395\" not found"
Nov 8 00:06:30.559206 kubelet[3181]: I1108 00:06:30.559171 3181 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 8 00:06:30.559316 kubelet[3181]: I1108 00:06:30.559309 3181 reconciler.go:29] "Reconciler: start to sync state"
Nov 8 00:06:30.574445 kubelet[3181]: I1108 00:06:30.574241 3181 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Nov 8 00:06:30.575871 kubelet[3181]: I1108 00:06:30.575762 3181 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Nov 8 00:06:30.575871 kubelet[3181]: I1108 00:06:30.575801 3181 status_manager.go:244] "Starting to sync pod status with apiserver"
Nov 8 00:06:30.575871 kubelet[3181]: I1108 00:06:30.575824 3181 kubelet.go:2427] "Starting kubelet main sync loop"
Nov 8 00:06:30.576047 kubelet[3181]: E1108 00:06:30.575882 3181 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 8 00:06:30.581324 kubelet[3181]: I1108 00:06:30.577573 3181 factory.go:223] Registration of the containerd container factory successfully
Nov 8 00:06:30.581324 kubelet[3181]: I1108 00:06:30.577601 3181 factory.go:223] Registration of the systemd container factory successfully
Nov 8 00:06:30.581324 kubelet[3181]: I1108 00:06:30.577687 3181 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 8 00:06:30.584637 kubelet[3181]: E1108 00:06:30.584596 3181 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 8 00:06:30.640779 kubelet[3181]: I1108 00:06:30.640745 3181 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 8 00:06:30.640779 kubelet[3181]: I1108 00:06:30.640766 3181 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 8 00:06:30.641089 kubelet[3181]: I1108 00:06:30.640794 3181 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:06:30.641089 kubelet[3181]: I1108 00:06:30.640970 3181 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 8 00:06:30.641089 kubelet[3181]: I1108 00:06:30.640982 3181 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 8 00:06:30.641089 kubelet[3181]: I1108 00:06:30.641000 3181 policy_none.go:49] "None policy: Start"
Nov 8 00:06:30.641089 kubelet[3181]: I1108 00:06:30.641009 3181 memory_manager.go:187] "Starting memorymanager" policy="None"
Nov 8 00:06:30.641089 kubelet[3181]: I1108 00:06:30.641019 3181 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Nov 8 00:06:30.641242 kubelet[3181]: I1108 00:06:30.641122 3181 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Nov 8 00:06:30.641242 kubelet[3181]: I1108 00:06:30.641131 3181 policy_none.go:47] "Start"
Nov 8 00:06:30.648780 kubelet[3181]: E1108 00:06:30.648241 3181 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 8 00:06:30.648780 kubelet[3181]: I1108 00:06:30.648427 3181 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 8 00:06:30.648780 kubelet[3181]: I1108 00:06:30.648438 3181 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 8 00:06:30.649445 kubelet[3181]: I1108 00:06:30.649420 3181 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 8 00:06:30.653102 kubelet[3181]: E1108 00:06:30.653075 3181 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 8 00:06:30.677261 kubelet[3181]: I1108 00:06:30.677215 3181 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-5561f33395"
Nov 8 00:06:30.677603 kubelet[3181]: I1108 00:06:30.677231 3181 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-5561f33395"
Nov 8 00:06:30.677803 kubelet[3181]: I1108 00:06:30.677404 3181 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-5561f33395"
Nov 8 00:06:30.685743 kubelet[3181]: I1108 00:06:30.685707 3181 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 8 00:06:30.689691 kubelet[3181]: I1108 00:06:30.689529 3181 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 8 00:06:30.690250 kubelet[3181]: I1108 00:06:30.690237 3181 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 8 00:06:30.751694 kubelet[3181]: I1108 00:06:30.751668 3181 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-5561f33395"
Nov 8 00:06:30.767135 kubelet[3181]: I1108 00:06:30.766256 3181 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-5561f33395"
Nov 8 00:06:30.767135 kubelet[3181]: I1108 00:06:30.766355 3181 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-5561f33395"
Nov 8 00:06:30.859964 kubelet[3181]: I1108 00:06:30.859659 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cd9652fe24435698a154448ce12019a6-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-5561f33395\" (UID: \"cd9652fe24435698a154448ce12019a6\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-5561f33395"
Nov 8 00:06:30.859964 kubelet[3181]: I1108 00:06:30.859720 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e85a485aa0892d8fb9f782eca6c56311-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-5561f33395\" (UID: \"e85a485aa0892d8fb9f782eca6c56311\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-5561f33395"
Nov 8 00:06:30.859964 kubelet[3181]: I1108 00:06:30.859739 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c4ce5643d9c008d42d0e7e5790365597-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-5561f33395\" (UID: \"c4ce5643d9c008d42d0e7e5790365597\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-5561f33395"
Nov 8 00:06:30.859964 kubelet[3181]: I1108 00:06:30.859755 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4ce5643d9c008d42d0e7e5790365597-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-5561f33395\" (UID: \"c4ce5643d9c008d42d0e7e5790365597\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-5561f33395"
Nov 8 00:06:30.859964 kubelet[3181]: I1108 00:06:30.859774 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c4ce5643d9c008d42d0e7e5790365597-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-5561f33395\" (UID: \"c4ce5643d9c008d42d0e7e5790365597\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-5561f33395"
Nov 8 00:06:30.860207 kubelet[3181]: I1108 00:06:30.859789 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e85a485aa0892d8fb9f782eca6c56311-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-5561f33395\" (UID: \"e85a485aa0892d8fb9f782eca6c56311\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-5561f33395"
Nov 8 00:06:30.860207 kubelet[3181]: I1108 00:06:30.859821 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e85a485aa0892d8fb9f782eca6c56311-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-5561f33395\" (UID: \"e85a485aa0892d8fb9f782eca6c56311\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-5561f33395"
Nov 8 00:06:30.860207 kubelet[3181]: I1108 00:06:30.859835 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c4ce5643d9c008d42d0e7e5790365597-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-5561f33395\" (UID: \"c4ce5643d9c008d42d0e7e5790365597\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-5561f33395"
Nov 8 00:06:30.860207 kubelet[3181]: I1108 00:06:30.859854 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c4ce5643d9c008d42d0e7e5790365597-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-5561f33395\" (UID: \"c4ce5643d9c008d42d0e7e5790365597\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-5561f33395"
Nov 8 00:06:31.517758 kubelet[3181]: I1108 00:06:31.517722 3181 apiserver.go:52] "Watching apiserver"
Nov 8 00:06:31.560086 kubelet[3181]: I1108 00:06:31.560049 3181 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 8 00:06:31.658627 kubelet[3181]: I1108 00:06:31.658384 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-5561f33395" podStartSLOduration=1.658366064 podStartE2EDuration="1.658366064s" podCreationTimestamp="2025-11-08 00:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:06:31.637636262 +0000 UTC m=+1.181759140" watchObservedRunningTime="2025-11-08 00:06:31.658366064 +0000 UTC m=+1.202488982"
Nov 8 00:06:31.680273 kubelet[3181]: I1108 00:06:31.680059 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-5561f33395" podStartSLOduration=1.6800397839999999 podStartE2EDuration="1.680039784s" podCreationTimestamp="2025-11-08 00:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:06:31.659001502 +0000 UTC m=+1.203124460" watchObservedRunningTime="2025-11-08 00:06:31.680039784 +0000 UTC m=+1.224162702"
Nov 8 00:06:31.694954 kubelet[3181]: I1108 00:06:31.694882 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-5561f33395" podStartSLOduration=1.694865836 podStartE2EDuration="1.694865836s" podCreationTimestamp="2025-11-08 00:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:06:31.681200501 +0000 UTC m=+1.225323419" watchObservedRunningTime="2025-11-08 00:06:31.694865836 +0000 UTC m=+1.238988754"
Nov 8 00:06:34.911185 kubelet[3181]: I1108 00:06:34.911152 3181 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 8 00:06:34.911869 containerd[1717]: time="2025-11-08T00:06:34.911763308Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 8 00:06:34.912161 kubelet[3181]: I1108 00:06:34.912069 3181 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 8 00:06:35.600834 systemd[1]: Created slice kubepods-besteffort-pod17d1b60f_602c_471f_82af_16d07044a488.slice - libcontainer container kubepods-besteffort-pod17d1b60f_602c_471f_82af_16d07044a488.slice.
Nov 8 00:06:35.691171 kubelet[3181]: I1108 00:06:35.691037 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qstsf\" (UniqueName: \"kubernetes.io/projected/17d1b60f-602c-471f-82af-16d07044a488-kube-api-access-qstsf\") pod \"kube-proxy-lb8pd\" (UID: \"17d1b60f-602c-471f-82af-16d07044a488\") " pod="kube-system/kube-proxy-lb8pd"
Nov 8 00:06:35.691171 kubelet[3181]: I1108 00:06:35.691079 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/17d1b60f-602c-471f-82af-16d07044a488-kube-proxy\") pod \"kube-proxy-lb8pd\" (UID: \"17d1b60f-602c-471f-82af-16d07044a488\") " pod="kube-system/kube-proxy-lb8pd"
Nov 8 00:06:35.691171 kubelet[3181]: I1108 00:06:35.691095 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17d1b60f-602c-471f-82af-16d07044a488-xtables-lock\") pod \"kube-proxy-lb8pd\" (UID: \"17d1b60f-602c-471f-82af-16d07044a488\") " pod="kube-system/kube-proxy-lb8pd"
Nov 8 00:06:35.691171 kubelet[3181]: I1108 00:06:35.691109 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17d1b60f-602c-471f-82af-16d07044a488-lib-modules\") pod \"kube-proxy-lb8pd\" (UID: \"17d1b60f-602c-471f-82af-16d07044a488\") " pod="kube-system/kube-proxy-lb8pd"
Nov 8 00:06:35.800085 kubelet[3181]: E1108 00:06:35.799951 3181 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Nov 8 00:06:35.800085 kubelet[3181]: E1108 00:06:35.799983 3181 projected.go:196] Error preparing data for projected volume kube-api-access-qstsf for pod kube-system/kube-proxy-lb8pd: configmap "kube-root-ca.crt" not found
Nov 8 00:06:35.800085 kubelet[3181]: E1108 00:06:35.800057 3181 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17d1b60f-602c-471f-82af-16d07044a488-kube-api-access-qstsf podName:17d1b60f-602c-471f-82af-16d07044a488 nodeName:}" failed. No retries permitted until 2025-11-08 00:06:36.300034751 +0000 UTC m=+5.844157629 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qstsf" (UniqueName: "kubernetes.io/projected/17d1b60f-602c-471f-82af-16d07044a488-kube-api-access-qstsf") pod "kube-proxy-lb8pd" (UID: "17d1b60f-602c-471f-82af-16d07044a488") : configmap "kube-root-ca.crt" not found
Nov 8 00:06:36.096820 systemd[1]: Created slice kubepods-besteffort-pod4c3525f6_1619_4516_bfdc_a4b548fd0fc9.slice - libcontainer container kubepods-besteffort-pod4c3525f6_1619_4516_bfdc_a4b548fd0fc9.slice.
Nov 8 00:06:36.194605 kubelet[3181]: I1108 00:06:36.194479 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4c3525f6-1619-4516-bfdc-a4b548fd0fc9-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-ldqsw\" (UID: \"4c3525f6-1619-4516-bfdc-a4b548fd0fc9\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-ldqsw"
Nov 8 00:06:36.194605 kubelet[3181]: I1108 00:06:36.194546 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlf86\" (UniqueName: \"kubernetes.io/projected/4c3525f6-1619-4516-bfdc-a4b548fd0fc9-kube-api-access-vlf86\") pod \"tigera-operator-65cdcdfd6d-ldqsw\" (UID: \"4c3525f6-1619-4516-bfdc-a4b548fd0fc9\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-ldqsw"
Nov 8 00:06:36.407136 containerd[1717]: time="2025-11-08T00:06:36.406624993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-ldqsw,Uid:4c3525f6-1619-4516-bfdc-a4b548fd0fc9,Namespace:tigera-operator,Attempt:0,}"
Nov 8 00:06:36.451904 containerd[1717]: time="2025-11-08T00:06:36.451612790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:06:36.451904 containerd[1717]: time="2025-11-08T00:06:36.451675950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:06:36.451904 containerd[1717]: time="2025-11-08T00:06:36.451691870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:06:36.451904 containerd[1717]: time="2025-11-08T00:06:36.451782590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:06:36.479141 systemd[1]: Started cri-containerd-c817ed7300d5d5983f3f7c03f90fc41007d2eed8017148741186da97f4f37ea0.scope - libcontainer container c817ed7300d5d5983f3f7c03f90fc41007d2eed8017148741186da97f4f37ea0.
Nov 8 00:06:36.506558 containerd[1717]: time="2025-11-08T00:06:36.506476769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-ldqsw,Uid:4c3525f6-1619-4516-bfdc-a4b548fd0fc9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c817ed7300d5d5983f3f7c03f90fc41007d2eed8017148741186da97f4f37ea0\""
Nov 8 00:06:36.509414 containerd[1717]: time="2025-11-08T00:06:36.509384124Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 8 00:06:36.519502 containerd[1717]: time="2025-11-08T00:06:36.519428345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lb8pd,Uid:17d1b60f-602c-471f-82af-16d07044a488,Namespace:kube-system,Attempt:0,}"
Nov 8 00:06:36.569125 containerd[1717]: time="2025-11-08T00:06:36.569052294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:06:36.569491 containerd[1717]: time="2025-11-08T00:06:36.569332374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:06:36.569491 containerd[1717]: time="2025-11-08T00:06:36.569374853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:06:36.569588 containerd[1717]: time="2025-11-08T00:06:36.569478293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:06:36.590150 systemd[1]: Started cri-containerd-aaae550fb58aeb48dbcabbc805352f85886b8eaa868f60e59217a2c2f2605d0f.scope - libcontainer container aaae550fb58aeb48dbcabbc805352f85886b8eaa868f60e59217a2c2f2605d0f.
Nov 8 00:06:36.612047 containerd[1717]: time="2025-11-08T00:06:36.611911975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lb8pd,Uid:17d1b60f-602c-471f-82af-16d07044a488,Namespace:kube-system,Attempt:0,} returns sandbox id \"aaae550fb58aeb48dbcabbc805352f85886b8eaa868f60e59217a2c2f2605d0f\""
Nov 8 00:06:36.628443 containerd[1717]: time="2025-11-08T00:06:36.628351105Z" level=info msg="CreateContainer within sandbox \"aaae550fb58aeb48dbcabbc805352f85886b8eaa868f60e59217a2c2f2605d0f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 8 00:06:36.676850 containerd[1717]: time="2025-11-08T00:06:36.676619936Z" level=info msg="CreateContainer within sandbox \"aaae550fb58aeb48dbcabbc805352f85886b8eaa868f60e59217a2c2f2605d0f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8160ba8ed433a8386c68b8126ac54e5c0b8daf4b648030c1e80b9e41a8c094ac\""
Nov 8 00:06:36.678264 containerd[1717]: time="2025-11-08T00:06:36.678227293Z" level=info msg="StartContainer for \"8160ba8ed433a8386c68b8126ac54e5c0b8daf4b648030c1e80b9e41a8c094ac\""
Nov 8 00:06:36.702214 systemd[1]: Started cri-containerd-8160ba8ed433a8386c68b8126ac54e5c0b8daf4b648030c1e80b9e41a8c094ac.scope - libcontainer container 8160ba8ed433a8386c68b8126ac54e5c0b8daf4b648030c1e80b9e41a8c094ac.
Nov 8 00:06:36.736197 containerd[1717]: time="2025-11-08T00:06:36.736065706Z" level=info msg="StartContainer for \"8160ba8ed433a8386c68b8126ac54e5c0b8daf4b648030c1e80b9e41a8c094ac\" returns successfully"
Nov 8 00:06:37.311528 systemd[1]: run-containerd-runc-k8s.io-c817ed7300d5d5983f3f7c03f90fc41007d2eed8017148741186da97f4f37ea0-runc.nXLM8X.mount: Deactivated successfully.
Nov 8 00:06:37.700413 kubelet[3181]: I1108 00:06:37.699857 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lb8pd" podStartSLOduration=2.69983981 podStartE2EDuration="2.69983981s" podCreationTimestamp="2025-11-08 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:06:37.643873513 +0000 UTC m=+7.187996431" watchObservedRunningTime="2025-11-08 00:06:37.69983981 +0000 UTC m=+7.243962728"
Nov 8 00:06:39.177046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount277341772.mount: Deactivated successfully.
Nov 8 00:06:41.353356 containerd[1717]: time="2025-11-08T00:06:41.353297672Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:06:41.357584 containerd[1717]: time="2025-11-08T00:06:41.357526104Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004"
Nov 8 00:06:41.361386 containerd[1717]: time="2025-11-08T00:06:41.361351818Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:06:41.366438 containerd[1717]: time="2025-11-08T00:06:41.366395249Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:06:41.367113 containerd[1717]: time="2025-11-08T00:06:41.366993048Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 4.857432404s"
Nov 8 00:06:41.367113 containerd[1717]: time="2025-11-08T00:06:41.367031248Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\""
Nov 8 00:06:41.375447 containerd[1717]: time="2025-11-08T00:06:41.375310313Z" level=info msg="CreateContainer within sandbox \"c817ed7300d5d5983f3f7c03f90fc41007d2eed8017148741186da97f4f37ea0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 8 00:06:41.394653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2017240869.mount: Deactivated successfully.
Nov 8 00:06:41.409074 containerd[1717]: time="2025-11-08T00:06:41.409027854Z" level=info msg="CreateContainer within sandbox \"c817ed7300d5d5983f3f7c03f90fc41007d2eed8017148741186da97f4f37ea0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d5f32e8e0487a5482204dc2e75f7a247ff17d6d30e192437e0b730092abc32cd\""
Nov 8 00:06:41.410172 containerd[1717]: time="2025-11-08T00:06:41.410077732Z" level=info msg="StartContainer for \"d5f32e8e0487a5482204dc2e75f7a247ff17d6d30e192437e0b730092abc32cd\""
Nov 8 00:06:41.441157 systemd[1]: Started cri-containerd-d5f32e8e0487a5482204dc2e75f7a247ff17d6d30e192437e0b730092abc32cd.scope - libcontainer container d5f32e8e0487a5482204dc2e75f7a247ff17d6d30e192437e0b730092abc32cd.
Nov 8 00:06:41.470033 containerd[1717]: time="2025-11-08T00:06:41.469882187Z" level=info msg="StartContainer for \"d5f32e8e0487a5482204dc2e75f7a247ff17d6d30e192437e0b730092abc32cd\" returns successfully" Nov 8 00:06:42.214109 kubelet[3181]: I1108 00:06:42.213734 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-ldqsw" podStartSLOduration=1.354002039 podStartE2EDuration="6.213716279s" podCreationTimestamp="2025-11-08 00:06:36 +0000 UTC" firstStartedPulling="2025-11-08 00:06:36.508150726 +0000 UTC m=+6.052273644" lastFinishedPulling="2025-11-08 00:06:41.367865006 +0000 UTC m=+10.911987884" observedRunningTime="2025-11-08 00:06:41.652690626 +0000 UTC m=+11.196813544" watchObservedRunningTime="2025-11-08 00:06:42.213716279 +0000 UTC m=+11.757839197" Nov 8 00:06:47.421069 sudo[2244]: pam_unix(sudo:session): session closed for user root Nov 8 00:06:47.513445 sshd[2241]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:47.517176 systemd[1]: sshd@6-10.200.20.15:22-10.200.16.10:51708.service: Deactivated successfully. Nov 8 00:06:47.522138 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:06:47.523675 systemd[1]: session-9.scope: Consumed 8.060s CPU time, 153.0M memory peak, 0B memory swap peak. Nov 8 00:06:47.524392 systemd-logind[1698]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:06:47.527528 systemd-logind[1698]: Removed session 9. Nov 8 00:06:58.579874 systemd[1]: Created slice kubepods-besteffort-pod086948d5_d750_4ae7_8373_27279330c801.slice - libcontainer container kubepods-besteffort-pod086948d5_d750_4ae7_8373_27279330c801.slice. 
Nov 8 00:06:58.641422 kubelet[3181]: I1108 00:06:58.641372 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr7jc\" (UniqueName: \"kubernetes.io/projected/086948d5-d750-4ae7-8373-27279330c801-kube-api-access-mr7jc\") pod \"calico-typha-6d5cd4b66d-696sm\" (UID: \"086948d5-d750-4ae7-8373-27279330c801\") " pod="calico-system/calico-typha-6d5cd4b66d-696sm" Nov 8 00:06:58.641422 kubelet[3181]: I1108 00:06:58.641421 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/086948d5-d750-4ae7-8373-27279330c801-tigera-ca-bundle\") pod \"calico-typha-6d5cd4b66d-696sm\" (UID: \"086948d5-d750-4ae7-8373-27279330c801\") " pod="calico-system/calico-typha-6d5cd4b66d-696sm" Nov 8 00:06:58.641860 kubelet[3181]: I1108 00:06:58.641442 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/086948d5-d750-4ae7-8373-27279330c801-typha-certs\") pod \"calico-typha-6d5cd4b66d-696sm\" (UID: \"086948d5-d750-4ae7-8373-27279330c801\") " pod="calico-system/calico-typha-6d5cd4b66d-696sm" Nov 8 00:06:58.766334 systemd[1]: Created slice kubepods-besteffort-pod4eefc2ce_17e7_45a1_b97b_9632dbd977c1.slice - libcontainer container kubepods-besteffort-pod4eefc2ce_17e7_45a1_b97b_9632dbd977c1.slice. 
Nov 8 00:06:58.842069 kubelet[3181]: I1108 00:06:58.841900 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4eefc2ce-17e7-45a1-b97b-9632dbd977c1-node-certs\") pod \"calico-node-rmn8t\" (UID: \"4eefc2ce-17e7-45a1-b97b-9632dbd977c1\") " pod="calico-system/calico-node-rmn8t" Nov 8 00:06:58.842069 kubelet[3181]: I1108 00:06:58.841969 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4eefc2ce-17e7-45a1-b97b-9632dbd977c1-var-run-calico\") pod \"calico-node-rmn8t\" (UID: \"4eefc2ce-17e7-45a1-b97b-9632dbd977c1\") " pod="calico-system/calico-node-rmn8t" Nov 8 00:06:58.842069 kubelet[3181]: I1108 00:06:58.842008 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4eefc2ce-17e7-45a1-b97b-9632dbd977c1-cni-bin-dir\") pod \"calico-node-rmn8t\" (UID: \"4eefc2ce-17e7-45a1-b97b-9632dbd977c1\") " pod="calico-system/calico-node-rmn8t" Nov 8 00:06:58.842069 kubelet[3181]: I1108 00:06:58.842027 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4eefc2ce-17e7-45a1-b97b-9632dbd977c1-var-lib-calico\") pod \"calico-node-rmn8t\" (UID: \"4eefc2ce-17e7-45a1-b97b-9632dbd977c1\") " pod="calico-system/calico-node-rmn8t" Nov 8 00:06:58.842069 kubelet[3181]: I1108 00:06:58.842044 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4eefc2ce-17e7-45a1-b97b-9632dbd977c1-xtables-lock\") pod \"calico-node-rmn8t\" (UID: \"4eefc2ce-17e7-45a1-b97b-9632dbd977c1\") " pod="calico-system/calico-node-rmn8t" Nov 8 00:06:58.842287 kubelet[3181]: I1108 00:06:58.842059 3181 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4eefc2ce-17e7-45a1-b97b-9632dbd977c1-cni-net-dir\") pod \"calico-node-rmn8t\" (UID: \"4eefc2ce-17e7-45a1-b97b-9632dbd977c1\") " pod="calico-system/calico-node-rmn8t" Nov 8 00:06:58.842287 kubelet[3181]: I1108 00:06:58.842073 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4eefc2ce-17e7-45a1-b97b-9632dbd977c1-policysync\") pod \"calico-node-rmn8t\" (UID: \"4eefc2ce-17e7-45a1-b97b-9632dbd977c1\") " pod="calico-system/calico-node-rmn8t" Nov 8 00:06:58.842287 kubelet[3181]: I1108 00:06:58.842089 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4eefc2ce-17e7-45a1-b97b-9632dbd977c1-cni-log-dir\") pod \"calico-node-rmn8t\" (UID: \"4eefc2ce-17e7-45a1-b97b-9632dbd977c1\") " pod="calico-system/calico-node-rmn8t" Nov 8 00:06:58.842287 kubelet[3181]: I1108 00:06:58.842102 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4eefc2ce-17e7-45a1-b97b-9632dbd977c1-lib-modules\") pod \"calico-node-rmn8t\" (UID: \"4eefc2ce-17e7-45a1-b97b-9632dbd977c1\") " pod="calico-system/calico-node-rmn8t" Nov 8 00:06:58.842287 kubelet[3181]: I1108 00:06:58.842115 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4eefc2ce-17e7-45a1-b97b-9632dbd977c1-flexvol-driver-host\") pod \"calico-node-rmn8t\" (UID: \"4eefc2ce-17e7-45a1-b97b-9632dbd977c1\") " pod="calico-system/calico-node-rmn8t" Nov 8 00:06:58.842395 kubelet[3181]: I1108 00:06:58.842130 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4eefc2ce-17e7-45a1-b97b-9632dbd977c1-tigera-ca-bundle\") pod \"calico-node-rmn8t\" (UID: \"4eefc2ce-17e7-45a1-b97b-9632dbd977c1\") " pod="calico-system/calico-node-rmn8t" Nov 8 00:06:58.842395 kubelet[3181]: I1108 00:06:58.842144 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qx26\" (UniqueName: \"kubernetes.io/projected/4eefc2ce-17e7-45a1-b97b-9632dbd977c1-kube-api-access-9qx26\") pod \"calico-node-rmn8t\" (UID: \"4eefc2ce-17e7-45a1-b97b-9632dbd977c1\") " pod="calico-system/calico-node-rmn8t" Nov 8 00:06:58.893122 containerd[1717]: time="2025-11-08T00:06:58.891511591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d5cd4b66d-696sm,Uid:086948d5-d750-4ae7-8373-27279330c801,Namespace:calico-system,Attempt:0,}" Nov 8 00:06:58.947700 containerd[1717]: time="2025-11-08T00:06:58.946569649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:58.947700 containerd[1717]: time="2025-11-08T00:06:58.946638609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:58.947700 containerd[1717]: time="2025-11-08T00:06:58.946650369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:58.947700 containerd[1717]: time="2025-11-08T00:06:58.946738529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:58.952353 kubelet[3181]: E1108 00:06:58.951165 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:58.952353 kubelet[3181]: W1108 00:06:58.951191 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:58.952353 kubelet[3181]: E1108 00:06:58.951214 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:58.953117 kubelet[3181]: E1108 00:06:58.952998 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:58.953117 kubelet[3181]: W1108 00:06:58.953016 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:58.953117 kubelet[3181]: E1108 00:06:58.953032 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:58.955971 kubelet[3181]: E1108 00:06:58.954631 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:58.955971 kubelet[3181]: W1108 00:06:58.954644 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:58.955971 kubelet[3181]: E1108 00:06:58.954659 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:58.957043 kubelet[3181]: E1108 00:06:58.957023 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:58.957153 kubelet[3181]: W1108 00:06:58.957138 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:58.957212 kubelet[3181]: E1108 00:06:58.957201 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:58.958828 kubelet[3181]: E1108 00:06:58.958807 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:58.962091 kubelet[3181]: W1108 00:06:58.958878 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:58.962091 kubelet[3181]: E1108 00:06:58.958896 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:58.966240 kubelet[3181]: E1108 00:06:58.965994 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84" Nov 8 00:06:58.982631 kubelet[3181]: E1108 00:06:58.982571 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:58.983160 kubelet[3181]: W1108 00:06:58.982918 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:58.983160 kubelet[3181]: E1108 00:06:58.983042 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:58.995356 systemd[1]: Started cri-containerd-0be4e737effdd9d1dc803d215fb02de7d5459727d219fe3830a5972fc83f6186.scope - libcontainer container 0be4e737effdd9d1dc803d215fb02de7d5459727d219fe3830a5972fc83f6186. 
Nov 8 00:06:59.038790 kubelet[3181]: E1108 00:06:59.038427 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.038790 kubelet[3181]: W1108 00:06:59.038454 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.038790 kubelet[3181]: E1108 00:06:59.038477 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.039396 kubelet[3181]: E1108 00:06:59.039132 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.040173 kubelet[3181]: W1108 00:06:59.039146 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.040173 kubelet[3181]: E1108 00:06:59.039732 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.041989 kubelet[3181]: E1108 00:06:59.041886 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.041989 kubelet[3181]: W1108 00:06:59.041903 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.041989 kubelet[3181]: E1108 00:06:59.041922 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.045322 kubelet[3181]: E1108 00:06:59.044466 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.045322 kubelet[3181]: W1108 00:06:59.044485 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.045322 kubelet[3181]: E1108 00:06:59.044502 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.046029 kubelet[3181]: E1108 00:06:59.045684 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.046205 kubelet[3181]: W1108 00:06:59.046129 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.046205 kubelet[3181]: E1108 00:06:59.046157 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.047040 kubelet[3181]: E1108 00:06:59.047023 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.047183 kubelet[3181]: W1108 00:06:59.047121 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.047183 kubelet[3181]: E1108 00:06:59.047140 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.047432 kubelet[3181]: E1108 00:06:59.047417 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.047613 kubelet[3181]: W1108 00:06:59.047598 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.047868 kubelet[3181]: E1108 00:06:59.047757 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.051345 kubelet[3181]: E1108 00:06:59.049929 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.051345 kubelet[3181]: W1108 00:06:59.050470 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.051345 kubelet[3181]: E1108 00:06:59.050487 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.052856 kubelet[3181]: E1108 00:06:59.052757 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.054013 kubelet[3181]: W1108 00:06:59.053607 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.054013 kubelet[3181]: E1108 00:06:59.053643 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.055122 kubelet[3181]: E1108 00:06:59.054818 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.055122 kubelet[3181]: W1108 00:06:59.054835 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.055122 kubelet[3181]: E1108 00:06:59.054853 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.057431 kubelet[3181]: E1108 00:06:59.057206 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.057431 kubelet[3181]: W1108 00:06:59.057224 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.057431 kubelet[3181]: E1108 00:06:59.057242 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.058582 kubelet[3181]: E1108 00:06:59.058157 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.058582 kubelet[3181]: W1108 00:06:59.058261 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.058582 kubelet[3181]: E1108 00:06:59.058279 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.059106 kubelet[3181]: E1108 00:06:59.058863 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.059106 kubelet[3181]: W1108 00:06:59.058966 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.059106 kubelet[3181]: E1108 00:06:59.058982 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.060105 kubelet[3181]: E1108 00:06:59.059835 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.060105 kubelet[3181]: W1108 00:06:59.059850 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.060105 kubelet[3181]: E1108 00:06:59.059862 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.061021 kubelet[3181]: E1108 00:06:59.060756 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.061021 kubelet[3181]: W1108 00:06:59.060769 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.061021 kubelet[3181]: E1108 00:06:59.060781 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.061807 kubelet[3181]: E1108 00:06:59.061585 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.061807 kubelet[3181]: W1108 00:06:59.061601 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.061807 kubelet[3181]: E1108 00:06:59.061613 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.062599 kubelet[3181]: E1108 00:06:59.062317 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.062599 kubelet[3181]: W1108 00:06:59.062332 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.062599 kubelet[3181]: E1108 00:06:59.062344 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.063282 kubelet[3181]: E1108 00:06:59.063071 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.063282 kubelet[3181]: W1108 00:06:59.063088 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.063282 kubelet[3181]: E1108 00:06:59.063100 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.064273 kubelet[3181]: E1108 00:06:59.064248 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.064861 kubelet[3181]: W1108 00:06:59.064712 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.064861 kubelet[3181]: E1108 00:06:59.064736 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.067123 kubelet[3181]: E1108 00:06:59.066743 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.067123 kubelet[3181]: W1108 00:06:59.066765 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.067123 kubelet[3181]: E1108 00:06:59.066778 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.067525 kubelet[3181]: E1108 00:06:59.067465 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.067706 kubelet[3181]: W1108 00:06:59.067625 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.067706 kubelet[3181]: E1108 00:06:59.067644 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.067907 containerd[1717]: time="2025-11-08T00:06:59.067619665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d5cd4b66d-696sm,Uid:086948d5-d750-4ae7-8373-27279330c801,Namespace:calico-system,Attempt:0,} returns sandbox id \"0be4e737effdd9d1dc803d215fb02de7d5459727d219fe3830a5972fc83f6186\"" Nov 8 00:06:59.068115 kubelet[3181]: I1108 00:06:59.067752 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70822f24-312d-4073-b204-5c6b6a26eb84-kubelet-dir\") pod \"csi-node-driver-8jr45\" (UID: \"70822f24-312d-4073-b204-5c6b6a26eb84\") " pod="calico-system/csi-node-driver-8jr45" Nov 8 00:06:59.068798 kubelet[3181]: E1108 00:06:59.068721 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.068798 kubelet[3181]: W1108 00:06:59.068737 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.068798 kubelet[3181]: E1108 00:06:59.068752 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.069137 kubelet[3181]: I1108 00:06:59.068991 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/70822f24-312d-4073-b204-5c6b6a26eb84-socket-dir\") pod \"csi-node-driver-8jr45\" (UID: \"70822f24-312d-4073-b204-5c6b6a26eb84\") " pod="calico-system/csi-node-driver-8jr45" Nov 8 00:06:59.069687 kubelet[3181]: E1108 00:06:59.069212 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.070891 kubelet[3181]: W1108 00:06:59.069690 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.070891 kubelet[3181]: E1108 00:06:59.069705 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.070891 kubelet[3181]: E1108 00:06:59.069952 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.070891 kubelet[3181]: W1108 00:06:59.069964 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.070891 kubelet[3181]: E1108 00:06:59.069974 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.070891 kubelet[3181]: E1108 00:06:59.070392 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.070891 kubelet[3181]: W1108 00:06:59.070406 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.070891 kubelet[3181]: E1108 00:06:59.070524 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.070891 kubelet[3181]: I1108 00:06:59.070545 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/70822f24-312d-4073-b204-5c6b6a26eb84-registration-dir\") pod \"csi-node-driver-8jr45\" (UID: \"70822f24-312d-4073-b204-5c6b6a26eb84\") " pod="calico-system/csi-node-driver-8jr45" Nov 8 00:06:59.071217 kubelet[3181]: E1108 00:06:59.071174 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.071217 kubelet[3181]: W1108 00:06:59.071188 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.071217 kubelet[3181]: E1108 00:06:59.071200 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.071217 kubelet[3181]: I1108 00:06:59.071216 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5xm8\" (UniqueName: \"kubernetes.io/projected/70822f24-312d-4073-b204-5c6b6a26eb84-kube-api-access-t5xm8\") pod \"csi-node-driver-8jr45\" (UID: \"70822f24-312d-4073-b204-5c6b6a26eb84\") " pod="calico-system/csi-node-driver-8jr45" Nov 8 00:06:59.072244 containerd[1717]: time="2025-11-08T00:06:59.071817538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 8 00:06:59.072971 kubelet[3181]: E1108 00:06:59.072505 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.072971 kubelet[3181]: W1108 00:06:59.072609 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.072971 kubelet[3181]: E1108 00:06:59.072627 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.072971 kubelet[3181]: I1108 00:06:59.072652 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/70822f24-312d-4073-b204-5c6b6a26eb84-varrun\") pod \"csi-node-driver-8jr45\" (UID: \"70822f24-312d-4073-b204-5c6b6a26eb84\") " pod="calico-system/csi-node-driver-8jr45" Nov 8 00:06:59.073118 kubelet[3181]: E1108 00:06:59.073004 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.073118 kubelet[3181]: W1108 00:06:59.073021 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.073118 kubelet[3181]: E1108 00:06:59.073032 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.074234 kubelet[3181]: E1108 00:06:59.074201 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.074234 kubelet[3181]: W1108 00:06:59.074233 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.074464 kubelet[3181]: E1108 00:06:59.074250 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.074590 kubelet[3181]: E1108 00:06:59.074568 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.074590 kubelet[3181]: W1108 00:06:59.074582 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.074699 kubelet[3181]: E1108 00:06:59.074597 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.074795 kubelet[3181]: E1108 00:06:59.074781 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.074845 kubelet[3181]: W1108 00:06:59.074794 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.074845 kubelet[3181]: E1108 00:06:59.074805 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.075155 kubelet[3181]: E1108 00:06:59.075137 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.075155 kubelet[3181]: W1108 00:06:59.075151 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.076558 kubelet[3181]: E1108 00:06:59.076518 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.076840 kubelet[3181]: E1108 00:06:59.076826 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.076840 kubelet[3181]: W1108 00:06:59.076839 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.077031 kubelet[3181]: E1108 00:06:59.076853 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.077171 kubelet[3181]: E1108 00:06:59.077148 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.077171 kubelet[3181]: W1108 00:06:59.077162 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.077571 kubelet[3181]: E1108 00:06:59.077173 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.077571 kubelet[3181]: E1108 00:06:59.077320 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.077571 kubelet[3181]: W1108 00:06:59.077328 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.077571 kubelet[3181]: E1108 00:06:59.077336 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.086380 containerd[1717]: time="2025-11-08T00:06:59.086337871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rmn8t,Uid:4eefc2ce-17e7-45a1-b97b-9632dbd977c1,Namespace:calico-system,Attempt:0,}" Nov 8 00:06:59.133425 containerd[1717]: time="2025-11-08T00:06:59.133196904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:59.133425 containerd[1717]: time="2025-11-08T00:06:59.133265024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:59.133425 containerd[1717]: time="2025-11-08T00:06:59.133281384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:59.133425 containerd[1717]: time="2025-11-08T00:06:59.133369144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:59.157146 systemd[1]: Started cri-containerd-a945d1f8f04dd0176009e6e697792556dfd26f1c4d9ead482d81aef4ea7c226b.scope - libcontainer container a945d1f8f04dd0176009e6e697792556dfd26f1c4d9ead482d81aef4ea7c226b. Nov 8 00:06:59.174564 kubelet[3181]: E1108 00:06:59.174482 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.174564 kubelet[3181]: W1108 00:06:59.174510 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.174920 kubelet[3181]: E1108 00:06:59.174745 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.175516 kubelet[3181]: E1108 00:06:59.175352 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.175516 kubelet[3181]: W1108 00:06:59.175368 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.175516 kubelet[3181]: E1108 00:06:59.175382 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.175805 kubelet[3181]: E1108 00:06:59.175786 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.175805 kubelet[3181]: W1108 00:06:59.175803 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.175922 kubelet[3181]: E1108 00:06:59.175816 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.176525 kubelet[3181]: E1108 00:06:59.176503 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.176525 kubelet[3181]: W1108 00:06:59.176522 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.176816 kubelet[3181]: E1108 00:06:59.176538 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.178350 kubelet[3181]: E1108 00:06:59.178327 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.178350 kubelet[3181]: W1108 00:06:59.178348 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.178560 kubelet[3181]: E1108 00:06:59.178364 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.178762 kubelet[3181]: E1108 00:06:59.178747 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.178803 kubelet[3181]: W1108 00:06:59.178761 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.178803 kubelet[3181]: E1108 00:06:59.178773 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.179786 kubelet[3181]: E1108 00:06:59.179763 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.179955 kubelet[3181]: W1108 00:06:59.179782 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.180011 kubelet[3181]: E1108 00:06:59.179989 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.180395 kubelet[3181]: E1108 00:06:59.180378 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.180395 kubelet[3181]: W1108 00:06:59.180393 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.180720 kubelet[3181]: E1108 00:06:59.180405 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.184074 kubelet[3181]: E1108 00:06:59.182526 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.184074 kubelet[3181]: W1108 00:06:59.182550 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.184074 kubelet[3181]: E1108 00:06:59.182729 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.185030 kubelet[3181]: E1108 00:06:59.184819 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.185030 kubelet[3181]: W1108 00:06:59.184836 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.185030 kubelet[3181]: E1108 00:06:59.184865 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.187333 kubelet[3181]: E1108 00:06:59.187159 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.187333 kubelet[3181]: W1108 00:06:59.187188 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.187333 kubelet[3181]: E1108 00:06:59.187206 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.188384 kubelet[3181]: E1108 00:06:59.188176 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.188384 kubelet[3181]: W1108 00:06:59.188205 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.188384 kubelet[3181]: E1108 00:06:59.188221 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.189096 kubelet[3181]: E1108 00:06:59.189041 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.189096 kubelet[3181]: W1108 00:06:59.189057 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.189096 kubelet[3181]: E1108 00:06:59.189071 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.192827 containerd[1717]: time="2025-11-08T00:06:59.192396275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rmn8t,Uid:4eefc2ce-17e7-45a1-b97b-9632dbd977c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"a945d1f8f04dd0176009e6e697792556dfd26f1c4d9ead482d81aef4ea7c226b\"" Nov 8 00:06:59.192929 kubelet[3181]: E1108 00:06:59.192723 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.192929 kubelet[3181]: W1108 00:06:59.192736 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.192929 kubelet[3181]: E1108 00:06:59.192752 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.195079 kubelet[3181]: E1108 00:06:59.193982 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.195079 kubelet[3181]: W1108 00:06:59.194002 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.195079 kubelet[3181]: E1108 00:06:59.194017 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.195079 kubelet[3181]: E1108 00:06:59.194172 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.195079 kubelet[3181]: W1108 00:06:59.194180 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.195079 kubelet[3181]: E1108 00:06:59.194189 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.195079 kubelet[3181]: E1108 00:06:59.194316 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.195079 kubelet[3181]: W1108 00:06:59.194323 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.195079 kubelet[3181]: E1108 00:06:59.194330 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.195079 kubelet[3181]: E1108 00:06:59.194535 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.195355 kubelet[3181]: W1108 00:06:59.194543 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.195355 kubelet[3181]: E1108 00:06:59.194552 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.195355 kubelet[3181]: E1108 00:06:59.194728 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.195355 kubelet[3181]: W1108 00:06:59.194737 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.195355 kubelet[3181]: E1108 00:06:59.194750 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.195355 kubelet[3181]: E1108 00:06:59.194863 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.195355 kubelet[3181]: W1108 00:06:59.194872 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.195355 kubelet[3181]: E1108 00:06:59.194886 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.195355 kubelet[3181]: E1108 00:06:59.195077 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.195355 kubelet[3181]: W1108 00:06:59.195085 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.195568 kubelet[3181]: E1108 00:06:59.195093 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.195568 kubelet[3181]: E1108 00:06:59.195240 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.195568 kubelet[3181]: W1108 00:06:59.195247 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.195568 kubelet[3181]: E1108 00:06:59.195255 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.195568 kubelet[3181]: E1108 00:06:59.195416 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.195568 kubelet[3181]: W1108 00:06:59.195423 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.195568 kubelet[3181]: E1108 00:06:59.195432 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.195568 kubelet[3181]: E1108 00:06:59.195558 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.195568 kubelet[3181]: W1108 00:06:59.195565 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.195763 kubelet[3181]: E1108 00:06:59.195572 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:59.195865 kubelet[3181]: E1108 00:06:59.195837 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.195865 kubelet[3181]: W1108 00:06:59.195853 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.195921 kubelet[3181]: E1108 00:06:59.195868 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:59.203974 kubelet[3181]: E1108 00:06:59.203876 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:59.203974 kubelet[3181]: W1108 00:06:59.203966 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:59.204145 kubelet[3181]: E1108 00:06:59.203988 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:07:00.513816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1669325647.mount: Deactivated successfully. Nov 8 00:07:00.580334 kubelet[3181]: E1108 00:07:00.577660 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84" Nov 8 00:07:01.074572 containerd[1717]: time="2025-11-08T00:07:01.073822834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:07:01.077643 containerd[1717]: time="2025-11-08T00:07:01.077579587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Nov 8 00:07:01.085962 containerd[1717]: time="2025-11-08T00:07:01.085196213Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:07:01.090962 containerd[1717]: time="2025-11-08T00:07:01.090109444Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:07:01.091191 containerd[1717]: time="2025-11-08T00:07:01.091161322Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.019295384s" Nov 8 00:07:01.091330 containerd[1717]: time="2025-11-08T00:07:01.091220562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 8 00:07:01.095204 containerd[1717]: time="2025-11-08T00:07:01.095162955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:07:01.124660 containerd[1717]: time="2025-11-08T00:07:01.124519260Z" level=info msg="CreateContainer within sandbox \"0be4e737effdd9d1dc803d215fb02de7d5459727d219fe3830a5972fc83f6186\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 00:07:01.175127 containerd[1717]: time="2025-11-08T00:07:01.174986527Z" level=info msg="CreateContainer within sandbox \"0be4e737effdd9d1dc803d215fb02de7d5459727d219fe3830a5972fc83f6186\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bd2c9e922e5791a0c8e3df2f6fffa8ab388b816ce3f2d2ef866d71c614dd888b\"" Nov 8 00:07:01.176531 containerd[1717]: time="2025-11-08T00:07:01.176431404Z" level=info msg="StartContainer for \"bd2c9e922e5791a0c8e3df2f6fffa8ab388b816ce3f2d2ef866d71c614dd888b\"" Nov 8 00:07:01.206144 systemd[1]: Started cri-containerd-bd2c9e922e5791a0c8e3df2f6fffa8ab388b816ce3f2d2ef866d71c614dd888b.scope - libcontainer container 
bd2c9e922e5791a0c8e3df2f6fffa8ab388b816ce3f2d2ef866d71c614dd888b. Nov 8 00:07:01.266228 containerd[1717]: time="2025-11-08T00:07:01.266169038Z" level=info msg="StartContainer for \"bd2c9e922e5791a0c8e3df2f6fffa8ab388b816ce3f2d2ef866d71c614dd888b\" returns successfully" Nov 8 00:07:01.783549 kubelet[3181]: E1108 00:07:01.783410 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.783549 kubelet[3181]: W1108 00:07:01.783435 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.783549 kubelet[3181]: E1108 00:07:01.783455 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:07:01.784198 kubelet[3181]: E1108 00:07:01.784042 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.784198 kubelet[3181]: W1108 00:07:01.784060 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.784198 kubelet[3181]: E1108 00:07:01.784105 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:07:01.784374 kubelet[3181]: E1108 00:07:01.784362 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.784432 kubelet[3181]: W1108 00:07:01.784422 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.784797 kubelet[3181]: E1108 00:07:01.784476 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:07:01.784927 kubelet[3181]: E1108 00:07:01.784913 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.785012 kubelet[3181]: W1108 00:07:01.785000 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.785066 kubelet[3181]: E1108 00:07:01.785055 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:07:01.785430 kubelet[3181]: E1108 00:07:01.785330 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.785430 kubelet[3181]: W1108 00:07:01.785343 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.785430 kubelet[3181]: E1108 00:07:01.785354 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:07:01.785600 kubelet[3181]: E1108 00:07:01.785588 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.785651 kubelet[3181]: W1108 00:07:01.785641 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.785698 kubelet[3181]: E1108 00:07:01.785688 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:07:01.785960 kubelet[3181]: E1108 00:07:01.785917 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.785960 kubelet[3181]: W1108 00:07:01.785927 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.786140 kubelet[3181]: E1108 00:07:01.785947 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:07:01.786347 kubelet[3181]: E1108 00:07:01.786230 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.786347 kubelet[3181]: W1108 00:07:01.786240 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.786347 kubelet[3181]: E1108 00:07:01.786250 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:07:01.786507 kubelet[3181]: E1108 00:07:01.786495 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.786562 kubelet[3181]: W1108 00:07:01.786552 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.786641 kubelet[3181]: E1108 00:07:01.786631 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:07:01.787103 kubelet[3181]: E1108 00:07:01.786988 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.787103 kubelet[3181]: W1108 00:07:01.787002 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.787103 kubelet[3181]: E1108 00:07:01.787013 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:07:01.787284 kubelet[3181]: E1108 00:07:01.787271 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.787436 kubelet[3181]: W1108 00:07:01.787334 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.787436 kubelet[3181]: E1108 00:07:01.787350 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:07:01.787569 kubelet[3181]: E1108 00:07:01.787557 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.787620 kubelet[3181]: W1108 00:07:01.787610 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.787765 kubelet[3181]: E1108 00:07:01.787669 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:07:01.787866 kubelet[3181]: E1108 00:07:01.787855 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.787924 kubelet[3181]: W1108 00:07:01.787913 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.788013 kubelet[3181]: E1108 00:07:01.787999 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:07:01.788317 kubelet[3181]: E1108 00:07:01.788227 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.788317 kubelet[3181]: W1108 00:07:01.788238 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.788317 kubelet[3181]: E1108 00:07:01.788248 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:07:01.788575 kubelet[3181]: E1108 00:07:01.788484 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.788575 kubelet[3181]: W1108 00:07:01.788496 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.788575 kubelet[3181]: E1108 00:07:01.788507 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:07:01.808996 kubelet[3181]: E1108 00:07:01.808955 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.808996 kubelet[3181]: W1108 00:07:01.808984 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.808996 kubelet[3181]: E1108 00:07:01.809007 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:07:01.809305 kubelet[3181]: E1108 00:07:01.809277 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.809305 kubelet[3181]: W1108 00:07:01.809289 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.809357 kubelet[3181]: E1108 00:07:01.809348 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:07:01.809731 kubelet[3181]: E1108 00:07:01.809707 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.809731 kubelet[3181]: W1108 00:07:01.809727 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.809804 kubelet[3181]: E1108 00:07:01.809740 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:07:01.810074 kubelet[3181]: E1108 00:07:01.810057 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.810074 kubelet[3181]: W1108 00:07:01.810071 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.810161 kubelet[3181]: E1108 00:07:01.810082 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:07:01.810346 kubelet[3181]: E1108 00:07:01.810326 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.810346 kubelet[3181]: W1108 00:07:01.810341 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.810446 kubelet[3181]: E1108 00:07:01.810353 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:07:01.810565 kubelet[3181]: E1108 00:07:01.810551 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.810565 kubelet[3181]: W1108 00:07:01.810563 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.810623 kubelet[3181]: E1108 00:07:01.810573 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:07:01.811142 kubelet[3181]: E1108 00:07:01.811118 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.811142 kubelet[3181]: W1108 00:07:01.811138 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.811231 kubelet[3181]: E1108 00:07:01.811151 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:07:01.811424 kubelet[3181]: E1108 00:07:01.811404 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.811424 kubelet[3181]: W1108 00:07:01.811420 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.811575 kubelet[3181]: E1108 00:07:01.811446 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:07:01.811697 kubelet[3181]: E1108 00:07:01.811681 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.811697 kubelet[3181]: W1108 00:07:01.811693 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.811764 kubelet[3181]: E1108 00:07:01.811705 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:07:01.811898 kubelet[3181]: E1108 00:07:01.811884 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.811898 kubelet[3181]: W1108 00:07:01.811895 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.811985 kubelet[3181]: E1108 00:07:01.811904 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:07:01.812171 kubelet[3181]: E1108 00:07:01.812152 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.812226 kubelet[3181]: W1108 00:07:01.812165 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.812264 kubelet[3181]: E1108 00:07:01.812226 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:07:01.812432 kubelet[3181]: E1108 00:07:01.812418 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.812432 kubelet[3181]: W1108 00:07:01.812430 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.812491 kubelet[3181]: E1108 00:07:01.812440 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:07:01.812682 kubelet[3181]: E1108 00:07:01.812667 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.812682 kubelet[3181]: W1108 00:07:01.812679 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.812753 kubelet[3181]: E1108 00:07:01.812687 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:07:01.813065 kubelet[3181]: E1108 00:07:01.813047 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.813065 kubelet[3181]: W1108 00:07:01.813062 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.813138 kubelet[3181]: E1108 00:07:01.813072 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:07:01.813400 kubelet[3181]: E1108 00:07:01.813379 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.813400 kubelet[3181]: W1108 00:07:01.813394 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.813503 kubelet[3181]: E1108 00:07:01.813404 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:07:01.813696 kubelet[3181]: E1108 00:07:01.813677 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.813696 kubelet[3181]: W1108 00:07:01.813689 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.813756 kubelet[3181]: E1108 00:07:01.813700 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:07:01.814105 kubelet[3181]: E1108 00:07:01.814083 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.814105 kubelet[3181]: W1108 00:07:01.814099 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.814168 kubelet[3181]: E1108 00:07:01.814112 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:07:01.814433 kubelet[3181]: E1108 00:07:01.814412 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:07:01.814433 kubelet[3181]: W1108 00:07:01.814428 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:07:01.814504 kubelet[3181]: E1108 00:07:01.814438 3181 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:07:02.395368 containerd[1717]: time="2025-11-08T00:07:02.395277350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:07:02.399005 containerd[1717]: time="2025-11-08T00:07:02.398958583Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Nov 8 00:07:02.403772 containerd[1717]: time="2025-11-08T00:07:02.403717814Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:07:02.409127 containerd[1717]: time="2025-11-08T00:07:02.409051444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:07:02.411003 containerd[1717]: time="2025-11-08T00:07:02.410694161Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.315481246s" Nov 8 00:07:02.411003 containerd[1717]: time="2025-11-08T00:07:02.410832561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 8 00:07:02.420319 containerd[1717]: time="2025-11-08T00:07:02.420252744Z" level=info msg="CreateContainer within sandbox \"a945d1f8f04dd0176009e6e697792556dfd26f1c4d9ead482d81aef4ea7c226b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:07:02.463013 containerd[1717]: time="2025-11-08T00:07:02.462877465Z" level=info msg="CreateContainer within sandbox \"a945d1f8f04dd0176009e6e697792556dfd26f1c4d9ead482d81aef4ea7c226b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ae9b7480a1b418e925a864d2bd094e1b301c7671f8114306ef86cc0219748a92\"" Nov 8 00:07:02.463589 containerd[1717]: time="2025-11-08T00:07:02.463477584Z" level=info msg="StartContainer for \"ae9b7480a1b418e925a864d2bd094e1b301c7671f8114306ef86cc0219748a92\"" Nov 8 00:07:02.497245 systemd[1]: Started cri-containerd-ae9b7480a1b418e925a864d2bd094e1b301c7671f8114306ef86cc0219748a92.scope - libcontainer container ae9b7480a1b418e925a864d2bd094e1b301c7671f8114306ef86cc0219748a92. Nov 8 00:07:02.532043 containerd[1717]: time="2025-11-08T00:07:02.531804657Z" level=info msg="StartContainer for \"ae9b7480a1b418e925a864d2bd094e1b301c7671f8114306ef86cc0219748a92\" returns successfully" Nov 8 00:07:02.538781 systemd[1]: cri-containerd-ae9b7480a1b418e925a864d2bd094e1b301c7671f8114306ef86cc0219748a92.scope: Deactivated successfully. 
Nov 8 00:07:02.566976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae9b7480a1b418e925a864d2bd094e1b301c7671f8114306ef86cc0219748a92-rootfs.mount: Deactivated successfully. Nov 8 00:07:02.576906 kubelet[3181]: E1108 00:07:02.576856 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84" Nov 8 00:07:02.991498 kubelet[3181]: I1108 00:07:02.688154 3181 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:07:02.991498 kubelet[3181]: I1108 00:07:02.707124 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d5cd4b66d-696sm" podStartSLOduration=2.6863537539999998 podStartE2EDuration="4.707106135s" podCreationTimestamp="2025-11-08 00:06:58 +0000 UTC" firstStartedPulling="2025-11-08 00:06:59.071356419 +0000 UTC m=+28.615479337" lastFinishedPulling="2025-11-08 00:07:01.09210884 +0000 UTC m=+30.636231718" observedRunningTime="2025-11-08 00:07:01.704804627 +0000 UTC m=+31.248927545" watchObservedRunningTime="2025-11-08 00:07:02.707106135 +0000 UTC m=+32.251229053" Nov 8 00:07:03.550838 containerd[1717]: time="2025-11-08T00:07:03.550764718Z" level=info msg="shim disconnected" id=ae9b7480a1b418e925a864d2bd094e1b301c7671f8114306ef86cc0219748a92 namespace=k8s.io Nov 8 00:07:03.550838 containerd[1717]: time="2025-11-08T00:07:03.550821318Z" level=warning msg="cleaning up after shim disconnected" id=ae9b7480a1b418e925a864d2bd094e1b301c7671f8114306ef86cc0219748a92 namespace=k8s.io Nov 8 00:07:03.550838 containerd[1717]: time="2025-11-08T00:07:03.550829238Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:07:03.693624 containerd[1717]: time="2025-11-08T00:07:03.693358739Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:07:04.576685 kubelet[3181]: E1108 00:07:04.576323 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84" Nov 8 00:07:06.071982 containerd[1717]: time="2025-11-08T00:07:06.071434687Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:07:06.078085 containerd[1717]: time="2025-11-08T00:07:06.078045875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 8 00:07:06.087874 containerd[1717]: time="2025-11-08T00:07:06.087834217Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:07:06.094768 containerd[1717]: time="2025-11-08T00:07:06.094711765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:07:06.098896 containerd[1717]: time="2025-11-08T00:07:06.098824277Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.405420378s" Nov 8 00:07:06.098999 containerd[1717]: time="2025-11-08T00:07:06.098899637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference 
\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 8 00:07:06.109833 containerd[1717]: time="2025-11-08T00:07:06.109621897Z" level=info msg="CreateContainer within sandbox \"a945d1f8f04dd0176009e6e697792556dfd26f1c4d9ead482d81aef4ea7c226b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:07:06.147083 containerd[1717]: time="2025-11-08T00:07:06.147036069Z" level=info msg="CreateContainer within sandbox \"a945d1f8f04dd0176009e6e697792556dfd26f1c4d9ead482d81aef4ea7c226b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b21fafde664065cd0315ad4c6224e70db754e7cacbef6a01b9fea85b3ad8aa4a\"" Nov 8 00:07:06.147999 containerd[1717]: time="2025-11-08T00:07:06.147972468Z" level=info msg="StartContainer for \"b21fafde664065cd0315ad4c6224e70db754e7cacbef6a01b9fea85b3ad8aa4a\"" Nov 8 00:07:06.182147 systemd[1]: Started cri-containerd-b21fafde664065cd0315ad4c6224e70db754e7cacbef6a01b9fea85b3ad8aa4a.scope - libcontainer container b21fafde664065cd0315ad4c6224e70db754e7cacbef6a01b9fea85b3ad8aa4a. 
Nov 8 00:07:06.221876 containerd[1717]: time="2025-11-08T00:07:06.221831133Z" level=info msg="StartContainer for \"b21fafde664065cd0315ad4c6224e70db754e7cacbef6a01b9fea85b3ad8aa4a\" returns successfully" Nov 8 00:07:06.577266 kubelet[3181]: E1108 00:07:06.577204 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84" Nov 8 00:07:07.415799 containerd[1717]: time="2025-11-08T00:07:07.415752438Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:07:07.417600 systemd[1]: cri-containerd-b21fafde664065cd0315ad4c6224e70db754e7cacbef6a01b9fea85b3ad8aa4a.scope: Deactivated successfully. Nov 8 00:07:07.426390 kubelet[3181]: I1108 00:07:07.424796 3181 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 8 00:07:07.449237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b21fafde664065cd0315ad4c6224e70db754e7cacbef6a01b9fea85b3ad8aa4a-rootfs.mount: Deactivated successfully. Nov 8 00:07:08.294644 systemd[1]: Created slice kubepods-besteffort-pod41d4b3ef_ef6f_40aa_890a_556514760a53.slice - libcontainer container kubepods-besteffort-pod41d4b3ef_ef6f_40aa_890a_556514760a53.slice. 
Nov 8 00:07:08.304834 containerd[1717]: time="2025-11-08T00:07:08.304535979Z" level=info msg="shim disconnected" id=b21fafde664065cd0315ad4c6224e70db754e7cacbef6a01b9fea85b3ad8aa4a namespace=k8s.io
Nov 8 00:07:08.304834 containerd[1717]: time="2025-11-08T00:07:08.304830939Z" level=warning msg="cleaning up after shim disconnected" id=b21fafde664065cd0315ad4c6224e70db754e7cacbef6a01b9fea85b3ad8aa4a namespace=k8s.io
Nov 8 00:07:08.305026 containerd[1717]: time="2025-11-08T00:07:08.304843859Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:07:08.306895 systemd[1]: Created slice kubepods-besteffort-pod70822f24_312d_4073_b204_5c6b6a26eb84.slice - libcontainer container kubepods-besteffort-pod70822f24_312d_4073_b204_5c6b6a26eb84.slice.
Nov 8 00:07:08.319639 systemd[1]: Created slice kubepods-burstable-podbcb0f449_555a_4f1a_a70d_fed8686a31f6.slice - libcontainer container kubepods-burstable-podbcb0f449_555a_4f1a_a70d_fed8686a31f6.slice.
Nov 8 00:07:08.320817 containerd[1717]: time="2025-11-08T00:07:08.319909311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8jr45,Uid:70822f24-312d-4073-b204-5c6b6a26eb84,Namespace:calico-system,Attempt:0,}"
Nov 8 00:07:08.328423 systemd[1]: Created slice kubepods-besteffort-podaad8189b_54ce_422e_a68f_46b67abadfe8.slice - libcontainer container kubepods-besteffort-podaad8189b_54ce_422e_a68f_46b67abadfe8.slice.
Nov 8 00:07:08.341308 systemd[1]: Created slice kubepods-besteffort-pod1eab03fd_9695_41da_8445_49749eaa2864.slice - libcontainer container kubepods-besteffort-pod1eab03fd_9695_41da_8445_49749eaa2864.slice.
Nov 8 00:07:08.362451 systemd[1]: Created slice kubepods-burstable-podb089b199_ec3e_4716_9f14_e24ffa6fbbc3.slice - libcontainer container kubepods-burstable-podb089b199_ec3e_4716_9f14_e24ffa6fbbc3.slice.
Nov 8 00:07:08.365193 systemd[1]: Created slice kubepods-besteffort-pod8dcb36b7_7066_4355_aa27_d1ae27c36df5.slice - libcontainer container kubepods-besteffort-pod8dcb36b7_7066_4355_aa27_d1ae27c36df5.slice.
Nov 8 00:07:08.396822 systemd[1]: Created slice kubepods-besteffort-podfb0c2d00_8a9d_4218_9dbc_6f07fda31565.slice - libcontainer container kubepods-besteffort-podfb0c2d00_8a9d_4218_9dbc_6f07fda31565.slice.
Nov 8 00:07:08.406167 systemd[1]: Created slice kubepods-besteffort-pod18241a6e_9a2c_44bf_b122_af7f53eb5a3f.slice - libcontainer container kubepods-besteffort-pod18241a6e_9a2c_44bf_b122_af7f53eb5a3f.slice.
Nov 8 00:07:08.428771 kubelet[3181]: I1108 00:07:08.428725 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aad8189b-54ce-422e-a68f-46b67abadfe8-calico-apiserver-certs\") pod \"calico-apiserver-d7c9d7554-7phdh\" (UID: \"aad8189b-54ce-422e-a68f-46b67abadfe8\") " pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh"
Nov 8 00:07:08.429224 kubelet[3181]: I1108 00:07:08.428789 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b089b199-ec3e-4716-9f14-e24ffa6fbbc3-config-volume\") pod \"coredns-66bc5c9577-x5nnq\" (UID: \"b089b199-ec3e-4716-9f14-e24ffa6fbbc3\") " pod="kube-system/coredns-66bc5c9577-x5nnq"
Nov 8 00:07:08.429224 kubelet[3181]: I1108 00:07:08.428807 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjx4p\" (UniqueName: \"kubernetes.io/projected/b089b199-ec3e-4716-9f14-e24ffa6fbbc3-kube-api-access-kjx4p\") pod \"coredns-66bc5c9577-x5nnq\" (UID: \"b089b199-ec3e-4716-9f14-e24ffa6fbbc3\") " pod="kube-system/coredns-66bc5c9577-x5nnq"
Nov 8 00:07:08.429224 kubelet[3181]: I1108 00:07:08.428823 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8dcb36b7-7066-4355-aa27-d1ae27c36df5-calico-apiserver-certs\") pod \"calico-apiserver-847b7fbf74-mcdn7\" (UID: \"8dcb36b7-7066-4355-aa27-d1ae27c36df5\") " pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7"
Nov 8 00:07:08.429224 kubelet[3181]: I1108 00:07:08.428840 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtqmw\" (UniqueName: \"kubernetes.io/projected/aad8189b-54ce-422e-a68f-46b67abadfe8-kube-api-access-gtqmw\") pod \"calico-apiserver-d7c9d7554-7phdh\" (UID: \"aad8189b-54ce-422e-a68f-46b67abadfe8\") " pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh"
Nov 8 00:07:08.429224 kubelet[3181]: I1108 00:07:08.428857 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5lnz\" (UniqueName: \"kubernetes.io/projected/8dcb36b7-7066-4355-aa27-d1ae27c36df5-kube-api-access-d5lnz\") pod \"calico-apiserver-847b7fbf74-mcdn7\" (UID: \"8dcb36b7-7066-4355-aa27-d1ae27c36df5\") " pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7"
Nov 8 00:07:08.429401 kubelet[3181]: I1108 00:07:08.428873 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcb0f449-555a-4f1a-a70d-fed8686a31f6-config-volume\") pod \"coredns-66bc5c9577-dpwnt\" (UID: \"bcb0f449-555a-4f1a-a70d-fed8686a31f6\") " pod="kube-system/coredns-66bc5c9577-dpwnt"
Nov 8 00:07:08.429401 kubelet[3181]: I1108 00:07:08.428887 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1eab03fd-9695-41da-8445-49749eaa2864-calico-apiserver-certs\") pod \"calico-apiserver-d7c9d7554-7cc89\" (UID: \"1eab03fd-9695-41da-8445-49749eaa2864\") " pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89"
Nov 8 00:07:08.429401 kubelet[3181]: I1108 00:07:08.428906 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktxhj\" (UniqueName: \"kubernetes.io/projected/41d4b3ef-ef6f-40aa-890a-556514760a53-kube-api-access-ktxhj\") pod \"calico-kube-controllers-74489dd677-kvxft\" (UID: \"41d4b3ef-ef6f-40aa-890a-556514760a53\") " pod="calico-system/calico-kube-controllers-74489dd677-kvxft"
Nov 8 00:07:08.429401 kubelet[3181]: I1108 00:07:08.428920 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvndq\" (UniqueName: \"kubernetes.io/projected/bcb0f449-555a-4f1a-a70d-fed8686a31f6-kube-api-access-cvndq\") pod \"coredns-66bc5c9577-dpwnt\" (UID: \"bcb0f449-555a-4f1a-a70d-fed8686a31f6\") " pod="kube-system/coredns-66bc5c9577-dpwnt"
Nov 8 00:07:08.429401 kubelet[3181]: I1108 00:07:08.428956 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41d4b3ef-ef6f-40aa-890a-556514760a53-tigera-ca-bundle\") pod \"calico-kube-controllers-74489dd677-kvxft\" (UID: \"41d4b3ef-ef6f-40aa-890a-556514760a53\") " pod="calico-system/calico-kube-controllers-74489dd677-kvxft"
Nov 8 00:07:08.429558 kubelet[3181]: I1108 00:07:08.428973 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkvk5\" (UniqueName: \"kubernetes.io/projected/1eab03fd-9695-41da-8445-49749eaa2864-kube-api-access-lkvk5\") pod \"calico-apiserver-d7c9d7554-7cc89\" (UID: \"1eab03fd-9695-41da-8445-49749eaa2864\") " pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89"
Nov 8 00:07:08.455299 containerd[1717]: time="2025-11-08T00:07:08.455246185Z" level=error msg="Failed to destroy network for sandbox \"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:08.456190 containerd[1717]: time="2025-11-08T00:07:08.456002463Z" level=error msg="encountered an error cleaning up failed sandbox \"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:08.456190 containerd[1717]: time="2025-11-08T00:07:08.456073583Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8jr45,Uid:70822f24-312d-4073-b204-5c6b6a26eb84,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:08.457491 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38-shm.mount: Deactivated successfully.
Nov 8 00:07:08.458011 kubelet[3181]: E1108 00:07:08.457974 3181 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:08.458089 kubelet[3181]: E1108 00:07:08.458038 3181 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8jr45"
Nov 8 00:07:08.458089 kubelet[3181]: E1108 00:07:08.458057 3181 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8jr45"
Nov 8 00:07:08.458142 kubelet[3181]: E1108 00:07:08.458105 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8jr45_calico-system(70822f24-312d-4073-b204-5c6b6a26eb84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8jr45_calico-system(70822f24-312d-4073-b204-5c6b6a26eb84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84"
Nov 8 00:07:08.529377 kubelet[3181]: I1108 00:07:08.529332 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb0c2d00-8a9d-4218-9dbc-6f07fda31565-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-9wxxn\" (UID: \"fb0c2d00-8a9d-4218-9dbc-6f07fda31565\") " pod="calico-system/goldmane-7c778bb748-9wxxn"
Nov 8 00:07:08.529377 kubelet[3181]: I1108 00:07:08.529380 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/18241a6e-9a2c-44bf-b122-af7f53eb5a3f-whisker-backend-key-pair\") pod \"whisker-7bfb4c996-bh8m8\" (UID: \"18241a6e-9a2c-44bf-b122-af7f53eb5a3f\") " pod="calico-system/whisker-7bfb4c996-bh8m8"
Nov 8 00:07:08.529551 kubelet[3181]: I1108 00:07:08.529441 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb0c2d00-8a9d-4218-9dbc-6f07fda31565-config\") pod \"goldmane-7c778bb748-9wxxn\" (UID: \"fb0c2d00-8a9d-4218-9dbc-6f07fda31565\") " pod="calico-system/goldmane-7c778bb748-9wxxn"
Nov 8 00:07:08.529551 kubelet[3181]: I1108 00:07:08.529480 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl695\" (UniqueName: \"kubernetes.io/projected/fb0c2d00-8a9d-4218-9dbc-6f07fda31565-kube-api-access-hl695\") pod \"goldmane-7c778bb748-9wxxn\" (UID: \"fb0c2d00-8a9d-4218-9dbc-6f07fda31565\") " pod="calico-system/goldmane-7c778bb748-9wxxn"
Nov 8 00:07:08.529551 kubelet[3181]: I1108 00:07:08.529495 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnkp6\" (UniqueName: \"kubernetes.io/projected/18241a6e-9a2c-44bf-b122-af7f53eb5a3f-kube-api-access-bnkp6\") pod \"whisker-7bfb4c996-bh8m8\" (UID: \"18241a6e-9a2c-44bf-b122-af7f53eb5a3f\") " pod="calico-system/whisker-7bfb4c996-bh8m8"
Nov 8 00:07:08.529551 kubelet[3181]: I1108 00:07:08.529548 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fb0c2d00-8a9d-4218-9dbc-6f07fda31565-goldmane-key-pair\") pod \"goldmane-7c778bb748-9wxxn\" (UID: \"fb0c2d00-8a9d-4218-9dbc-6f07fda31565\") " pod="calico-system/goldmane-7c778bb748-9wxxn"
Nov 8 00:07:08.529649 kubelet[3181]: I1108 00:07:08.529562 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18241a6e-9a2c-44bf-b122-af7f53eb5a3f-whisker-ca-bundle\") pod \"whisker-7bfb4c996-bh8m8\" (UID: \"18241a6e-9a2c-44bf-b122-af7f53eb5a3f\") " pod="calico-system/whisker-7bfb4c996-bh8m8"
Nov 8 00:07:08.607050 containerd[1717]: time="2025-11-08T00:07:08.606973788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74489dd677-kvxft,Uid:41d4b3ef-ef6f-40aa-890a-556514760a53,Namespace:calico-system,Attempt:0,}"
Nov 8 00:07:08.637141 containerd[1717]: time="2025-11-08T00:07:08.637004814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dpwnt,Uid:bcb0f449-555a-4f1a-a70d-fed8686a31f6,Namespace:kube-system,Attempt:0,}"
Nov 8 00:07:08.643472 containerd[1717]: time="2025-11-08T00:07:08.643285762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d7c9d7554-7phdh,Uid:aad8189b-54ce-422e-a68f-46b67abadfe8,Namespace:calico-apiserver,Attempt:0,}"
Nov 8 00:07:08.661825 containerd[1717]: time="2025-11-08T00:07:08.661575049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d7c9d7554-7cc89,Uid:1eab03fd-9695-41da-8445-49749eaa2864,Namespace:calico-apiserver,Attempt:0,}"
Nov 8 00:07:08.675450 containerd[1717]: time="2025-11-08T00:07:08.675408024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x5nnq,Uid:b089b199-ec3e-4716-9f14-e24ffa6fbbc3,Namespace:kube-system,Attempt:0,}"
Nov 8 00:07:08.689393 containerd[1717]: time="2025-11-08T00:07:08.689120399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847b7fbf74-mcdn7,Uid:8dcb36b7-7066-4355-aa27-d1ae27c36df5,Namespace:calico-apiserver,Attempt:0,}"
Nov 8 00:07:08.710165 containerd[1717]: time="2025-11-08T00:07:08.710011241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-9wxxn,Uid:fb0c2d00-8a9d-4218-9dbc-6f07fda31565,Namespace:calico-system,Attempt:0,}"
Nov 8 00:07:08.711332 containerd[1717]: time="2025-11-08T00:07:08.711275198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Nov 8 00:07:08.711805 kubelet[3181]: I1108 00:07:08.711706 3181 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38"
Nov 8 00:07:08.713964 containerd[1717]: time="2025-11-08T00:07:08.713843434Z" level=info msg="StopPodSandbox for \"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38\""
Nov 8 00:07:08.714292 containerd[1717]: time="2025-11-08T00:07:08.714266033Z" level=info msg="Ensure that sandbox 9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38 in task-service has been cleanup successfully"
Nov 8 00:07:08.716555 containerd[1717]: time="2025-11-08T00:07:08.716394949Z" level=error msg="Failed to destroy network for sandbox \"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:08.718741 containerd[1717]: time="2025-11-08T00:07:08.718583745Z" level=error msg="encountered an error cleaning up failed sandbox \"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:08.718741 containerd[1717]: time="2025-11-08T00:07:08.718657025Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74489dd677-kvxft,Uid:41d4b3ef-ef6f-40aa-890a-556514760a53,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:08.719185 kubelet[3181]: E1108 00:07:08.718922 3181 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:08.719185 kubelet[3181]: E1108 00:07:08.719105 3181 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74489dd677-kvxft"
Nov 8 00:07:08.719185 kubelet[3181]: E1108 00:07:08.719127 3181 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74489dd677-kvxft"
Nov 8 00:07:08.719311 kubelet[3181]: E1108 00:07:08.719176 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74489dd677-kvxft_calico-system(41d4b3ef-ef6f-40aa-890a-556514760a53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74489dd677-kvxft_calico-system(41d4b3ef-ef6f-40aa-890a-556514760a53)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74489dd677-kvxft" podUID="41d4b3ef-ef6f-40aa-890a-556514760a53"
Nov 8 00:07:08.721447 containerd[1717]: time="2025-11-08T00:07:08.721188980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bfb4c996-bh8m8,Uid:18241a6e-9a2c-44bf-b122-af7f53eb5a3f,Namespace:calico-system,Attempt:0,}"
Nov 8 00:07:08.770982 containerd[1717]: time="2025-11-08T00:07:08.770482811Z" level=error msg="StopPodSandbox for \"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38\" failed" error="failed to destroy network for sandbox \"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:08.772866 kubelet[3181]: E1108 00:07:08.772650 3181 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38"
Nov 8 00:07:08.772866 kubelet[3181]: E1108 00:07:08.772731 3181 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38"}
Nov 8 00:07:08.772866 kubelet[3181]: E1108 00:07:08.772797 3181 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"70822f24-312d-4073-b204-5c6b6a26eb84\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Nov 8 00:07:08.772866 kubelet[3181]: E1108 00:07:08.772821 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"70822f24-312d-4073-b204-5c6b6a26eb84\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84"
Nov 8 00:07:08.820104 containerd[1717]: time="2025-11-08T00:07:08.820051360Z" level=error msg="Failed to destroy network for sandbox \"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:08.820835 containerd[1717]: time="2025-11-08T00:07:08.820691759Z" level=error msg="encountered an error cleaning up failed sandbox \"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:08.820835 containerd[1717]: time="2025-11-08T00:07:08.820748079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dpwnt,Uid:bcb0f449-555a-4f1a-a70d-fed8686a31f6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:08.821637 kubelet[3181]: E1108 00:07:08.821018 3181 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:08.821637 kubelet[3181]: E1108 00:07:08.821077 3181 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dpwnt"
Nov 8 00:07:08.821637 kubelet[3181]: E1108 00:07:08.821096 3181 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dpwnt"
Nov 8 00:07:08.821736 kubelet[3181]: E1108 00:07:08.821144 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-dpwnt_kube-system(bcb0f449-555a-4f1a-a70d-fed8686a31f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-dpwnt_kube-system(bcb0f449-555a-4f1a-a70d-fed8686a31f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-dpwnt" podUID="bcb0f449-555a-4f1a-a70d-fed8686a31f6"
Nov 8 00:07:08.864700 containerd[1717]: time="2025-11-08T00:07:08.864572399Z" level=error msg="Failed to destroy network for sandbox \"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:08.865785 containerd[1717]: time="2025-11-08T00:07:08.865733117Z" level=error msg="encountered an error cleaning up failed sandbox \"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:08.865875 containerd[1717]: time="2025-11-08T00:07:08.865802317Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d7c9d7554-7phdh,Uid:aad8189b-54ce-422e-a68f-46b67abadfe8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:08.866070 kubelet[3181]: E1108 00:07:08.866031 3181 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:08.866458 kubelet[3181]: E1108 00:07:08.866090 3181 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh"
Nov 8 00:07:08.866458 kubelet[3181]: E1108 00:07:08.866114 3181 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh"
Nov 8 00:07:08.866458 kubelet[3181]: E1108 00:07:08.866164 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d7c9d7554-7phdh_calico-apiserver(aad8189b-54ce-422e-a68f-46b67abadfe8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d7c9d7554-7phdh_calico-apiserver(aad8189b-54ce-422e-a68f-46b67abadfe8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8"
Nov 8 00:07:08.971742 containerd[1717]: time="2025-11-08T00:07:08.971478485Z" level=error msg="Failed to destroy network for sandbox \"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:08.973546 containerd[1717]: time="2025-11-08T00:07:08.973290601Z" level=error msg="encountered an error cleaning up failed sandbox \"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:08.973895 containerd[1717]: time="2025-11-08T00:07:08.973525961Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d7c9d7554-7cc89,Uid:1eab03fd-9695-41da-8445-49749eaa2864,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:08.975302 kubelet[3181]: E1108 00:07:08.974322 3181 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:08.975302 kubelet[3181]: E1108 00:07:08.974380 3181 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89"
Nov 8 00:07:08.975302 kubelet[3181]: E1108 00:07:08.974405 3181 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89"
Nov 8 00:07:08.975444 kubelet[3181]: E1108 00:07:08.974453 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d7c9d7554-7cc89_calico-apiserver(1eab03fd-9695-41da-8445-49749eaa2864)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d7c9d7554-7cc89_calico-apiserver(1eab03fd-9695-41da-8445-49749eaa2864)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864"
Nov 8 00:07:09.005582 containerd[1717]: time="2025-11-08T00:07:09.005524462Z" level=error msg="Failed to destroy network for sandbox \"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:09.006351 containerd[1717]: time="2025-11-08T00:07:09.006252021Z" level=error msg="encountered an error cleaning up failed sandbox \"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:09.006578 containerd[1717]: time="2025-11-08T00:07:09.006334981Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847b7fbf74-mcdn7,Uid:8dcb36b7-7066-4355-aa27-d1ae27c36df5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:09.007265 kubelet[3181]: E1108 00:07:09.006865 3181 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:09.007265 kubelet[3181]: E1108 00:07:09.006928 3181 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7"
Nov 8 00:07:09.007265 kubelet[3181]: E1108 00:07:09.006964 3181 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7"
Nov 8 00:07:09.007531 kubelet[3181]: E1108 00:07:09.007017 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-847b7fbf74-mcdn7_calico-apiserver(8dcb36b7-7066-4355-aa27-d1ae27c36df5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-847b7fbf74-mcdn7_calico-apiserver(8dcb36b7-7066-4355-aa27-d1ae27c36df5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7" podUID="8dcb36b7-7066-4355-aa27-d1ae27c36df5"
Nov 8 00:07:09.011400 containerd[1717]: time="2025-11-08T00:07:09.011307932Z" level=error msg="Failed to destroy network for sandbox \"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:09.011663 containerd[1717]: time="2025-11-08T00:07:09.011636811Z" level=error msg="encountered an error cleaning up failed sandbox \"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:07:09.011733 containerd[1717]: time="2025-11-08T00:07:09.011693531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x5nnq,Uid:b089b199-ec3e-4716-9f14-e24ffa6fbbc3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename:
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:07:09.012966 kubelet[3181]: E1108 00:07:09.012113 3181 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:07:09.012966 kubelet[3181]: E1108 00:07:09.012167 3181 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-x5nnq" Nov 8 00:07:09.012966 kubelet[3181]: E1108 00:07:09.012193 3181 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-x5nnq" Nov 8 00:07:09.013123 kubelet[3181]: E1108 00:07:09.012243 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-x5nnq_kube-system(b089b199-ec3e-4716-9f14-e24ffa6fbbc3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-x5nnq_kube-system(b089b199-ec3e-4716-9f14-e24ffa6fbbc3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-x5nnq" podUID="b089b199-ec3e-4716-9f14-e24ffa6fbbc3" Nov 8 00:07:09.018272 containerd[1717]: time="2025-11-08T00:07:09.018218959Z" level=error msg="Failed to destroy network for sandbox \"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:07:09.019496 containerd[1717]: time="2025-11-08T00:07:09.019210518Z" level=error msg="encountered an error cleaning up failed sandbox \"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:07:09.019737 containerd[1717]: time="2025-11-08T00:07:09.019694957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-9wxxn,Uid:fb0c2d00-8a9d-4218-9dbc-6f07fda31565,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:07:09.020074 kubelet[3181]: E1108 00:07:09.020035 3181 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:07:09.020362 kubelet[3181]: E1108 00:07:09.020244 3181 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-9wxxn" Nov 8 00:07:09.020362 kubelet[3181]: E1108 00:07:09.020270 3181 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-9wxxn" Nov 8 00:07:09.020497 kubelet[3181]: E1108 00:07:09.020339 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-9wxxn_calico-system(fb0c2d00-8a9d-4218-9dbc-6f07fda31565)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-9wxxn_calico-system(fb0c2d00-8a9d-4218-9dbc-6f07fda31565)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-9wxxn" podUID="fb0c2d00-8a9d-4218-9dbc-6f07fda31565" Nov 8 00:07:09.025204 
containerd[1717]: time="2025-11-08T00:07:09.025083147Z" level=error msg="Failed to destroy network for sandbox \"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:07:09.025521 containerd[1717]: time="2025-11-08T00:07:09.025484666Z" level=error msg="encountered an error cleaning up failed sandbox \"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:07:09.025633 containerd[1717]: time="2025-11-08T00:07:09.025611266Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bfb4c996-bh8m8,Uid:18241a6e-9a2c-44bf-b122-af7f53eb5a3f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:07:09.026076 kubelet[3181]: E1108 00:07:09.025850 3181 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:07:09.026076 kubelet[3181]: E1108 00:07:09.025894 3181 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bfb4c996-bh8m8" Nov 8 00:07:09.026076 kubelet[3181]: E1108 00:07:09.025911 3181 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bfb4c996-bh8m8" Nov 8 00:07:09.026196 kubelet[3181]: E1108 00:07:09.025969 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7bfb4c996-bh8m8_calico-system(18241a6e-9a2c-44bf-b122-af7f53eb5a3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7bfb4c996-bh8m8_calico-system(18241a6e-9a2c-44bf-b122-af7f53eb5a3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bfb4c996-bh8m8" podUID="18241a6e-9a2c-44bf-b122-af7f53eb5a3f" Nov 8 00:07:09.714918 kubelet[3181]: I1108 00:07:09.714884 3181 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" Nov 8 00:07:09.716030 containerd[1717]: time="2025-11-08T00:07:09.715581289Z" level=info msg="StopPodSandbox for \"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a\"" Nov 8 
00:07:09.716030 containerd[1717]: time="2025-11-08T00:07:09.715750089Z" level=info msg="Ensure that sandbox 49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a in task-service has been cleanup successfully" Nov 8 00:07:09.718000 kubelet[3181]: I1108 00:07:09.717964 3181 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" Nov 8 00:07:09.719139 containerd[1717]: time="2025-11-08T00:07:09.718667124Z" level=info msg="StopPodSandbox for \"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9\"" Nov 8 00:07:09.719139 containerd[1717]: time="2025-11-08T00:07:09.718832843Z" level=info msg="Ensure that sandbox 03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9 in task-service has been cleanup successfully" Nov 8 00:07:09.729107 kubelet[3181]: I1108 00:07:09.729074 3181 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" Nov 8 00:07:09.730651 containerd[1717]: time="2025-11-08T00:07:09.730531822Z" level=info msg="StopPodSandbox for \"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe\"" Nov 8 00:07:09.732507 containerd[1717]: time="2025-11-08T00:07:09.732473298Z" level=info msg="Ensure that sandbox 61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe in task-service has been cleanup successfully" Nov 8 00:07:09.733045 kubelet[3181]: I1108 00:07:09.732884 3181 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" Nov 8 00:07:09.737995 containerd[1717]: time="2025-11-08T00:07:09.735072654Z" level=info msg="StopPodSandbox for \"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985\"" Nov 8 00:07:09.737995 containerd[1717]: time="2025-11-08T00:07:09.736831090Z" level=info msg="Ensure that sandbox 
ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985 in task-service has been cleanup successfully" Nov 8 00:07:09.739901 kubelet[3181]: I1108 00:07:09.739875 3181 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" Nov 8 00:07:09.742305 containerd[1717]: time="2025-11-08T00:07:09.741164083Z" level=info msg="StopPodSandbox for \"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26\"" Nov 8 00:07:09.742857 containerd[1717]: time="2025-11-08T00:07:09.742403720Z" level=info msg="Ensure that sandbox b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26 in task-service has been cleanup successfully" Nov 8 00:07:09.745324 kubelet[3181]: I1108 00:07:09.745260 3181 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" Nov 8 00:07:09.749491 containerd[1717]: time="2025-11-08T00:07:09.749435467Z" level=info msg="StopPodSandbox for \"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb\"" Nov 8 00:07:09.751781 containerd[1717]: time="2025-11-08T00:07:09.751722863Z" level=info msg="Ensure that sandbox fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb in task-service has been cleanup successfully" Nov 8 00:07:09.756028 kubelet[3181]: I1108 00:07:09.755999 3181 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" Nov 8 00:07:09.757249 containerd[1717]: time="2025-11-08T00:07:09.757193853Z" level=info msg="StopPodSandbox for \"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d\"" Nov 8 00:07:09.757494 containerd[1717]: time="2025-11-08T00:07:09.757364773Z" level=info msg="Ensure that sandbox 8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d in task-service has been cleanup successfully" Nov 8 
00:07:09.759694 kubelet[3181]: I1108 00:07:09.759534 3181 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" Nov 8 00:07:09.761471 containerd[1717]: time="2025-11-08T00:07:09.761332166Z" level=info msg="StopPodSandbox for \"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6\"" Nov 8 00:07:09.762228 containerd[1717]: time="2025-11-08T00:07:09.762189724Z" level=info msg="Ensure that sandbox 45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6 in task-service has been cleanup successfully" Nov 8 00:07:09.849113 containerd[1717]: time="2025-11-08T00:07:09.849062686Z" level=error msg="StopPodSandbox for \"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a\" failed" error="failed to destroy network for sandbox \"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:07:09.849462 kubelet[3181]: E1108 00:07:09.849419 3181 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" Nov 8 00:07:09.849526 kubelet[3181]: E1108 00:07:09.849472 3181 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a"} Nov 8 00:07:09.849526 kubelet[3181]: E1108 00:07:09.849504 3181 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"b089b199-ec3e-4716-9f14-e24ffa6fbbc3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:07:09.849604 kubelet[3181]: E1108 00:07:09.849530 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b089b199-ec3e-4716-9f14-e24ffa6fbbc3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-x5nnq" podUID="b089b199-ec3e-4716-9f14-e24ffa6fbbc3" Nov 8 00:07:09.855465 containerd[1717]: time="2025-11-08T00:07:09.855202235Z" level=error msg="StopPodSandbox for \"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985\" failed" error="failed to destroy network for sandbox \"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:07:09.855582 kubelet[3181]: E1108 00:07:09.855428 3181 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" Nov 8 00:07:09.855582 kubelet[3181]: E1108 00:07:09.855471 3181 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985"} Nov 8 00:07:09.855582 kubelet[3181]: E1108 00:07:09.855501 3181 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"18241a6e-9a2c-44bf-b122-af7f53eb5a3f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:07:09.855582 kubelet[3181]: E1108 00:07:09.855526 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"18241a6e-9a2c-44bf-b122-af7f53eb5a3f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bfb4c996-bh8m8" podUID="18241a6e-9a2c-44bf-b122-af7f53eb5a3f" Nov 8 00:07:09.868057 containerd[1717]: time="2025-11-08T00:07:09.868008091Z" level=error msg="StopPodSandbox for \"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26\" failed" error="failed to destroy network for sandbox \"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 
8 00:07:09.868299 kubelet[3181]: E1108 00:07:09.868262 3181 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" Nov 8 00:07:09.868355 kubelet[3181]: E1108 00:07:09.868314 3181 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26"} Nov 8 00:07:09.868381 kubelet[3181]: E1108 00:07:09.868354 3181 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fb0c2d00-8a9d-4218-9dbc-6f07fda31565\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:07:09.868435 kubelet[3181]: E1108 00:07:09.868378 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fb0c2d00-8a9d-4218-9dbc-6f07fda31565\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-9wxxn" podUID="fb0c2d00-8a9d-4218-9dbc-6f07fda31565" Nov 8 00:07:09.869553 containerd[1717]: 
time="2025-11-08T00:07:09.869481049Z" level=error msg="StopPodSandbox for \"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d\" failed" error="failed to destroy network for sandbox \"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:07:09.871045 kubelet[3181]: E1108 00:07:09.870462 3181 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" Nov 8 00:07:09.871045 kubelet[3181]: E1108 00:07:09.870505 3181 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d"} Nov 8 00:07:09.871045 kubelet[3181]: E1108 00:07:09.870531 3181 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1eab03fd-9695-41da-8445-49749eaa2864\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:07:09.871045 kubelet[3181]: E1108 00:07:09.870551 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1eab03fd-9695-41da-8445-49749eaa2864\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864" Nov 8 00:07:09.872841 containerd[1717]: time="2025-11-08T00:07:09.872797643Z" level=error msg="StopPodSandbox for \"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9\" failed" error="failed to destroy network for sandbox \"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:07:09.873044 kubelet[3181]: E1108 00:07:09.873014 3181 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" Nov 8 00:07:09.873111 kubelet[3181]: E1108 00:07:09.873093 3181 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9"} Nov 8 00:07:09.873143 kubelet[3181]: E1108 00:07:09.873125 3181 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aad8189b-54ce-422e-a68f-46b67abadfe8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:07:09.873377 kubelet[3181]: E1108 00:07:09.873346 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aad8189b-54ce-422e-a68f-46b67abadfe8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8" Nov 8 00:07:09.879451 containerd[1717]: time="2025-11-08T00:07:09.879382271Z" level=error msg="StopPodSandbox for \"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe\" failed" error="failed to destroy network for sandbox \"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:07:09.879679 kubelet[3181]: E1108 00:07:09.879586 3181 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" Nov 8 00:07:09.879679 kubelet[3181]: E1108 00:07:09.879632 3181 
kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe"} Nov 8 00:07:09.879679 kubelet[3181]: E1108 00:07:09.879660 3181 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bcb0f449-555a-4f1a-a70d-fed8686a31f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:07:09.879942 kubelet[3181]: E1108 00:07:09.879683 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bcb0f449-555a-4f1a-a70d-fed8686a31f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-dpwnt" podUID="bcb0f449-555a-4f1a-a70d-fed8686a31f6" Nov 8 00:07:09.883444 containerd[1717]: time="2025-11-08T00:07:09.883387223Z" level=error msg="StopPodSandbox for \"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6\" failed" error="failed to destroy network for sandbox \"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:07:09.883739 containerd[1717]: time="2025-11-08T00:07:09.883715303Z" level=error msg="StopPodSandbox for 
\"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb\" failed" error="failed to destroy network for sandbox \"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:07:09.883944 kubelet[3181]: E1108 00:07:09.883825 3181 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" Nov 8 00:07:09.883944 kubelet[3181]: E1108 00:07:09.883873 3181 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb"} Nov 8 00:07:09.883944 kubelet[3181]: E1108 00:07:09.883902 3181 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8dcb36b7-7066-4355-aa27-d1ae27c36df5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:07:09.883944 kubelet[3181]: E1108 00:07:09.883929 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8dcb36b7-7066-4355-aa27-d1ae27c36df5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7" podUID="8dcb36b7-7066-4355-aa27-d1ae27c36df5" Nov 8 00:07:09.884245 kubelet[3181]: E1108 00:07:09.883996 3181 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" Nov 8 00:07:09.884245 kubelet[3181]: E1108 00:07:09.884017 3181 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6"} Nov 8 00:07:09.884245 kubelet[3181]: E1108 00:07:09.884034 3181 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"41d4b3ef-ef6f-40aa-890a-556514760a53\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:07:09.884245 kubelet[3181]: E1108 00:07:09.884075 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"41d4b3ef-ef6f-40aa-890a-556514760a53\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74489dd677-kvxft" podUID="41d4b3ef-ef6f-40aa-890a-556514760a53" Nov 8 00:07:13.053399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1161057675.mount: Deactivated successfully. Nov 8 00:07:13.952599 containerd[1717]: time="2025-11-08T00:07:13.952541667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:07:13.958018 containerd[1717]: time="2025-11-08T00:07:13.957784338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 8 00:07:13.968022 containerd[1717]: time="2025-11-08T00:07:13.967963920Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:07:14.038130 containerd[1717]: time="2025-11-08T00:07:14.038022755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:07:14.038974 containerd[1717]: time="2025-11-08T00:07:14.038697234Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 5.327382276s" Nov 8 00:07:14.038974 containerd[1717]: time="2025-11-08T00:07:14.038741554Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 8 00:07:14.704467 containerd[1717]: time="2025-11-08T00:07:14.704418808Z" level=info msg="CreateContainer within sandbox \"a945d1f8f04dd0176009e6e697792556dfd26f1c4d9ead482d81aef4ea7c226b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:07:14.757889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1679711266.mount: Deactivated successfully. Nov 8 00:07:14.777296 containerd[1717]: time="2025-11-08T00:07:14.777247759Z" level=info msg="CreateContainer within sandbox \"a945d1f8f04dd0176009e6e697792556dfd26f1c4d9ead482d81aef4ea7c226b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0713b594ed0af21be3dba9a1db02d536f1bd2b88760bc001a82fabaa4b911f04\"" Nov 8 00:07:14.778105 containerd[1717]: time="2025-11-08T00:07:14.778079917Z" level=info msg="StartContainer for \"0713b594ed0af21be3dba9a1db02d536f1bd2b88760bc001a82fabaa4b911f04\"" Nov 8 00:07:14.811657 systemd[1]: Started cri-containerd-0713b594ed0af21be3dba9a1db02d536f1bd2b88760bc001a82fabaa4b911f04.scope - libcontainer container 0713b594ed0af21be3dba9a1db02d536f1bd2b88760bc001a82fabaa4b911f04. 
Nov 8 00:07:14.863811 containerd[1717]: time="2025-11-08T00:07:14.863763485Z" level=info msg="StartContainer for \"0713b594ed0af21be3dba9a1db02d536f1bd2b88760bc001a82fabaa4b911f04\" returns successfully" Nov 8 00:07:15.805454 kubelet[3181]: I1108 00:07:15.805383 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rmn8t" podStartSLOduration=2.969967189 podStartE2EDuration="17.805366328s" podCreationTimestamp="2025-11-08 00:06:58 +0000 UTC" firstStartedPulling="2025-11-08 00:06:59.204264013 +0000 UTC m=+28.748386931" lastFinishedPulling="2025-11-08 00:07:14.039663152 +0000 UTC m=+43.583786070" observedRunningTime="2025-11-08 00:07:15.803099292 +0000 UTC m=+45.347222170" watchObservedRunningTime="2025-11-08 00:07:15.805366328 +0000 UTC m=+45.349489246" Nov 8 00:07:16.073220 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:07:16.073326 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 8 00:07:16.181526 containerd[1717]: time="2025-11-08T00:07:16.181483139Z" level=info msg="StopPodSandbox for \"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985\"" Nov 8 00:07:16.309385 containerd[1717]: 2025-11-08 00:07:16.265 [INFO][4422] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" Nov 8 00:07:16.309385 containerd[1717]: 2025-11-08 00:07:16.266 [INFO][4422] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" iface="eth0" netns="/var/run/netns/cni-aa9fdf0a-b8a0-5ab3-5d5f-80706c5e53b3" Nov 8 00:07:16.309385 containerd[1717]: 2025-11-08 00:07:16.266 [INFO][4422] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" iface="eth0" netns="/var/run/netns/cni-aa9fdf0a-b8a0-5ab3-5d5f-80706c5e53b3" Nov 8 00:07:16.309385 containerd[1717]: 2025-11-08 00:07:16.266 [INFO][4422] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" iface="eth0" netns="/var/run/netns/cni-aa9fdf0a-b8a0-5ab3-5d5f-80706c5e53b3" Nov 8 00:07:16.309385 containerd[1717]: 2025-11-08 00:07:16.266 [INFO][4422] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" Nov 8 00:07:16.309385 containerd[1717]: 2025-11-08 00:07:16.266 [INFO][4422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" Nov 8 00:07:16.309385 containerd[1717]: 2025-11-08 00:07:16.292 [INFO][4436] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" HandleID="k8s-pod-network.ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" Workload="ci--4081.3.6--n--5561f33395-k8s-whisker--7bfb4c996--bh8m8-eth0" Nov 8 00:07:16.309385 containerd[1717]: 2025-11-08 00:07:16.293 [INFO][4436] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:16.309385 containerd[1717]: 2025-11-08 00:07:16.294 [INFO][4436] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:16.309385 containerd[1717]: 2025-11-08 00:07:16.302 [WARNING][4436] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" HandleID="k8s-pod-network.ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" Workload="ci--4081.3.6--n--5561f33395-k8s-whisker--7bfb4c996--bh8m8-eth0" Nov 8 00:07:16.309385 containerd[1717]: 2025-11-08 00:07:16.303 [INFO][4436] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" HandleID="k8s-pod-network.ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" Workload="ci--4081.3.6--n--5561f33395-k8s-whisker--7bfb4c996--bh8m8-eth0" Nov 8 00:07:16.309385 containerd[1717]: 2025-11-08 00:07:16.304 [INFO][4436] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:16.309385 containerd[1717]: 2025-11-08 00:07:16.307 [INFO][4422] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" Nov 8 00:07:16.309385 containerd[1717]: time="2025-11-08T00:07:16.309215351Z" level=info msg="TearDown network for sandbox \"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985\" successfully" Nov 8 00:07:16.309385 containerd[1717]: time="2025-11-08T00:07:16.309250071Z" level=info msg="StopPodSandbox for \"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985\" returns successfully" Nov 8 00:07:16.312926 systemd[1]: run-netns-cni\x2daa9fdf0a\x2db8a0\x2d5ab3\x2d5d5f\x2d80706c5e53b3.mount: Deactivated successfully. 
Nov 8 00:07:16.377871 kubelet[3181]: I1108 00:07:16.377285 3181 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnkp6\" (UniqueName: \"kubernetes.io/projected/18241a6e-9a2c-44bf-b122-af7f53eb5a3f-kube-api-access-bnkp6\") pod \"18241a6e-9a2c-44bf-b122-af7f53eb5a3f\" (UID: \"18241a6e-9a2c-44bf-b122-af7f53eb5a3f\") " Nov 8 00:07:16.378300 kubelet[3181]: I1108 00:07:16.378132 3181 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18241a6e-9a2c-44bf-b122-af7f53eb5a3f-whisker-ca-bundle\") pod \"18241a6e-9a2c-44bf-b122-af7f53eb5a3f\" (UID: \"18241a6e-9a2c-44bf-b122-af7f53eb5a3f\") " Nov 8 00:07:16.378300 kubelet[3181]: I1108 00:07:16.378184 3181 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/18241a6e-9a2c-44bf-b122-af7f53eb5a3f-whisker-backend-key-pair\") pod \"18241a6e-9a2c-44bf-b122-af7f53eb5a3f\" (UID: \"18241a6e-9a2c-44bf-b122-af7f53eb5a3f\") " Nov 8 00:07:16.379963 kubelet[3181]: I1108 00:07:16.379248 3181 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18241a6e-9a2c-44bf-b122-af7f53eb5a3f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "18241a6e-9a2c-44bf-b122-af7f53eb5a3f" (UID: "18241a6e-9a2c-44bf-b122-af7f53eb5a3f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:07:16.382317 kubelet[3181]: I1108 00:07:16.382274 3181 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18241a6e-9a2c-44bf-b122-af7f53eb5a3f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "18241a6e-9a2c-44bf-b122-af7f53eb5a3f" (UID: "18241a6e-9a2c-44bf-b122-af7f53eb5a3f"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:07:16.382456 kubelet[3181]: I1108 00:07:16.382434 3181 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18241a6e-9a2c-44bf-b122-af7f53eb5a3f-kube-api-access-bnkp6" (OuterVolumeSpecName: "kube-api-access-bnkp6") pod "18241a6e-9a2c-44bf-b122-af7f53eb5a3f" (UID: "18241a6e-9a2c-44bf-b122-af7f53eb5a3f"). InnerVolumeSpecName "kube-api-access-bnkp6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:07:16.384675 systemd[1]: var-lib-kubelet-pods-18241a6e\x2d9a2c\x2d44bf\x2db122\x2daf7f53eb5a3f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbnkp6.mount: Deactivated successfully. Nov 8 00:07:16.391809 systemd[1]: var-lib-kubelet-pods-18241a6e\x2d9a2c\x2d44bf\x2db122\x2daf7f53eb5a3f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 8 00:07:16.478595 kubelet[3181]: I1108 00:07:16.478433 3181 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/18241a6e-9a2c-44bf-b122-af7f53eb5a3f-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-5561f33395\" DevicePath \"\"" Nov 8 00:07:16.478595 kubelet[3181]: I1108 00:07:16.478492 3181 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bnkp6\" (UniqueName: \"kubernetes.io/projected/18241a6e-9a2c-44bf-b122-af7f53eb5a3f-kube-api-access-bnkp6\") on node \"ci-4081.3.6-n-5561f33395\" DevicePath \"\"" Nov 8 00:07:16.478595 kubelet[3181]: I1108 00:07:16.478503 3181 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18241a6e-9a2c-44bf-b122-af7f53eb5a3f-whisker-ca-bundle\") on node \"ci-4081.3.6-n-5561f33395\" DevicePath \"\"" Nov 8 00:07:16.584351 systemd[1]: Removed slice kubepods-besteffort-pod18241a6e_9a2c_44bf_b122_af7f53eb5a3f.slice - libcontainer container 
kubepods-besteffort-pod18241a6e_9a2c_44bf_b122_af7f53eb5a3f.slice. Nov 8 00:07:16.859560 systemd[1]: Created slice kubepods-besteffort-pod967f9b6c_67db_4dea_be69_0b8cc8010676.slice - libcontainer container kubepods-besteffort-pod967f9b6c_67db_4dea_be69_0b8cc8010676.slice. Nov 8 00:07:16.880613 kubelet[3181]: I1108 00:07:16.880512 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/967f9b6c-67db-4dea-be69-0b8cc8010676-whisker-backend-key-pair\") pod \"whisker-764b4db9fd-g5pz9\" (UID: \"967f9b6c-67db-4dea-be69-0b8cc8010676\") " pod="calico-system/whisker-764b4db9fd-g5pz9" Nov 8 00:07:16.880613 kubelet[3181]: I1108 00:07:16.880558 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/967f9b6c-67db-4dea-be69-0b8cc8010676-whisker-ca-bundle\") pod \"whisker-764b4db9fd-g5pz9\" (UID: \"967f9b6c-67db-4dea-be69-0b8cc8010676\") " pod="calico-system/whisker-764b4db9fd-g5pz9" Nov 8 00:07:16.880613 kubelet[3181]: I1108 00:07:16.880576 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz9mp\" (UniqueName: \"kubernetes.io/projected/967f9b6c-67db-4dea-be69-0b8cc8010676-kube-api-access-nz9mp\") pod \"whisker-764b4db9fd-g5pz9\" (UID: \"967f9b6c-67db-4dea-be69-0b8cc8010676\") " pod="calico-system/whisker-764b4db9fd-g5pz9" Nov 8 00:07:17.169980 containerd[1717]: time="2025-11-08T00:07:17.169863979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-764b4db9fd-g5pz9,Uid:967f9b6c-67db-4dea-be69-0b8cc8010676,Namespace:calico-system,Attempt:0,}" Nov 8 00:07:17.384125 systemd-networkd[1359]: calidc185927edd: Link UP Nov 8 00:07:17.384373 systemd-networkd[1359]: calidc185927edd: Gained carrier Nov 8 00:07:17.402926 containerd[1717]: 2025-11-08 00:07:17.232 [INFO][4459] cni-plugin/utils.go 
100: File /var/lib/calico/mtu does not exist Nov 8 00:07:17.402926 containerd[1717]: 2025-11-08 00:07:17.245 [INFO][4459] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--5561f33395-k8s-whisker--764b4db9fd--g5pz9-eth0 whisker-764b4db9fd- calico-system 967f9b6c-67db-4dea-be69-0b8cc8010676 919 0 2025-11-08 00:07:16 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:764b4db9fd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-5561f33395 whisker-764b4db9fd-g5pz9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidc185927edd [] [] }} ContainerID="a862c73a6b72d4151030d223bebdc562a888b2e9984baa4902f1728360800dfe" Namespace="calico-system" Pod="whisker-764b4db9fd-g5pz9" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-whisker--764b4db9fd--g5pz9-" Nov 8 00:07:17.402926 containerd[1717]: 2025-11-08 00:07:17.245 [INFO][4459] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a862c73a6b72d4151030d223bebdc562a888b2e9984baa4902f1728360800dfe" Namespace="calico-system" Pod="whisker-764b4db9fd-g5pz9" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-whisker--764b4db9fd--g5pz9-eth0" Nov 8 00:07:17.402926 containerd[1717]: 2025-11-08 00:07:17.267 [INFO][4470] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a862c73a6b72d4151030d223bebdc562a888b2e9984baa4902f1728360800dfe" HandleID="k8s-pod-network.a862c73a6b72d4151030d223bebdc562a888b2e9984baa4902f1728360800dfe" Workload="ci--4081.3.6--n--5561f33395-k8s-whisker--764b4db9fd--g5pz9-eth0" Nov 8 00:07:17.402926 containerd[1717]: 2025-11-08 00:07:17.267 [INFO][4470] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a862c73a6b72d4151030d223bebdc562a888b2e9984baa4902f1728360800dfe" 
HandleID="k8s-pod-network.a862c73a6b72d4151030d223bebdc562a888b2e9984baa4902f1728360800dfe" Workload="ci--4081.3.6--n--5561f33395-k8s-whisker--764b4db9fd--g5pz9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-5561f33395", "pod":"whisker-764b4db9fd-g5pz9", "timestamp":"2025-11-08 00:07:17.266994486 +0000 UTC"}, Hostname:"ci-4081.3.6-n-5561f33395", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:07:17.402926 containerd[1717]: 2025-11-08 00:07:17.267 [INFO][4470] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:17.402926 containerd[1717]: 2025-11-08 00:07:17.267 [INFO][4470] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:17.402926 containerd[1717]: 2025-11-08 00:07:17.267 [INFO][4470] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-5561f33395' Nov 8 00:07:17.402926 containerd[1717]: 2025-11-08 00:07:17.276 [INFO][4470] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a862c73a6b72d4151030d223bebdc562a888b2e9984baa4902f1728360800dfe" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:17.402926 containerd[1717]: 2025-11-08 00:07:17.280 [INFO][4470] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:17.402926 containerd[1717]: 2025-11-08 00:07:17.284 [INFO][4470] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:17.402926 containerd[1717]: 2025-11-08 00:07:17.286 [INFO][4470] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:17.402926 containerd[1717]: 2025-11-08 00:07:17.288 [INFO][4470] ipam/ipam.go 235: Affinity is confirmed and block has been 
loaded cidr=192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:17.402926 containerd[1717]: 2025-11-08 00:07:17.288 [INFO][4470] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.a862c73a6b72d4151030d223bebdc562a888b2e9984baa4902f1728360800dfe" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:17.402926 containerd[1717]: 2025-11-08 00:07:17.289 [INFO][4470] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a862c73a6b72d4151030d223bebdc562a888b2e9984baa4902f1728360800dfe Nov 8 00:07:17.402926 containerd[1717]: 2025-11-08 00:07:17.298 [INFO][4470] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.a862c73a6b72d4151030d223bebdc562a888b2e9984baa4902f1728360800dfe" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:17.402926 containerd[1717]: 2025-11-08 00:07:17.304 [INFO][4470] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.193/26] block=192.168.38.192/26 handle="k8s-pod-network.a862c73a6b72d4151030d223bebdc562a888b2e9984baa4902f1728360800dfe" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:17.402926 containerd[1717]: 2025-11-08 00:07:17.304 [INFO][4470] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.193/26] handle="k8s-pod-network.a862c73a6b72d4151030d223bebdc562a888b2e9984baa4902f1728360800dfe" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:17.402926 containerd[1717]: 2025-11-08 00:07:17.304 [INFO][4470] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:07:17.402926 containerd[1717]: 2025-11-08 00:07:17.304 [INFO][4470] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.193/26] IPv6=[] ContainerID="a862c73a6b72d4151030d223bebdc562a888b2e9984baa4902f1728360800dfe" HandleID="k8s-pod-network.a862c73a6b72d4151030d223bebdc562a888b2e9984baa4902f1728360800dfe" Workload="ci--4081.3.6--n--5561f33395-k8s-whisker--764b4db9fd--g5pz9-eth0" Nov 8 00:07:17.404516 containerd[1717]: 2025-11-08 00:07:17.307 [INFO][4459] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a862c73a6b72d4151030d223bebdc562a888b2e9984baa4902f1728360800dfe" Namespace="calico-system" Pod="whisker-764b4db9fd-g5pz9" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-whisker--764b4db9fd--g5pz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-whisker--764b4db9fd--g5pz9-eth0", GenerateName:"whisker-764b4db9fd-", Namespace:"calico-system", SelfLink:"", UID:"967f9b6c-67db-4dea-be69-0b8cc8010676", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 7, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"764b4db9fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"", Pod:"whisker-764b4db9fd-g5pz9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.38.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"calidc185927edd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:17.404516 containerd[1717]: 2025-11-08 00:07:17.307 [INFO][4459] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.193/32] ContainerID="a862c73a6b72d4151030d223bebdc562a888b2e9984baa4902f1728360800dfe" Namespace="calico-system" Pod="whisker-764b4db9fd-g5pz9" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-whisker--764b4db9fd--g5pz9-eth0" Nov 8 00:07:17.404516 containerd[1717]: 2025-11-08 00:07:17.307 [INFO][4459] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc185927edd ContainerID="a862c73a6b72d4151030d223bebdc562a888b2e9984baa4902f1728360800dfe" Namespace="calico-system" Pod="whisker-764b4db9fd-g5pz9" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-whisker--764b4db9fd--g5pz9-eth0" Nov 8 00:07:17.404516 containerd[1717]: 2025-11-08 00:07:17.381 [INFO][4459] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a862c73a6b72d4151030d223bebdc562a888b2e9984baa4902f1728360800dfe" Namespace="calico-system" Pod="whisker-764b4db9fd-g5pz9" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-whisker--764b4db9fd--g5pz9-eth0" Nov 8 00:07:17.404516 containerd[1717]: 2025-11-08 00:07:17.381 [INFO][4459] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a862c73a6b72d4151030d223bebdc562a888b2e9984baa4902f1728360800dfe" Namespace="calico-system" Pod="whisker-764b4db9fd-g5pz9" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-whisker--764b4db9fd--g5pz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-whisker--764b4db9fd--g5pz9-eth0", GenerateName:"whisker-764b4db9fd-", Namespace:"calico-system", SelfLink:"", UID:"967f9b6c-67db-4dea-be69-0b8cc8010676", 
ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 7, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"764b4db9fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"a862c73a6b72d4151030d223bebdc562a888b2e9984baa4902f1728360800dfe", Pod:"whisker-764b4db9fd-g5pz9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.38.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidc185927edd", MAC:"de:69:25:94:d7:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:17.404516 containerd[1717]: 2025-11-08 00:07:17.395 [INFO][4459] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a862c73a6b72d4151030d223bebdc562a888b2e9984baa4902f1728360800dfe" Namespace="calico-system" Pod="whisker-764b4db9fd-g5pz9" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-whisker--764b4db9fd--g5pz9-eth0" Nov 8 00:07:17.420832 containerd[1717]: time="2025-11-08T00:07:17.420354853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:07:17.420832 containerd[1717]: time="2025-11-08T00:07:17.420413253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:07:17.420832 containerd[1717]: time="2025-11-08T00:07:17.420424173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:07:17.420832 containerd[1717]: time="2025-11-08T00:07:17.420501852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:07:17.439174 systemd[1]: Started cri-containerd-a862c73a6b72d4151030d223bebdc562a888b2e9984baa4902f1728360800dfe.scope - libcontainer container a862c73a6b72d4151030d223bebdc562a888b2e9984baa4902f1728360800dfe. Nov 8 00:07:17.471705 containerd[1717]: time="2025-11-08T00:07:17.471602881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-764b4db9fd-g5pz9,Uid:967f9b6c-67db-4dea-be69-0b8cc8010676,Namespace:calico-system,Attempt:0,} returns sandbox id \"a862c73a6b72d4151030d223bebdc562a888b2e9984baa4902f1728360800dfe\"" Nov 8 00:07:17.474955 containerd[1717]: time="2025-11-08T00:07:17.473912757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:07:17.764748 containerd[1717]: time="2025-11-08T00:07:17.764136561Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:17.767714 containerd[1717]: time="2025-11-08T00:07:17.767599834Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:07:17.767714 containerd[1717]: time="2025-11-08T00:07:17.767674314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:07:17.767889 kubelet[3181]: E1108 00:07:17.767834 3181 log.go:32] "PullImage from 
image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:07:17.767889 kubelet[3181]: E1108 00:07:17.767879 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:07:17.769660 kubelet[3181]: E1108 00:07:17.769615 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-764b4db9fd-g5pz9_calico-system(967f9b6c-67db-4dea-be69-0b8cc8010676): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:17.770908 containerd[1717]: time="2025-11-08T00:07:17.770791589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:07:17.852005 kernel: bpftool[4644]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:07:18.070532 containerd[1717]: time="2025-11-08T00:07:18.070421215Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:18.074246 containerd[1717]: time="2025-11-08T00:07:18.074203449Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:07:18.074530 containerd[1717]: time="2025-11-08T00:07:18.074303528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:07:18.074693 kubelet[3181]: E1108 00:07:18.074642 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:07:18.074976 kubelet[3181]: E1108 00:07:18.074702 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:07:18.075169 kubelet[3181]: E1108 00:07:18.074786 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-764b4db9fd-g5pz9_calico-system(967f9b6c-67db-4dea-be69-0b8cc8010676): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:18.075241 kubelet[3181]: E1108 00:07:18.075209 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764b4db9fd-g5pz9" podUID="967f9b6c-67db-4dea-be69-0b8cc8010676" Nov 8 00:07:18.312207 systemd-networkd[1359]: vxlan.calico: Link UP Nov 8 00:07:18.312215 systemd-networkd[1359]: vxlan.calico: Gained carrier Nov 8 00:07:18.579471 kubelet[3181]: I1108 00:07:18.579431 3181 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18241a6e-9a2c-44bf-b122-af7f53eb5a3f" path="/var/lib/kubelet/pods/18241a6e-9a2c-44bf-b122-af7f53eb5a3f/volumes" Nov 8 00:07:18.786617 kubelet[3181]: E1108 00:07:18.786548 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764b4db9fd-g5pz9" podUID="967f9b6c-67db-4dea-be69-0b8cc8010676" Nov 8 00:07:19.184134 
systemd-networkd[1359]: calidc185927edd: Gained IPv6LL Nov 8 00:07:19.504152 systemd-networkd[1359]: vxlan.calico: Gained IPv6LL Nov 8 00:07:21.577752 containerd[1717]: time="2025-11-08T00:07:21.577609383Z" level=info msg="StopPodSandbox for \"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a\"" Nov 8 00:07:21.579038 containerd[1717]: time="2025-11-08T00:07:21.577609823Z" level=info msg="StopPodSandbox for \"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe\"" Nov 8 00:07:21.688918 containerd[1717]: 2025-11-08 00:07:21.645 [INFO][4745] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" Nov 8 00:07:21.688918 containerd[1717]: 2025-11-08 00:07:21.645 [INFO][4745] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" iface="eth0" netns="/var/run/netns/cni-b78ef48f-5aa3-4b9f-73ca-e661e8a784f5" Nov 8 00:07:21.688918 containerd[1717]: 2025-11-08 00:07:21.646 [INFO][4745] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" iface="eth0" netns="/var/run/netns/cni-b78ef48f-5aa3-4b9f-73ca-e661e8a784f5" Nov 8 00:07:21.688918 containerd[1717]: 2025-11-08 00:07:21.647 [INFO][4745] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" iface="eth0" netns="/var/run/netns/cni-b78ef48f-5aa3-4b9f-73ca-e661e8a784f5" Nov 8 00:07:21.688918 containerd[1717]: 2025-11-08 00:07:21.647 [INFO][4745] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" Nov 8 00:07:21.688918 containerd[1717]: 2025-11-08 00:07:21.647 [INFO][4745] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" Nov 8 00:07:21.688918 containerd[1717]: 2025-11-08 00:07:21.673 [INFO][4758] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" HandleID="k8s-pod-network.61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" Workload="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-eth0" Nov 8 00:07:21.688918 containerd[1717]: 2025-11-08 00:07:21.673 [INFO][4758] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:21.688918 containerd[1717]: 2025-11-08 00:07:21.673 [INFO][4758] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:21.688918 containerd[1717]: 2025-11-08 00:07:21.684 [WARNING][4758] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" HandleID="k8s-pod-network.61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" Workload="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-eth0" Nov 8 00:07:21.688918 containerd[1717]: 2025-11-08 00:07:21.684 [INFO][4758] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" HandleID="k8s-pod-network.61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" Workload="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-eth0" Nov 8 00:07:21.688918 containerd[1717]: 2025-11-08 00:07:21.685 [INFO][4758] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:21.688918 containerd[1717]: 2025-11-08 00:07:21.687 [INFO][4745] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" Nov 8 00:07:21.691240 containerd[1717]: time="2025-11-08T00:07:21.691122580Z" level=info msg="TearDown network for sandbox \"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe\" successfully" Nov 8 00:07:21.691390 containerd[1717]: time="2025-11-08T00:07:21.691372259Z" level=info msg="StopPodSandbox for \"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe\" returns successfully" Nov 8 00:07:21.692341 systemd[1]: run-netns-cni\x2db78ef48f\x2d5aa3\x2d4b9f\x2d73ca\x2de661e8a784f5.mount: Deactivated successfully. 
Nov 8 00:07:21.703383 containerd[1717]: time="2025-11-08T00:07:21.703192358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dpwnt,Uid:bcb0f449-555a-4f1a-a70d-fed8686a31f6,Namespace:kube-system,Attempt:1,}" Nov 8 00:07:21.708399 containerd[1717]: 2025-11-08 00:07:21.652 [INFO][4749] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" Nov 8 00:07:21.708399 containerd[1717]: 2025-11-08 00:07:21.652 [INFO][4749] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" iface="eth0" netns="/var/run/netns/cni-badd9586-0c5c-0c4d-87e6-f5ebaca626af" Nov 8 00:07:21.708399 containerd[1717]: 2025-11-08 00:07:21.652 [INFO][4749] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" iface="eth0" netns="/var/run/netns/cni-badd9586-0c5c-0c4d-87e6-f5ebaca626af" Nov 8 00:07:21.708399 containerd[1717]: 2025-11-08 00:07:21.652 [INFO][4749] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" iface="eth0" netns="/var/run/netns/cni-badd9586-0c5c-0c4d-87e6-f5ebaca626af" Nov 8 00:07:21.708399 containerd[1717]: 2025-11-08 00:07:21.652 [INFO][4749] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" Nov 8 00:07:21.708399 containerd[1717]: 2025-11-08 00:07:21.652 [INFO][4749] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" Nov 8 00:07:21.708399 containerd[1717]: 2025-11-08 00:07:21.688 [INFO][4763] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" HandleID="k8s-pod-network.49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" Workload="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-eth0" Nov 8 00:07:21.708399 containerd[1717]: 2025-11-08 00:07:21.691 [INFO][4763] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:21.708399 containerd[1717]: 2025-11-08 00:07:21.693 [INFO][4763] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:21.708399 containerd[1717]: 2025-11-08 00:07:21.703 [WARNING][4763] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" HandleID="k8s-pod-network.49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" Workload="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-eth0" Nov 8 00:07:21.708399 containerd[1717]: 2025-11-08 00:07:21.703 [INFO][4763] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" HandleID="k8s-pod-network.49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" Workload="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-eth0" Nov 8 00:07:21.708399 containerd[1717]: 2025-11-08 00:07:21.705 [INFO][4763] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:21.708399 containerd[1717]: 2025-11-08 00:07:21.706 [INFO][4749] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" Nov 8 00:07:21.709905 containerd[1717]: time="2025-11-08T00:07:21.709144107Z" level=info msg="TearDown network for sandbox \"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a\" successfully" Nov 8 00:07:21.709905 containerd[1717]: time="2025-11-08T00:07:21.709174547Z" level=info msg="StopPodSandbox for \"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a\" returns successfully" Nov 8 00:07:21.712101 systemd[1]: run-netns-cni\x2dbadd9586\x2d0c5c\x2d0c4d\x2d87e6\x2df5ebaca626af.mount: Deactivated successfully. 
Nov 8 00:07:21.727213 containerd[1717]: time="2025-11-08T00:07:21.726843756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x5nnq,Uid:b089b199-ec3e-4716-9f14-e24ffa6fbbc3,Namespace:kube-system,Attempt:1,}" Nov 8 00:07:21.916632 systemd-networkd[1359]: califa3b4eb6309: Link UP Nov 8 00:07:21.923011 systemd-networkd[1359]: califa3b4eb6309: Gained carrier Nov 8 00:07:21.949847 containerd[1717]: 2025-11-08 00:07:21.831 [INFO][4781] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-eth0 coredns-66bc5c9577- kube-system b089b199-ec3e-4716-9f14-e24ffa6fbbc3 949 0 2025-11-08 00:06:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-5561f33395 coredns-66bc5c9577-x5nnq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califa3b4eb6309 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13" Namespace="kube-system" Pod="coredns-66bc5c9577-x5nnq" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-" Nov 8 00:07:21.949847 containerd[1717]: 2025-11-08 00:07:21.831 [INFO][4781] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13" Namespace="kube-system" Pod="coredns-66bc5c9577-x5nnq" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-eth0" Nov 8 00:07:21.949847 containerd[1717]: 2025-11-08 00:07:21.866 [INFO][4795] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13" 
HandleID="k8s-pod-network.a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13" Workload="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-eth0" Nov 8 00:07:21.949847 containerd[1717]: 2025-11-08 00:07:21.866 [INFO][4795] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13" HandleID="k8s-pod-network.a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13" Workload="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3090), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-5561f33395", "pod":"coredns-66bc5c9577-x5nnq", "timestamp":"2025-11-08 00:07:21.866196506 +0000 UTC"}, Hostname:"ci-4081.3.6-n-5561f33395", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:07:21.949847 containerd[1717]: 2025-11-08 00:07:21.866 [INFO][4795] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:21.949847 containerd[1717]: 2025-11-08 00:07:21.866 [INFO][4795] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:07:21.949847 containerd[1717]: 2025-11-08 00:07:21.866 [INFO][4795] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-5561f33395' Nov 8 00:07:21.949847 containerd[1717]: 2025-11-08 00:07:21.878 [INFO][4795] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:21.949847 containerd[1717]: 2025-11-08 00:07:21.882 [INFO][4795] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:21.949847 containerd[1717]: 2025-11-08 00:07:21.888 [INFO][4795] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:21.949847 containerd[1717]: 2025-11-08 00:07:21.890 [INFO][4795] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:21.949847 containerd[1717]: 2025-11-08 00:07:21.892 [INFO][4795] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:21.949847 containerd[1717]: 2025-11-08 00:07:21.892 [INFO][4795] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:21.949847 containerd[1717]: 2025-11-08 00:07:21.893 [INFO][4795] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13 Nov 8 00:07:21.949847 containerd[1717]: 2025-11-08 00:07:21.903 [INFO][4795] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:21.949847 containerd[1717]: 2025-11-08 00:07:21.908 [INFO][4795] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.38.194/26] block=192.168.38.192/26 handle="k8s-pod-network.a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:21.949847 containerd[1717]: 2025-11-08 00:07:21.909 [INFO][4795] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.194/26] handle="k8s-pod-network.a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:21.949847 containerd[1717]: 2025-11-08 00:07:21.909 [INFO][4795] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:21.949847 containerd[1717]: 2025-11-08 00:07:21.909 [INFO][4795] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.194/26] IPv6=[] ContainerID="a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13" HandleID="k8s-pod-network.a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13" Workload="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-eth0" Nov 8 00:07:21.950414 containerd[1717]: 2025-11-08 00:07:21.912 [INFO][4781] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13" Namespace="kube-system" Pod="coredns-66bc5c9577-x5nnq" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b089b199-ec3e-4716-9f14-e24ffa6fbbc3", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"", Pod:"coredns-66bc5c9577-x5nnq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa3b4eb6309", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:21.950414 containerd[1717]: 2025-11-08 00:07:21.912 [INFO][4781] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.194/32] ContainerID="a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13" Namespace="kube-system" Pod="coredns-66bc5c9577-x5nnq" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-eth0" Nov 8 00:07:21.950414 containerd[1717]: 2025-11-08 00:07:21.912 [INFO][4781] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa3b4eb6309 
ContainerID="a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13" Namespace="kube-system" Pod="coredns-66bc5c9577-x5nnq" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-eth0" Nov 8 00:07:21.950414 containerd[1717]: 2025-11-08 00:07:21.929 [INFO][4781] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13" Namespace="kube-system" Pod="coredns-66bc5c9577-x5nnq" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-eth0" Nov 8 00:07:21.950414 containerd[1717]: 2025-11-08 00:07:21.931 [INFO][4781] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13" Namespace="kube-system" Pod="coredns-66bc5c9577-x5nnq" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b089b199-ec3e-4716-9f14-e24ffa6fbbc3", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13", 
Pod:"coredns-66bc5c9577-x5nnq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa3b4eb6309", MAC:"da:b5:93:c8:f4:da", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:21.950593 containerd[1717]: 2025-11-08 00:07:21.948 [INFO][4781] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13" Namespace="kube-system" Pod="coredns-66bc5c9577-x5nnq" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-eth0" Nov 8 00:07:21.973795 containerd[1717]: time="2025-11-08T00:07:21.973364234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:07:21.973795 containerd[1717]: time="2025-11-08T00:07:21.973459074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:07:21.973795 containerd[1717]: time="2025-11-08T00:07:21.973486674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:07:21.973795 containerd[1717]: time="2025-11-08T00:07:21.973630474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:07:21.999757 systemd[1]: Started cri-containerd-a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13.scope - libcontainer container a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13. Nov 8 00:07:22.044688 systemd-networkd[1359]: calic5c6b9bba71: Link UP Nov 8 00:07:22.046483 systemd-networkd[1359]: calic5c6b9bba71: Gained carrier Nov 8 00:07:22.054003 containerd[1717]: time="2025-11-08T00:07:22.053640491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x5nnq,Uid:b089b199-ec3e-4716-9f14-e24ffa6fbbc3,Namespace:kube-system,Attempt:1,} returns sandbox id \"a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13\"" Nov 8 00:07:22.071662 containerd[1717]: time="2025-11-08T00:07:22.071620459Z" level=info msg="CreateContainer within sandbox \"a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:07:22.080066 containerd[1717]: 2025-11-08 00:07:21.832 [INFO][4771] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-eth0 coredns-66bc5c9577- kube-system bcb0f449-555a-4f1a-a70d-fed8686a31f6 948 0 2025-11-08 00:06:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-5561f33395 
coredns-66bc5c9577-dpwnt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic5c6b9bba71 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a" Namespace="kube-system" Pod="coredns-66bc5c9577-dpwnt" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-" Nov 8 00:07:22.080066 containerd[1717]: 2025-11-08 00:07:21.832 [INFO][4771] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a" Namespace="kube-system" Pod="coredns-66bc5c9577-dpwnt" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-eth0" Nov 8 00:07:22.080066 containerd[1717]: 2025-11-08 00:07:21.868 [INFO][4800] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a" HandleID="k8s-pod-network.183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a" Workload="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-eth0" Nov 8 00:07:22.080066 containerd[1717]: 2025-11-08 00:07:21.868 [INFO][4800] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a" HandleID="k8s-pod-network.183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a" Workload="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b7f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-5561f33395", "pod":"coredns-66bc5c9577-dpwnt", "timestamp":"2025-11-08 00:07:21.868290062 +0000 UTC"}, Hostname:"ci-4081.3.6-n-5561f33395", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:07:22.080066 containerd[1717]: 2025-11-08 00:07:21.868 [INFO][4800] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:22.080066 containerd[1717]: 2025-11-08 00:07:21.910 [INFO][4800] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:22.080066 containerd[1717]: 2025-11-08 00:07:21.910 [INFO][4800] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-5561f33395' Nov 8 00:07:22.080066 containerd[1717]: 2025-11-08 00:07:21.980 [INFO][4800] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:22.080066 containerd[1717]: 2025-11-08 00:07:21.989 [INFO][4800] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:22.080066 containerd[1717]: 2025-11-08 00:07:22.002 [INFO][4800] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:22.080066 containerd[1717]: 2025-11-08 00:07:22.005 [INFO][4800] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:22.080066 containerd[1717]: 2025-11-08 00:07:22.008 [INFO][4800] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:22.080066 containerd[1717]: 2025-11-08 00:07:22.008 [INFO][4800] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:22.080066 containerd[1717]: 2025-11-08 00:07:22.009 [INFO][4800] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a Nov 8 00:07:22.080066 containerd[1717]: 
2025-11-08 00:07:22.019 [INFO][4800] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:22.080066 containerd[1717]: 2025-11-08 00:07:22.038 [INFO][4800] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.195/26] block=192.168.38.192/26 handle="k8s-pod-network.183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:22.080066 containerd[1717]: 2025-11-08 00:07:22.038 [INFO][4800] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.195/26] handle="k8s-pod-network.183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:22.080066 containerd[1717]: 2025-11-08 00:07:22.038 [INFO][4800] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:22.080066 containerd[1717]: 2025-11-08 00:07:22.038 [INFO][4800] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.195/26] IPv6=[] ContainerID="183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a" HandleID="k8s-pod-network.183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a" Workload="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-eth0" Nov 8 00:07:22.080608 containerd[1717]: 2025-11-08 00:07:22.040 [INFO][4771] cni-plugin/k8s.go 418: Populated endpoint ContainerID="183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a" Namespace="kube-system" Pod="coredns-66bc5c9577-dpwnt" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", 
UID:"bcb0f449-555a-4f1a-a70d-fed8686a31f6", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"", Pod:"coredns-66bc5c9577-dpwnt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic5c6b9bba71", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:22.080608 containerd[1717]: 2025-11-08 00:07:22.041 [INFO][4771] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.195/32] 
ContainerID="183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a" Namespace="kube-system" Pod="coredns-66bc5c9577-dpwnt" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-eth0" Nov 8 00:07:22.080608 containerd[1717]: 2025-11-08 00:07:22.041 [INFO][4771] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5c6b9bba71 ContainerID="183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a" Namespace="kube-system" Pod="coredns-66bc5c9577-dpwnt" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-eth0" Nov 8 00:07:22.080608 containerd[1717]: 2025-11-08 00:07:22.047 [INFO][4771] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a" Namespace="kube-system" Pod="coredns-66bc5c9577-dpwnt" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-eth0" Nov 8 00:07:22.080608 containerd[1717]: 2025-11-08 00:07:22.049 [INFO][4771] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a" Namespace="kube-system" Pod="coredns-66bc5c9577-dpwnt" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"bcb0f449-555a-4f1a-a70d-fed8686a31f6", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a", Pod:"coredns-66bc5c9577-dpwnt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic5c6b9bba71", MAC:"06:86:56:8b:bd:eb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:22.080777 containerd[1717]: 2025-11-08 00:07:22.075 [INFO][4771] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a" Namespace="kube-system" Pod="coredns-66bc5c9577-dpwnt" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-eth0" Nov 8 00:07:22.109247 containerd[1717]: time="2025-11-08T00:07:22.108172553Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:07:22.109247 containerd[1717]: time="2025-11-08T00:07:22.108735352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:07:22.109247 containerd[1717]: time="2025-11-08T00:07:22.108748752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:07:22.109247 containerd[1717]: time="2025-11-08T00:07:22.108976232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:07:22.127165 systemd[1]: Started cri-containerd-183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a.scope - libcontainer container 183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a. Nov 8 00:07:22.158280 containerd[1717]: time="2025-11-08T00:07:22.158149664Z" level=info msg="CreateContainer within sandbox \"a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4242fb683035662451404e28e26e87c0e1ae9e11d2800a760ba07d8464ef1881\"" Nov 8 00:07:22.160062 containerd[1717]: time="2025-11-08T00:07:22.159760101Z" level=info msg="StartContainer for \"4242fb683035662451404e28e26e87c0e1ae9e11d2800a760ba07d8464ef1881\"" Nov 8 00:07:22.176718 containerd[1717]: time="2025-11-08T00:07:22.175393553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dpwnt,Uid:bcb0f449-555a-4f1a-a70d-fed8686a31f6,Namespace:kube-system,Attempt:1,} returns sandbox id \"183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a\"" Nov 8 00:07:22.197774 containerd[1717]: time="2025-11-08T00:07:22.197625113Z" level=info msg="CreateContainer within sandbox \"183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a\" for 
container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:07:22.199922 systemd[1]: Started cri-containerd-4242fb683035662451404e28e26e87c0e1ae9e11d2800a760ba07d8464ef1881.scope - libcontainer container 4242fb683035662451404e28e26e87c0e1ae9e11d2800a760ba07d8464ef1881. Nov 8 00:07:22.234580 containerd[1717]: time="2025-11-08T00:07:22.233865848Z" level=info msg="StartContainer for \"4242fb683035662451404e28e26e87c0e1ae9e11d2800a760ba07d8464ef1881\" returns successfully" Nov 8 00:07:22.244884 containerd[1717]: time="2025-11-08T00:07:22.244829188Z" level=info msg="CreateContainer within sandbox \"183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"08b350bd25a18678e1f09fea297cb22275a028ee7fe420118ca3e5d02292f723\"" Nov 8 00:07:22.246777 containerd[1717]: time="2025-11-08T00:07:22.246749745Z" level=info msg="StartContainer for \"08b350bd25a18678e1f09fea297cb22275a028ee7fe420118ca3e5d02292f723\"" Nov 8 00:07:22.284320 systemd[1]: Started cri-containerd-08b350bd25a18678e1f09fea297cb22275a028ee7fe420118ca3e5d02292f723.scope - libcontainer container 08b350bd25a18678e1f09fea297cb22275a028ee7fe420118ca3e5d02292f723. 
Nov 8 00:07:22.317165 containerd[1717]: time="2025-11-08T00:07:22.317038819Z" level=info msg="StartContainer for \"08b350bd25a18678e1f09fea297cb22275a028ee7fe420118ca3e5d02292f723\" returns successfully" Nov 8 00:07:22.579451 containerd[1717]: time="2025-11-08T00:07:22.579102230Z" level=info msg="StopPodSandbox for \"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38\"" Nov 8 00:07:22.587251 containerd[1717]: time="2025-11-08T00:07:22.586992856Z" level=info msg="StopPodSandbox for \"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9\"" Nov 8 00:07:22.726015 containerd[1717]: 2025-11-08 00:07:22.672 [INFO][5007] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" Nov 8 00:07:22.726015 containerd[1717]: 2025-11-08 00:07:22.673 [INFO][5007] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" iface="eth0" netns="/var/run/netns/cni-a0c18bf1-22ff-09c0-c774-52d4ed99ec85" Nov 8 00:07:22.726015 containerd[1717]: 2025-11-08 00:07:22.673 [INFO][5007] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" iface="eth0" netns="/var/run/netns/cni-a0c18bf1-22ff-09c0-c774-52d4ed99ec85" Nov 8 00:07:22.726015 containerd[1717]: 2025-11-08 00:07:22.673 [INFO][5007] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" iface="eth0" netns="/var/run/netns/cni-a0c18bf1-22ff-09c0-c774-52d4ed99ec85" Nov 8 00:07:22.726015 containerd[1717]: 2025-11-08 00:07:22.673 [INFO][5007] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" Nov 8 00:07:22.726015 containerd[1717]: 2025-11-08 00:07:22.673 [INFO][5007] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" Nov 8 00:07:22.726015 containerd[1717]: 2025-11-08 00:07:22.711 [INFO][5017] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" HandleID="k8s-pod-network.03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-eth0" Nov 8 00:07:22.726015 containerd[1717]: 2025-11-08 00:07:22.711 [INFO][5017] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:22.726015 containerd[1717]: 2025-11-08 00:07:22.711 [INFO][5017] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:22.726015 containerd[1717]: 2025-11-08 00:07:22.720 [WARNING][5017] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" HandleID="k8s-pod-network.03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-eth0" Nov 8 00:07:22.726015 containerd[1717]: 2025-11-08 00:07:22.720 [INFO][5017] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" HandleID="k8s-pod-network.03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-eth0" Nov 8 00:07:22.726015 containerd[1717]: 2025-11-08 00:07:22.722 [INFO][5017] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:22.726015 containerd[1717]: 2025-11-08 00:07:22.723 [INFO][5007] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" Nov 8 00:07:22.728109 containerd[1717]: time="2025-11-08T00:07:22.728062963Z" level=info msg="TearDown network for sandbox \"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9\" successfully" Nov 8 00:07:22.728109 containerd[1717]: time="2025-11-08T00:07:22.728101083Z" level=info msg="StopPodSandbox for \"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9\" returns successfully" Nov 8 00:07:22.730855 systemd[1]: run-netns-cni\x2da0c18bf1\x2d22ff\x2d09c0\x2dc774\x2d52d4ed99ec85.mount: Deactivated successfully. 
Nov 8 00:07:22.738703 containerd[1717]: time="2025-11-08T00:07:22.738665264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d7c9d7554-7phdh,Uid:aad8189b-54ce-422e-a68f-46b67abadfe8,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:07:22.744315 containerd[1717]: 2025-11-08 00:07:22.681 [INFO][4999] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" Nov 8 00:07:22.744315 containerd[1717]: 2025-11-08 00:07:22.682 [INFO][4999] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" iface="eth0" netns="/var/run/netns/cni-1821ddb7-df36-76da-c0d3-4d84cc509b24" Nov 8 00:07:22.744315 containerd[1717]: 2025-11-08 00:07:22.682 [INFO][4999] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" iface="eth0" netns="/var/run/netns/cni-1821ddb7-df36-76da-c0d3-4d84cc509b24" Nov 8 00:07:22.744315 containerd[1717]: 2025-11-08 00:07:22.684 [INFO][4999] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" iface="eth0" netns="/var/run/netns/cni-1821ddb7-df36-76da-c0d3-4d84cc509b24" Nov 8 00:07:22.744315 containerd[1717]: 2025-11-08 00:07:22.686 [INFO][4999] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" Nov 8 00:07:22.744315 containerd[1717]: 2025-11-08 00:07:22.686 [INFO][4999] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" Nov 8 00:07:22.744315 containerd[1717]: 2025-11-08 00:07:22.717 [INFO][5022] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" HandleID="k8s-pod-network.9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" Workload="ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-eth0" Nov 8 00:07:22.744315 containerd[1717]: 2025-11-08 00:07:22.717 [INFO][5022] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:22.744315 containerd[1717]: 2025-11-08 00:07:22.722 [INFO][5022] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:22.744315 containerd[1717]: 2025-11-08 00:07:22.739 [WARNING][5022] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" HandleID="k8s-pod-network.9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" Workload="ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-eth0" Nov 8 00:07:22.744315 containerd[1717]: 2025-11-08 00:07:22.739 [INFO][5022] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" HandleID="k8s-pod-network.9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" Workload="ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-eth0" Nov 8 00:07:22.744315 containerd[1717]: 2025-11-08 00:07:22.740 [INFO][5022] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:22.744315 containerd[1717]: 2025-11-08 00:07:22.742 [INFO][4999] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" Nov 8 00:07:22.744779 containerd[1717]: time="2025-11-08T00:07:22.744428734Z" level=info msg="TearDown network for sandbox \"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38\" successfully" Nov 8 00:07:22.744779 containerd[1717]: time="2025-11-08T00:07:22.744452054Z" level=info msg="StopPodSandbox for \"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38\" returns successfully" Nov 8 00:07:22.748561 systemd[1]: run-netns-cni\x2d1821ddb7\x2ddf36\x2d76da\x2dc0d3\x2d4d84cc509b24.mount: Deactivated successfully. 
Nov 8 00:07:22.751069 containerd[1717]: time="2025-11-08T00:07:22.750724643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8jr45,Uid:70822f24-312d-4073-b204-5c6b6a26eb84,Namespace:calico-system,Attempt:1,}" Nov 8 00:07:22.845003 kubelet[3181]: I1108 00:07:22.844119 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-x5nnq" podStartSLOduration=46.844102196 podStartE2EDuration="46.844102196s" podCreationTimestamp="2025-11-08 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:07:22.843002718 +0000 UTC m=+52.387125636" watchObservedRunningTime="2025-11-08 00:07:22.844102196 +0000 UTC m=+52.388225114" Nov 8 00:07:23.089660 systemd-networkd[1359]: cali6f95b4d25ba: Link UP Nov 8 00:07:23.089805 systemd-networkd[1359]: cali6f95b4d25ba: Gained carrier Nov 8 00:07:23.113401 kubelet[3181]: I1108 00:07:23.112751 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dpwnt" podStartSLOduration=47.112729515 podStartE2EDuration="47.112729515s" podCreationTimestamp="2025-11-08 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:07:22.944928375 +0000 UTC m=+52.489051293" watchObservedRunningTime="2025-11-08 00:07:23.112729515 +0000 UTC m=+52.656852433" Nov 8 00:07:23.114874 containerd[1717]: 2025-11-08 00:07:22.919 [INFO][5040] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-eth0 calico-apiserver-d7c9d7554- calico-apiserver aad8189b-54ce-422e-a68f-46b67abadfe8 968 0 2025-11-08 00:06:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d7c9d7554 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-5561f33395 calico-apiserver-d7c9d7554-7phdh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6f95b4d25ba [] [] }} ContainerID="0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57" Namespace="calico-apiserver" Pod="calico-apiserver-d7c9d7554-7phdh" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-" Nov 8 00:07:23.114874 containerd[1717]: 2025-11-08 00:07:22.919 [INFO][5040] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57" Namespace="calico-apiserver" Pod="calico-apiserver-d7c9d7554-7phdh" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-eth0" Nov 8 00:07:23.114874 containerd[1717]: 2025-11-08 00:07:23.011 [INFO][5062] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57" HandleID="k8s-pod-network.0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-eth0" Nov 8 00:07:23.114874 containerd[1717]: 2025-11-08 00:07:23.013 [INFO][5062] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57" HandleID="k8s-pod-network.0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b120), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-5561f33395", "pod":"calico-apiserver-d7c9d7554-7phdh", "timestamp":"2025-11-08 00:07:23.011169657 +0000 UTC"}, 
Hostname:"ci-4081.3.6-n-5561f33395", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:07:23.114874 containerd[1717]: 2025-11-08 00:07:23.013 [INFO][5062] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:23.114874 containerd[1717]: 2025-11-08 00:07:23.013 [INFO][5062] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:23.114874 containerd[1717]: 2025-11-08 00:07:23.013 [INFO][5062] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-5561f33395' Nov 8 00:07:23.114874 containerd[1717]: 2025-11-08 00:07:23.030 [INFO][5062] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:23.114874 containerd[1717]: 2025-11-08 00:07:23.034 [INFO][5062] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:23.114874 containerd[1717]: 2025-11-08 00:07:23.043 [INFO][5062] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:23.114874 containerd[1717]: 2025-11-08 00:07:23.048 [INFO][5062] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:23.114874 containerd[1717]: 2025-11-08 00:07:23.050 [INFO][5062] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:23.114874 containerd[1717]: 2025-11-08 00:07:23.050 [INFO][5062] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:23.114874 containerd[1717]: 2025-11-08 00:07:23.057 [INFO][5062] 
ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57 Nov 8 00:07:23.114874 containerd[1717]: 2025-11-08 00:07:23.069 [INFO][5062] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:23.114874 containerd[1717]: 2025-11-08 00:07:23.083 [INFO][5062] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.196/26] block=192.168.38.192/26 handle="k8s-pod-network.0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:23.114874 containerd[1717]: 2025-11-08 00:07:23.083 [INFO][5062] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.196/26] handle="k8s-pod-network.0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:23.114874 containerd[1717]: 2025-11-08 00:07:23.083 [INFO][5062] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:07:23.114874 containerd[1717]: 2025-11-08 00:07:23.083 [INFO][5062] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.196/26] IPv6=[] ContainerID="0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57" HandleID="k8s-pod-network.0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-eth0" Nov 8 00:07:23.117068 containerd[1717]: 2025-11-08 00:07:23.086 [INFO][5040] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57" Namespace="calico-apiserver" Pod="calico-apiserver-d7c9d7554-7phdh" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-eth0", GenerateName:"calico-apiserver-d7c9d7554-", Namespace:"calico-apiserver", SelfLink:"", UID:"aad8189b-54ce-422e-a68f-46b67abadfe8", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d7c9d7554", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"", Pod:"calico-apiserver-d7c9d7554-7phdh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.38.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f95b4d25ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:23.117068 containerd[1717]: 2025-11-08 00:07:23.086 [INFO][5040] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.196/32] ContainerID="0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57" Namespace="calico-apiserver" Pod="calico-apiserver-d7c9d7554-7phdh" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-eth0" Nov 8 00:07:23.117068 containerd[1717]: 2025-11-08 00:07:23.086 [INFO][5040] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f95b4d25ba ContainerID="0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57" Namespace="calico-apiserver" Pod="calico-apiserver-d7c9d7554-7phdh" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-eth0" Nov 8 00:07:23.117068 containerd[1717]: 2025-11-08 00:07:23.089 [INFO][5040] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57" Namespace="calico-apiserver" Pod="calico-apiserver-d7c9d7554-7phdh" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-eth0" Nov 8 00:07:23.117068 containerd[1717]: 2025-11-08 00:07:23.089 [INFO][5040] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57" Namespace="calico-apiserver" Pod="calico-apiserver-d7c9d7554-7phdh" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-eth0", GenerateName:"calico-apiserver-d7c9d7554-", Namespace:"calico-apiserver", SelfLink:"", UID:"aad8189b-54ce-422e-a68f-46b67abadfe8", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d7c9d7554", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57", Pod:"calico-apiserver-d7c9d7554-7phdh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f95b4d25ba", MAC:"8a:09:81:9c:be:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:23.117068 containerd[1717]: 2025-11-08 00:07:23.111 [INFO][5040] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57" Namespace="calico-apiserver" Pod="calico-apiserver-d7c9d7554-7phdh" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-eth0" Nov 8 00:07:23.148255 containerd[1717]: time="2025-11-08T00:07:23.147333373Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:07:23.149018 containerd[1717]: time="2025-11-08T00:07:23.148179411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:07:23.149018 containerd[1717]: time="2025-11-08T00:07:23.148212131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:07:23.149018 containerd[1717]: time="2025-11-08T00:07:23.148546771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:07:23.152079 systemd-networkd[1359]: califa3b4eb6309: Gained IPv6LL Nov 8 00:07:23.177629 systemd[1]: Started cri-containerd-0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57.scope - libcontainer container 0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57. 
Nov 8 00:07:23.209605 systemd-networkd[1359]: cali1b9ebf29710: Link UP Nov 8 00:07:23.210250 systemd-networkd[1359]: cali1b9ebf29710: Gained carrier Nov 8 00:07:23.235895 containerd[1717]: 2025-11-08 00:07:22.906 [INFO][5034] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-eth0 csi-node-driver- calico-system 70822f24-312d-4073-b204-5c6b6a26eb84 969 0 2025-11-08 00:06:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-5561f33395 csi-node-driver-8jr45 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1b9ebf29710 [] [] }} ContainerID="ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326" Namespace="calico-system" Pod="csi-node-driver-8jr45" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-" Nov 8 00:07:23.235895 containerd[1717]: 2025-11-08 00:07:22.908 [INFO][5034] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326" Namespace="calico-system" Pod="csi-node-driver-8jr45" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-eth0" Nov 8 00:07:23.235895 containerd[1717]: 2025-11-08 00:07:23.016 [INFO][5057] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326" HandleID="k8s-pod-network.ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326" Workload="ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-eth0" Nov 8 00:07:23.235895 containerd[1717]: 2025-11-08 00:07:23.016 [INFO][5057] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326" HandleID="k8s-pod-network.ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326" Workload="ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000367760), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-5561f33395", "pod":"csi-node-driver-8jr45", "timestamp":"2025-11-08 00:07:23.016687487 +0000 UTC"}, Hostname:"ci-4081.3.6-n-5561f33395", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:07:23.235895 containerd[1717]: 2025-11-08 00:07:23.016 [INFO][5057] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:23.235895 containerd[1717]: 2025-11-08 00:07:23.083 [INFO][5057] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:07:23.235895 containerd[1717]: 2025-11-08 00:07:23.083 [INFO][5057] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-5561f33395' Nov 8 00:07:23.235895 containerd[1717]: 2025-11-08 00:07:23.130 [INFO][5057] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:23.235895 containerd[1717]: 2025-11-08 00:07:23.137 [INFO][5057] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:23.235895 containerd[1717]: 2025-11-08 00:07:23.145 [INFO][5057] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:23.235895 containerd[1717]: 2025-11-08 00:07:23.148 [INFO][5057] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:23.235895 containerd[1717]: 2025-11-08 00:07:23.152 [INFO][5057] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:23.235895 containerd[1717]: 2025-11-08 00:07:23.153 [INFO][5057] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:23.235895 containerd[1717]: 2025-11-08 00:07:23.155 [INFO][5057] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326 Nov 8 00:07:23.235895 containerd[1717]: 2025-11-08 00:07:23.171 [INFO][5057] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:23.235895 containerd[1717]: 2025-11-08 00:07:23.186 [INFO][5057] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.38.197/26] block=192.168.38.192/26 handle="k8s-pod-network.ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:23.235895 containerd[1717]: 2025-11-08 00:07:23.187 [INFO][5057] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.197/26] handle="k8s-pod-network.ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:23.235895 containerd[1717]: 2025-11-08 00:07:23.188 [INFO][5057] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:23.235895 containerd[1717]: 2025-11-08 00:07:23.188 [INFO][5057] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.197/26] IPv6=[] ContainerID="ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326" HandleID="k8s-pod-network.ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326" Workload="ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-eth0" Nov 8 00:07:23.236785 containerd[1717]: 2025-11-08 00:07:23.195 [INFO][5034] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326" Namespace="calico-system" Pod="csi-node-driver-8jr45" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"70822f24-312d-4073-b204-5c6b6a26eb84", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"", Pod:"csi-node-driver-8jr45", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1b9ebf29710", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:23.236785 containerd[1717]: 2025-11-08 00:07:23.195 [INFO][5034] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.197/32] ContainerID="ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326" Namespace="calico-system" Pod="csi-node-driver-8jr45" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-eth0" Nov 8 00:07:23.236785 containerd[1717]: 2025-11-08 00:07:23.195 [INFO][5034] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b9ebf29710 ContainerID="ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326" Namespace="calico-system" Pod="csi-node-driver-8jr45" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-eth0" Nov 8 00:07:23.236785 containerd[1717]: 2025-11-08 00:07:23.209 [INFO][5034] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326" Namespace="calico-system" Pod="csi-node-driver-8jr45" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-eth0" Nov 8 00:07:23.236785 containerd[1717]: 2025-11-08 00:07:23.213 
[INFO][5034] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326" Namespace="calico-system" Pod="csi-node-driver-8jr45" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"70822f24-312d-4073-b204-5c6b6a26eb84", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326", Pod:"csi-node-driver-8jr45", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1b9ebf29710", MAC:"96:bc:9c:7a:e9:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:23.236785 containerd[1717]: 2025-11-08 00:07:23.231 [INFO][5034] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326" Namespace="calico-system" Pod="csi-node-driver-8jr45" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-eth0" Nov 8 00:07:23.276817 containerd[1717]: time="2025-11-08T00:07:23.276436982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d7c9d7554-7phdh,Uid:aad8189b-54ce-422e-a68f-46b67abadfe8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57\"" Nov 8 00:07:23.277617 containerd[1717]: time="2025-11-08T00:07:23.276879781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:07:23.277617 containerd[1717]: time="2025-11-08T00:07:23.276956341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:07:23.277617 containerd[1717]: time="2025-11-08T00:07:23.276971381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:07:23.279435 containerd[1717]: time="2025-11-08T00:07:23.279250057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:07:23.281254 containerd[1717]: time="2025-11-08T00:07:23.280213735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:07:23.300546 systemd[1]: Started cri-containerd-ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326.scope - libcontainer container ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326. 
Nov 8 00:07:23.326265 containerd[1717]: time="2025-11-08T00:07:23.326218053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8jr45,Uid:70822f24-312d-4073-b204-5c6b6a26eb84,Namespace:calico-system,Attempt:1,} returns sandbox id \"ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326\"" Nov 8 00:07:23.595571 containerd[1717]: time="2025-11-08T00:07:23.595458011Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:23.599411 containerd[1717]: time="2025-11-08T00:07:23.599293844Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:07:23.599411 containerd[1717]: time="2025-11-08T00:07:23.599366724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:07:23.599588 kubelet[3181]: E1108 00:07:23.599528 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:07:23.599656 kubelet[3181]: E1108 00:07:23.599626 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:07:23.599794 kubelet[3181]: E1108 00:07:23.599771 3181 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container calico-apiserver start failed in pod calico-apiserver-d7c9d7554-7phdh_calico-apiserver(aad8189b-54ce-422e-a68f-46b67abadfe8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:23.599830 kubelet[3181]: E1108 00:07:23.599811 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8" Nov 8 00:07:23.601557 containerd[1717]: time="2025-11-08T00:07:23.601263880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:07:23.829143 kubelet[3181]: E1108 00:07:23.828695 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8" Nov 8 00:07:23.873227 containerd[1717]: time="2025-11-08T00:07:23.873049394Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:23.877071 containerd[1717]: time="2025-11-08T00:07:23.877010947Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:07:23.877220 containerd[1717]: time="2025-11-08T00:07:23.877049067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:07:23.877322 kubelet[3181]: E1108 00:07:23.877283 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:07:23.877617 kubelet[3181]: E1108 00:07:23.877329 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:07:23.877617 kubelet[3181]: E1108 00:07:23.877399 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-8jr45_calico-system(70822f24-312d-4073-b204-5c6b6a26eb84): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:23.879234 containerd[1717]: time="2025-11-08T00:07:23.879175743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:07:24.112063 systemd-networkd[1359]: calic5c6b9bba71: Gained IPv6LL Nov 8 00:07:24.152873 
containerd[1717]: time="2025-11-08T00:07:24.152730053Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:24.156533 containerd[1717]: time="2025-11-08T00:07:24.156420967Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:07:24.156533 containerd[1717]: time="2025-11-08T00:07:24.156496287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:07:24.156709 kubelet[3181]: E1108 00:07:24.156662 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:07:24.156748 kubelet[3181]: E1108 00:07:24.156708 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:07:24.156803 kubelet[3181]: E1108 00:07:24.156774 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-8jr45_calico-system(70822f24-312d-4073-b204-5c6b6a26eb84): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:24.156873 kubelet[3181]: E1108 00:07:24.156821 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84" Nov 8 00:07:24.240140 systemd-networkd[1359]: cali6f95b4d25ba: Gained IPv6LL Nov 8 00:07:24.528877 kubelet[3181]: I1108 00:07:24.528720 3181 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:07:24.560067 systemd-networkd[1359]: cali1b9ebf29710: Gained IPv6LL Nov 8 00:07:24.579822 containerd[1717]: time="2025-11-08T00:07:24.578061732Z" level=info msg="StopPodSandbox for \"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26\"" Nov 8 00:07:24.581727 containerd[1717]: time="2025-11-08T00:07:24.581673645Z" level=info msg="StopPodSandbox for \"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb\"" Nov 8 00:07:24.772307 containerd[1717]: 2025-11-08 00:07:24.709 [INFO][5220] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" Nov 8 
00:07:24.772307 containerd[1717]: 2025-11-08 00:07:24.709 [INFO][5220] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" iface="eth0" netns="/var/run/netns/cni-d8f8465b-622f-d246-aaa7-d0d8b1d447ed" Nov 8 00:07:24.772307 containerd[1717]: 2025-11-08 00:07:24.709 [INFO][5220] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" iface="eth0" netns="/var/run/netns/cni-d8f8465b-622f-d246-aaa7-d0d8b1d447ed" Nov 8 00:07:24.772307 containerd[1717]: 2025-11-08 00:07:24.710 [INFO][5220] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" iface="eth0" netns="/var/run/netns/cni-d8f8465b-622f-d246-aaa7-d0d8b1d447ed" Nov 8 00:07:24.772307 containerd[1717]: 2025-11-08 00:07:24.710 [INFO][5220] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" Nov 8 00:07:24.772307 containerd[1717]: 2025-11-08 00:07:24.710 [INFO][5220] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" Nov 8 00:07:24.772307 containerd[1717]: 2025-11-08 00:07:24.744 [INFO][5253] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" HandleID="k8s-pod-network.b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" Workload="ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-eth0" Nov 8 00:07:24.772307 containerd[1717]: 2025-11-08 00:07:24.744 [INFO][5253] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:24.772307 containerd[1717]: 2025-11-08 00:07:24.744 [INFO][5253] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:07:24.772307 containerd[1717]: 2025-11-08 00:07:24.760 [WARNING][5253] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" HandleID="k8s-pod-network.b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" Workload="ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-eth0" Nov 8 00:07:24.772307 containerd[1717]: 2025-11-08 00:07:24.760 [INFO][5253] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" HandleID="k8s-pod-network.b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" Workload="ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-eth0" Nov 8 00:07:24.772307 containerd[1717]: 2025-11-08 00:07:24.764 [INFO][5253] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:24.772307 containerd[1717]: 2025-11-08 00:07:24.768 [INFO][5220] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" Nov 8 00:07:24.776663 systemd[1]: run-netns-cni\x2dd8f8465b\x2d622f\x2dd246\x2daaa7\x2dd0d8b1d447ed.mount: Deactivated successfully. 
Nov 8 00:07:24.779322 containerd[1717]: time="2025-11-08T00:07:24.778659493Z" level=info msg="TearDown network for sandbox \"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26\" successfully" Nov 8 00:07:24.779322 containerd[1717]: time="2025-11-08T00:07:24.778708933Z" level=info msg="StopPodSandbox for \"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26\" returns successfully" Nov 8 00:07:24.787718 containerd[1717]: time="2025-11-08T00:07:24.786645359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-9wxxn,Uid:fb0c2d00-8a9d-4218-9dbc-6f07fda31565,Namespace:calico-system,Attempt:1,}" Nov 8 00:07:24.797223 containerd[1717]: 2025-11-08 00:07:24.714 [INFO][5221] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" Nov 8 00:07:24.797223 containerd[1717]: 2025-11-08 00:07:24.714 [INFO][5221] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" iface="eth0" netns="/var/run/netns/cni-9b3b7852-1d65-401d-91aa-8b255791c578" Nov 8 00:07:24.797223 containerd[1717]: 2025-11-08 00:07:24.714 [INFO][5221] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" iface="eth0" netns="/var/run/netns/cni-9b3b7852-1d65-401d-91aa-8b255791c578" Nov 8 00:07:24.797223 containerd[1717]: 2025-11-08 00:07:24.715 [INFO][5221] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" iface="eth0" netns="/var/run/netns/cni-9b3b7852-1d65-401d-91aa-8b255791c578" Nov 8 00:07:24.797223 containerd[1717]: 2025-11-08 00:07:24.715 [INFO][5221] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" Nov 8 00:07:24.797223 containerd[1717]: 2025-11-08 00:07:24.715 [INFO][5221] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" Nov 8 00:07:24.797223 containerd[1717]: 2025-11-08 00:07:24.766 [INFO][5258] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" HandleID="k8s-pod-network.fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-eth0" Nov 8 00:07:24.797223 containerd[1717]: 2025-11-08 00:07:24.767 [INFO][5258] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:24.797223 containerd[1717]: 2025-11-08 00:07:24.767 [INFO][5258] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:24.797223 containerd[1717]: 2025-11-08 00:07:24.787 [WARNING][5258] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" HandleID="k8s-pod-network.fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-eth0" Nov 8 00:07:24.797223 containerd[1717]: 2025-11-08 00:07:24.787 [INFO][5258] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" HandleID="k8s-pod-network.fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-eth0" Nov 8 00:07:24.797223 containerd[1717]: 2025-11-08 00:07:24.790 [INFO][5258] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:24.797223 containerd[1717]: 2025-11-08 00:07:24.795 [INFO][5221] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" Nov 8 00:07:24.798471 containerd[1717]: time="2025-11-08T00:07:24.798269378Z" level=info msg="TearDown network for sandbox \"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb\" successfully" Nov 8 00:07:24.798471 containerd[1717]: time="2025-11-08T00:07:24.798314378Z" level=info msg="StopPodSandbox for \"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb\" returns successfully" Nov 8 00:07:24.803633 systemd[1]: run-netns-cni\x2d9b3b7852\x2d1d65\x2d401d\x2d91aa\x2d8b255791c578.mount: Deactivated successfully. 
Nov 8 00:07:24.807816 containerd[1717]: time="2025-11-08T00:07:24.807773041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847b7fbf74-mcdn7,Uid:8dcb36b7-7066-4355-aa27-d1ae27c36df5,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:07:24.848507 kubelet[3181]: E1108 00:07:24.845599 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8" Nov 8 00:07:24.857477 kubelet[3181]: E1108 00:07:24.854411 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84" Nov 8 00:07:25.070267 
systemd-networkd[1359]: cali304e0432b93: Link UP Nov 8 00:07:25.071548 systemd-networkd[1359]: cali304e0432b93: Gained carrier Nov 8 00:07:25.094068 containerd[1717]: 2025-11-08 00:07:24.965 [INFO][5281] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-eth0 calico-apiserver-847b7fbf74- calico-apiserver 8dcb36b7-7066-4355-aa27-d1ae27c36df5 1009 0 2025-11-08 00:06:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:847b7fbf74 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-5561f33395 calico-apiserver-847b7fbf74-mcdn7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali304e0432b93 [] [] }} ContainerID="03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f" Namespace="calico-apiserver" Pod="calico-apiserver-847b7fbf74-mcdn7" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-" Nov 8 00:07:25.094068 containerd[1717]: 2025-11-08 00:07:24.965 [INFO][5281] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f" Namespace="calico-apiserver" Pod="calico-apiserver-847b7fbf74-mcdn7" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-eth0" Nov 8 00:07:25.094068 containerd[1717]: 2025-11-08 00:07:25.002 [INFO][5296] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f" HandleID="k8s-pod-network.03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-eth0" Nov 8 00:07:25.094068 containerd[1717]: 
2025-11-08 00:07:25.002 [INFO][5296] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f" HandleID="k8s-pod-network.03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330760), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-5561f33395", "pod":"calico-apiserver-847b7fbf74-mcdn7", "timestamp":"2025-11-08 00:07:25.002350292 +0000 UTC"}, Hostname:"ci-4081.3.6-n-5561f33395", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:07:25.094068 containerd[1717]: 2025-11-08 00:07:25.002 [INFO][5296] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:25.094068 containerd[1717]: 2025-11-08 00:07:25.002 [INFO][5296] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:07:25.094068 containerd[1717]: 2025-11-08 00:07:25.002 [INFO][5296] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-5561f33395' Nov 8 00:07:25.094068 containerd[1717]: 2025-11-08 00:07:25.021 [INFO][5296] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:25.094068 containerd[1717]: 2025-11-08 00:07:25.026 [INFO][5296] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:25.094068 containerd[1717]: 2025-11-08 00:07:25.030 [INFO][5296] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:25.094068 containerd[1717]: 2025-11-08 00:07:25.032 [INFO][5296] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:25.094068 containerd[1717]: 2025-11-08 00:07:25.038 [INFO][5296] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:25.094068 containerd[1717]: 2025-11-08 00:07:25.038 [INFO][5296] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:25.094068 containerd[1717]: 2025-11-08 00:07:25.040 [INFO][5296] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f Nov 8 00:07:25.094068 containerd[1717]: 2025-11-08 00:07:25.048 [INFO][5296] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:25.094068 containerd[1717]: 2025-11-08 00:07:25.060 [INFO][5296] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.38.198/26] block=192.168.38.192/26 handle="k8s-pod-network.03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:25.094068 containerd[1717]: 2025-11-08 00:07:25.061 [INFO][5296] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.198/26] handle="k8s-pod-network.03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:25.094068 containerd[1717]: 2025-11-08 00:07:25.061 [INFO][5296] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:25.094068 containerd[1717]: 2025-11-08 00:07:25.061 [INFO][5296] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.198/26] IPv6=[] ContainerID="03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f" HandleID="k8s-pod-network.03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-eth0" Nov 8 00:07:25.094696 containerd[1717]: 2025-11-08 00:07:25.065 [INFO][5281] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f" Namespace="calico-apiserver" Pod="calico-apiserver-847b7fbf74-mcdn7" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-eth0", GenerateName:"calico-apiserver-847b7fbf74-", Namespace:"calico-apiserver", SelfLink:"", UID:"8dcb36b7-7066-4355-aa27-d1ae27c36df5", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"847b7fbf74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"", Pod:"calico-apiserver-847b7fbf74-mcdn7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali304e0432b93", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:25.094696 containerd[1717]: 2025-11-08 00:07:25.065 [INFO][5281] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.198/32] ContainerID="03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f" Namespace="calico-apiserver" Pod="calico-apiserver-847b7fbf74-mcdn7" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-eth0" Nov 8 00:07:25.094696 containerd[1717]: 2025-11-08 00:07:25.065 [INFO][5281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali304e0432b93 ContainerID="03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f" Namespace="calico-apiserver" Pod="calico-apiserver-847b7fbf74-mcdn7" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-eth0" Nov 8 00:07:25.094696 containerd[1717]: 2025-11-08 00:07:25.072 [INFO][5281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f" Namespace="calico-apiserver" Pod="calico-apiserver-847b7fbf74-mcdn7" 
WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-eth0" Nov 8 00:07:25.094696 containerd[1717]: 2025-11-08 00:07:25.074 [INFO][5281] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f" Namespace="calico-apiserver" Pod="calico-apiserver-847b7fbf74-mcdn7" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-eth0", GenerateName:"calico-apiserver-847b7fbf74-", Namespace:"calico-apiserver", SelfLink:"", UID:"8dcb36b7-7066-4355-aa27-d1ae27c36df5", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847b7fbf74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f", Pod:"calico-apiserver-847b7fbf74-mcdn7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali304e0432b93", MAC:"ca:7d:fe:07:71:b9", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:25.094696 containerd[1717]: 2025-11-08 00:07:25.090 [INFO][5281] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f" Namespace="calico-apiserver" Pod="calico-apiserver-847b7fbf74-mcdn7" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-eth0" Nov 8 00:07:25.116319 containerd[1717]: time="2025-11-08T00:07:25.116212809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:07:25.116319 containerd[1717]: time="2025-11-08T00:07:25.116277249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:07:25.116557 containerd[1717]: time="2025-11-08T00:07:25.116304328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:07:25.116557 containerd[1717]: time="2025-11-08T00:07:25.116398768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:07:25.139817 systemd[1]: Started cri-containerd-03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f.scope - libcontainer container 03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f. 
Nov 8 00:07:25.173737 systemd-networkd[1359]: cali6cd9d430337: Link UP Nov 8 00:07:25.174597 systemd-networkd[1359]: cali6cd9d430337: Gained carrier Nov 8 00:07:25.207918 containerd[1717]: time="2025-11-08T00:07:25.207873845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847b7fbf74-mcdn7,Uid:8dcb36b7-7066-4355-aa27-d1ae27c36df5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f\"" Nov 8 00:07:25.209960 containerd[1717]: 2025-11-08 00:07:24.961 [INFO][5272] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-eth0 goldmane-7c778bb748- calico-system fb0c2d00-8a9d-4218-9dbc-6f07fda31565 1008 0 2025-11-08 00:06:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-5561f33395 goldmane-7c778bb748-9wxxn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6cd9d430337 [] [] }} ContainerID="482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa" Namespace="calico-system" Pod="goldmane-7c778bb748-9wxxn" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-" Nov 8 00:07:25.209960 containerd[1717]: 2025-11-08 00:07:24.962 [INFO][5272] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa" Namespace="calico-system" Pod="goldmane-7c778bb748-9wxxn" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-eth0" Nov 8 00:07:25.209960 containerd[1717]: 2025-11-08 00:07:25.014 [INFO][5298] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa" HandleID="k8s-pod-network.482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa" Workload="ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-eth0" Nov 8 00:07:25.209960 containerd[1717]: 2025-11-08 00:07:25.015 [INFO][5298] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa" HandleID="k8s-pod-network.482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa" Workload="ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024af80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-5561f33395", "pod":"goldmane-7c778bb748-9wxxn", "timestamp":"2025-11-08 00:07:25.01477895 +0000 UTC"}, Hostname:"ci-4081.3.6-n-5561f33395", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:07:25.209960 containerd[1717]: 2025-11-08 00:07:25.015 [INFO][5298] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:25.209960 containerd[1717]: 2025-11-08 00:07:25.061 [INFO][5298] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:07:25.209960 containerd[1717]: 2025-11-08 00:07:25.061 [INFO][5298] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-5561f33395' Nov 8 00:07:25.209960 containerd[1717]: 2025-11-08 00:07:25.123 [INFO][5298] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:25.209960 containerd[1717]: 2025-11-08 00:07:25.129 [INFO][5298] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:25.209960 containerd[1717]: 2025-11-08 00:07:25.142 [INFO][5298] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:25.209960 containerd[1717]: 2025-11-08 00:07:25.145 [INFO][5298] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:25.209960 containerd[1717]: 2025-11-08 00:07:25.148 [INFO][5298] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:25.209960 containerd[1717]: 2025-11-08 00:07:25.149 [INFO][5298] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:25.209960 containerd[1717]: 2025-11-08 00:07:25.151 [INFO][5298] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa Nov 8 00:07:25.209960 containerd[1717]: 2025-11-08 00:07:25.156 [INFO][5298] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:25.209960 containerd[1717]: 2025-11-08 00:07:25.166 [INFO][5298] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.38.199/26] block=192.168.38.192/26 handle="k8s-pod-network.482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:25.209960 containerd[1717]: 2025-11-08 00:07:25.166 [INFO][5298] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.199/26] handle="k8s-pod-network.482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:25.209960 containerd[1717]: 2025-11-08 00:07:25.166 [INFO][5298] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:25.209960 containerd[1717]: 2025-11-08 00:07:25.167 [INFO][5298] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.199/26] IPv6=[] ContainerID="482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa" HandleID="k8s-pod-network.482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa" Workload="ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-eth0" Nov 8 00:07:25.210519 containerd[1717]: 2025-11-08 00:07:25.169 [INFO][5272] cni-plugin/k8s.go 418: Populated endpoint ContainerID="482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa" Namespace="calico-system" Pod="goldmane-7c778bb748-9wxxn" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"fb0c2d00-8a9d-4218-9dbc-6f07fda31565", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"", Pod:"goldmane-7c778bb748-9wxxn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6cd9d430337", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:25.210519 containerd[1717]: 2025-11-08 00:07:25.169 [INFO][5272] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.199/32] ContainerID="482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa" Namespace="calico-system" Pod="goldmane-7c778bb748-9wxxn" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-eth0" Nov 8 00:07:25.210519 containerd[1717]: 2025-11-08 00:07:25.169 [INFO][5272] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6cd9d430337 ContainerID="482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa" Namespace="calico-system" Pod="goldmane-7c778bb748-9wxxn" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-eth0" Nov 8 00:07:25.210519 containerd[1717]: 2025-11-08 00:07:25.174 [INFO][5272] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa" Namespace="calico-system" Pod="goldmane-7c778bb748-9wxxn" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-eth0" Nov 8 00:07:25.210519 containerd[1717]: 2025-11-08 00:07:25.175 [INFO][5272] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa" Namespace="calico-system" Pod="goldmane-7c778bb748-9wxxn" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"fb0c2d00-8a9d-4218-9dbc-6f07fda31565", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa", Pod:"goldmane-7c778bb748-9wxxn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6cd9d430337", MAC:"ba:fe:8c:b6:58:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:25.210519 containerd[1717]: 2025-11-08 00:07:25.199 [INFO][5272] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa" Namespace="calico-system" Pod="goldmane-7c778bb748-9wxxn" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-eth0" Nov 8 00:07:25.215130 containerd[1717]: time="2025-11-08T00:07:25.213166835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:07:25.260684 containerd[1717]: time="2025-11-08T00:07:25.260389791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:07:25.260684 containerd[1717]: time="2025-11-08T00:07:25.260451710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:07:25.260684 containerd[1717]: time="2025-11-08T00:07:25.260474590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:07:25.260684 containerd[1717]: time="2025-11-08T00:07:25.260575710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:07:25.287572 systemd[1]: Started cri-containerd-482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa.scope - libcontainer container 482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa. 
Nov 8 00:07:25.341781 containerd[1717]: time="2025-11-08T00:07:25.341730045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-9wxxn,Uid:fb0c2d00-8a9d-4218-9dbc-6f07fda31565,Namespace:calico-system,Attempt:1,} returns sandbox id \"482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa\"" Nov 8 00:07:25.517363 containerd[1717]: time="2025-11-08T00:07:25.515748893Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:25.522495 containerd[1717]: time="2025-11-08T00:07:25.522342282Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:07:25.522495 containerd[1717]: time="2025-11-08T00:07:25.522420362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:07:25.523355 kubelet[3181]: E1108 00:07:25.522867 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:07:25.523355 kubelet[3181]: E1108 00:07:25.522916 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:07:25.523355 kubelet[3181]: E1108 00:07:25.523106 3181 kuberuntime_manager.go:1449] "Unhandled 
Error" err="container calico-apiserver start failed in pod calico-apiserver-847b7fbf74-mcdn7_calico-apiserver(8dcb36b7-7066-4355-aa27-d1ae27c36df5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:25.523355 kubelet[3181]: E1108 00:07:25.523141 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7" podUID="8dcb36b7-7066-4355-aa27-d1ae27c36df5" Nov 8 00:07:25.526087 containerd[1717]: time="2025-11-08T00:07:25.524074959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:07:25.578100 containerd[1717]: time="2025-11-08T00:07:25.577409223Z" level=info msg="StopPodSandbox for \"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d\"" Nov 8 00:07:25.578100 containerd[1717]: time="2025-11-08T00:07:25.577789302Z" level=info msg="StopPodSandbox for \"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6\"" Nov 8 00:07:25.715400 containerd[1717]: 2025-11-08 00:07:25.651 [INFO][5427] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" Nov 8 00:07:25.715400 containerd[1717]: 2025-11-08 00:07:25.652 [INFO][5427] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" iface="eth0" netns="/var/run/netns/cni-46b98e79-f7eb-0c3c-089b-54d1900c3e08" Nov 8 00:07:25.715400 containerd[1717]: 2025-11-08 00:07:25.653 [INFO][5427] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" iface="eth0" netns="/var/run/netns/cni-46b98e79-f7eb-0c3c-089b-54d1900c3e08" Nov 8 00:07:25.715400 containerd[1717]: 2025-11-08 00:07:25.656 [INFO][5427] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" iface="eth0" netns="/var/run/netns/cni-46b98e79-f7eb-0c3c-089b-54d1900c3e08" Nov 8 00:07:25.715400 containerd[1717]: 2025-11-08 00:07:25.656 [INFO][5427] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" Nov 8 00:07:25.715400 containerd[1717]: 2025-11-08 00:07:25.656 [INFO][5427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" Nov 8 00:07:25.715400 containerd[1717]: 2025-11-08 00:07:25.691 [INFO][5440] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" HandleID="k8s-pod-network.45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-eth0" Nov 8 00:07:25.715400 containerd[1717]: 2025-11-08 00:07:25.692 [INFO][5440] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:25.715400 containerd[1717]: 2025-11-08 00:07:25.692 [INFO][5440] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:07:25.715400 containerd[1717]: 2025-11-08 00:07:25.708 [WARNING][5440] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" HandleID="k8s-pod-network.45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-eth0" Nov 8 00:07:25.715400 containerd[1717]: 2025-11-08 00:07:25.708 [INFO][5440] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" HandleID="k8s-pod-network.45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-eth0" Nov 8 00:07:25.715400 containerd[1717]: 2025-11-08 00:07:25.709 [INFO][5440] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:25.715400 containerd[1717]: 2025-11-08 00:07:25.712 [INFO][5427] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" Nov 8 00:07:25.718104 containerd[1717]: time="2025-11-08T00:07:25.718021251Z" level=info msg="TearDown network for sandbox \"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6\" successfully" Nov 8 00:07:25.718104 containerd[1717]: time="2025-11-08T00:07:25.718058571Z" level=info msg="StopPodSandbox for \"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6\" returns successfully" Nov 8 00:07:25.720834 systemd[1]: run-netns-cni\x2d46b98e79\x2df7eb\x2d0c3c\x2d089b\x2d54d1900c3e08.mount: Deactivated successfully. 
Nov 8 00:07:25.727195 containerd[1717]: time="2025-11-08T00:07:25.727156755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74489dd677-kvxft,Uid:41d4b3ef-ef6f-40aa-890a-556514760a53,Namespace:calico-system,Attempt:1,}" Nov 8 00:07:25.734006 containerd[1717]: 2025-11-08 00:07:25.654 [INFO][5428] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" Nov 8 00:07:25.734006 containerd[1717]: 2025-11-08 00:07:25.656 [INFO][5428] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" iface="eth0" netns="/var/run/netns/cni-7bbb7c70-7dfa-e3ce-e984-a30e216cfb3b" Nov 8 00:07:25.734006 containerd[1717]: 2025-11-08 00:07:25.656 [INFO][5428] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" iface="eth0" netns="/var/run/netns/cni-7bbb7c70-7dfa-e3ce-e984-a30e216cfb3b" Nov 8 00:07:25.734006 containerd[1717]: 2025-11-08 00:07:25.658 [INFO][5428] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" iface="eth0" netns="/var/run/netns/cni-7bbb7c70-7dfa-e3ce-e984-a30e216cfb3b" Nov 8 00:07:25.734006 containerd[1717]: 2025-11-08 00:07:25.658 [INFO][5428] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" Nov 8 00:07:25.734006 containerd[1717]: 2025-11-08 00:07:25.658 [INFO][5428] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" Nov 8 00:07:25.734006 containerd[1717]: 2025-11-08 00:07:25.692 [INFO][5445] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" HandleID="k8s-pod-network.8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-eth0" Nov 8 00:07:25.734006 containerd[1717]: 2025-11-08 00:07:25.692 [INFO][5445] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:25.734006 containerd[1717]: 2025-11-08 00:07:25.709 [INFO][5445] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:25.734006 containerd[1717]: 2025-11-08 00:07:25.725 [WARNING][5445] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" HandleID="k8s-pod-network.8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-eth0" Nov 8 00:07:25.734006 containerd[1717]: 2025-11-08 00:07:25.725 [INFO][5445] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" HandleID="k8s-pod-network.8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-eth0" Nov 8 00:07:25.734006 containerd[1717]: 2025-11-08 00:07:25.728 [INFO][5445] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:25.734006 containerd[1717]: 2025-11-08 00:07:25.730 [INFO][5428] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" Nov 8 00:07:25.734006 containerd[1717]: time="2025-11-08T00:07:25.732767545Z" level=info msg="TearDown network for sandbox \"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d\" successfully" Nov 8 00:07:25.734006 containerd[1717]: time="2025-11-08T00:07:25.732795225Z" level=info msg="StopPodSandbox for \"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d\" returns successfully" Nov 8 00:07:25.736424 systemd[1]: run-netns-cni\x2d7bbb7c70\x2d7dfa\x2de3ce\x2de984\x2da30e216cfb3b.mount: Deactivated successfully. 
Nov 8 00:07:25.740028 containerd[1717]: time="2025-11-08T00:07:25.739981052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d7c9d7554-7cc89,Uid:1eab03fd-9695-41da-8445-49749eaa2864,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:07:25.801760 containerd[1717]: time="2025-11-08T00:07:25.801721582Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:25.806035 containerd[1717]: time="2025-11-08T00:07:25.805764254Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:07:25.806035 containerd[1717]: time="2025-11-08T00:07:25.805913454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:07:25.806827 kubelet[3181]: E1108 00:07:25.806371 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:07:25.806909 kubelet[3181]: E1108 00:07:25.806839 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:07:25.807032 kubelet[3181]: E1108 00:07:25.806924 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod 
goldmane-7c778bb748-9wxxn_calico-system(fb0c2d00-8a9d-4218-9dbc-6f07fda31565): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:25.807069 kubelet[3181]: E1108 00:07:25.807024 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9wxxn" podUID="fb0c2d00-8a9d-4218-9dbc-6f07fda31565" Nov 8 00:07:25.855722 kubelet[3181]: E1108 00:07:25.852758 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9wxxn" podUID="fb0c2d00-8a9d-4218-9dbc-6f07fda31565" Nov 8 00:07:25.860000 kubelet[3181]: E1108 00:07:25.859944 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7" podUID="8dcb36b7-7066-4355-aa27-d1ae27c36df5" Nov 8 00:07:26.000297 systemd-networkd[1359]: cali6cbcd481b17: Link UP Nov 8 00:07:26.001544 systemd-networkd[1359]: cali6cbcd481b17: Gained carrier Nov 8 00:07:26.030253 containerd[1717]: 2025-11-08 00:07:25.871 [INFO][5453] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-eth0 calico-kube-controllers-74489dd677- calico-system 41d4b3ef-ef6f-40aa-890a-556514760a53 1035 0 2025-11-08 00:06:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74489dd677 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-5561f33395 calico-kube-controllers-74489dd677-kvxft eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6cbcd481b17 [] [] }} ContainerID="62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4" Namespace="calico-system" Pod="calico-kube-controllers-74489dd677-kvxft" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-" Nov 8 00:07:26.030253 containerd[1717]: 2025-11-08 00:07:25.874 [INFO][5453] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4" Namespace="calico-system" Pod="calico-kube-controllers-74489dd677-kvxft" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-eth0" Nov 8 00:07:26.030253 containerd[1717]: 2025-11-08 00:07:25.946 [INFO][5480] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4" HandleID="k8s-pod-network.62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-eth0" Nov 8 00:07:26.030253 containerd[1717]: 2025-11-08 00:07:25.946 [INFO][5480] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4" HandleID="k8s-pod-network.62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3800), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-5561f33395", "pod":"calico-kube-controllers-74489dd677-kvxft", "timestamp":"2025-11-08 00:07:25.946130603 +0000 UTC"}, Hostname:"ci-4081.3.6-n-5561f33395", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:07:26.030253 containerd[1717]: 2025-11-08 00:07:25.946 [INFO][5480] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:26.030253 containerd[1717]: 2025-11-08 00:07:25.946 [INFO][5480] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:07:26.030253 containerd[1717]: 2025-11-08 00:07:25.946 [INFO][5480] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-5561f33395' Nov 8 00:07:26.030253 containerd[1717]: 2025-11-08 00:07:25.957 [INFO][5480] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:26.030253 containerd[1717]: 2025-11-08 00:07:25.961 [INFO][5480] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:26.030253 containerd[1717]: 2025-11-08 00:07:25.966 [INFO][5480] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:26.030253 containerd[1717]: 2025-11-08 00:07:25.968 [INFO][5480] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:26.030253 containerd[1717]: 2025-11-08 00:07:25.971 [INFO][5480] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:26.030253 containerd[1717]: 2025-11-08 00:07:25.971 [INFO][5480] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:26.030253 containerd[1717]: 2025-11-08 00:07:25.973 [INFO][5480] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4 Nov 8 00:07:26.030253 containerd[1717]: 2025-11-08 00:07:25.981 [INFO][5480] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:26.030253 containerd[1717]: 2025-11-08 00:07:25.993 [INFO][5480] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.38.200/26] block=192.168.38.192/26 handle="k8s-pod-network.62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:26.030253 containerd[1717]: 2025-11-08 00:07:25.993 [INFO][5480] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.200/26] handle="k8s-pod-network.62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:26.030253 containerd[1717]: 2025-11-08 00:07:25.993 [INFO][5480] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:26.030253 containerd[1717]: 2025-11-08 00:07:25.993 [INFO][5480] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.200/26] IPv6=[] ContainerID="62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4" HandleID="k8s-pod-network.62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-eth0" Nov 8 00:07:26.030830 containerd[1717]: 2025-11-08 00:07:25.997 [INFO][5453] cni-plugin/k8s.go 418: Populated endpoint ContainerID="62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4" Namespace="calico-system" Pod="calico-kube-controllers-74489dd677-kvxft" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-eth0", GenerateName:"calico-kube-controllers-74489dd677-", Namespace:"calico-system", SelfLink:"", UID:"41d4b3ef-ef6f-40aa-890a-556514760a53", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74489dd677", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"", Pod:"calico-kube-controllers-74489dd677-kvxft", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6cbcd481b17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:26.030830 containerd[1717]: 2025-11-08 00:07:25.997 [INFO][5453] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.200/32] ContainerID="62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4" Namespace="calico-system" Pod="calico-kube-controllers-74489dd677-kvxft" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-eth0" Nov 8 00:07:26.030830 containerd[1717]: 2025-11-08 00:07:25.997 [INFO][5453] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6cbcd481b17 ContainerID="62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4" Namespace="calico-system" Pod="calico-kube-controllers-74489dd677-kvxft" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-eth0" Nov 8 00:07:26.030830 containerd[1717]: 2025-11-08 00:07:26.004 [INFO][5453] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4" Namespace="calico-system" Pod="calico-kube-controllers-74489dd677-kvxft" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-eth0" Nov 8 00:07:26.030830 containerd[1717]: 2025-11-08 00:07:26.006 [INFO][5453] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4" Namespace="calico-system" Pod="calico-kube-controllers-74489dd677-kvxft" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-eth0", GenerateName:"calico-kube-controllers-74489dd677-", Namespace:"calico-system", SelfLink:"", UID:"41d4b3ef-ef6f-40aa-890a-556514760a53", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74489dd677", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4", Pod:"calico-kube-controllers-74489dd677-kvxft", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.200/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6cbcd481b17", MAC:"2e:73:09:7e:cd:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:26.030830 containerd[1717]: 2025-11-08 00:07:26.026 [INFO][5453] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4" Namespace="calico-system" Pod="calico-kube-controllers-74489dd677-kvxft" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-eth0" Nov 8 00:07:26.057541 containerd[1717]: time="2025-11-08T00:07:26.056860285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:07:26.057541 containerd[1717]: time="2025-11-08T00:07:26.056918165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:07:26.057541 containerd[1717]: time="2025-11-08T00:07:26.056958525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:07:26.057541 containerd[1717]: time="2025-11-08T00:07:26.057065765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:07:26.079625 systemd[1]: Started cri-containerd-62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4.scope - libcontainer container 62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4. 
Nov 8 00:07:26.131889 systemd-networkd[1359]: cali917d6e0f39d: Link UP Nov 8 00:07:26.133820 systemd-networkd[1359]: cali917d6e0f39d: Gained carrier Nov 8 00:07:26.140683 containerd[1717]: time="2025-11-08T00:07:26.140617775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74489dd677-kvxft,Uid:41d4b3ef-ef6f-40aa-890a-556514760a53,Namespace:calico-system,Attempt:1,} returns sandbox id \"62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4\"" Nov 8 00:07:26.146993 containerd[1717]: time="2025-11-08T00:07:26.146701244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:07:26.156424 containerd[1717]: 2025-11-08 00:07:25.874 [INFO][5457] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-eth0 calico-apiserver-d7c9d7554- calico-apiserver 1eab03fd-9695-41da-8445-49749eaa2864 1036 0 2025-11-08 00:06:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d7c9d7554 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-5561f33395 calico-apiserver-d7c9d7554-7cc89 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali917d6e0f39d [] [] }} ContainerID="b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b" Namespace="calico-apiserver" Pod="calico-apiserver-d7c9d7554-7cc89" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-" Nov 8 00:07:26.156424 containerd[1717]: 2025-11-08 00:07:25.875 [INFO][5457] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b" Namespace="calico-apiserver" Pod="calico-apiserver-d7c9d7554-7cc89" 
WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-eth0" Nov 8 00:07:26.156424 containerd[1717]: 2025-11-08 00:07:25.952 [INFO][5485] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b" HandleID="k8s-pod-network.b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-eth0" Nov 8 00:07:26.156424 containerd[1717]: 2025-11-08 00:07:25.952 [INFO][5485] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b" HandleID="k8s-pod-network.b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-5561f33395", "pod":"calico-apiserver-d7c9d7554-7cc89", "timestamp":"2025-11-08 00:07:25.952526552 +0000 UTC"}, Hostname:"ci-4081.3.6-n-5561f33395", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:07:26.156424 containerd[1717]: 2025-11-08 00:07:25.952 [INFO][5485] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:26.156424 containerd[1717]: 2025-11-08 00:07:25.993 [INFO][5485] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:07:26.156424 containerd[1717]: 2025-11-08 00:07:25.993 [INFO][5485] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-5561f33395' Nov 8 00:07:26.156424 containerd[1717]: 2025-11-08 00:07:26.063 [INFO][5485] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:26.156424 containerd[1717]: 2025-11-08 00:07:26.077 [INFO][5485] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:26.156424 containerd[1717]: 2025-11-08 00:07:26.084 [INFO][5485] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:26.156424 containerd[1717]: 2025-11-08 00:07:26.087 [INFO][5485] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:26.156424 containerd[1717]: 2025-11-08 00:07:26.094 [INFO][5485] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:26.156424 containerd[1717]: 2025-11-08 00:07:26.094 [INFO][5485] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:26.156424 containerd[1717]: 2025-11-08 00:07:26.097 [INFO][5485] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b Nov 8 00:07:26.156424 containerd[1717]: 2025-11-08 00:07:26.107 [INFO][5485] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:26.156424 containerd[1717]: 2025-11-08 00:07:26.121 [INFO][5485] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.38.201/26] block=192.168.38.192/26 handle="k8s-pod-network.b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:26.156424 containerd[1717]: 2025-11-08 00:07:26.122 [INFO][5485] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.201/26] handle="k8s-pod-network.b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b" host="ci-4081.3.6-n-5561f33395" Nov 8 00:07:26.156424 containerd[1717]: 2025-11-08 00:07:26.122 [INFO][5485] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:26.156424 containerd[1717]: 2025-11-08 00:07:26.122 [INFO][5485] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.201/26] IPv6=[] ContainerID="b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b" HandleID="k8s-pod-network.b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-eth0" Nov 8 00:07:26.157779 containerd[1717]: 2025-11-08 00:07:26.124 [INFO][5457] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b" Namespace="calico-apiserver" Pod="calico-apiserver-d7c9d7554-7cc89" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-eth0", GenerateName:"calico-apiserver-d7c9d7554-", Namespace:"calico-apiserver", SelfLink:"", UID:"1eab03fd-9695-41da-8445-49749eaa2864", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"d7c9d7554", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"", Pod:"calico-apiserver-d7c9d7554-7cc89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali917d6e0f39d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:26.157779 containerd[1717]: 2025-11-08 00:07:26.125 [INFO][5457] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.201/32] ContainerID="b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b" Namespace="calico-apiserver" Pod="calico-apiserver-d7c9d7554-7cc89" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-eth0" Nov 8 00:07:26.157779 containerd[1717]: 2025-11-08 00:07:26.125 [INFO][5457] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali917d6e0f39d ContainerID="b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b" Namespace="calico-apiserver" Pod="calico-apiserver-d7c9d7554-7cc89" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-eth0" Nov 8 00:07:26.157779 containerd[1717]: 2025-11-08 00:07:26.130 [INFO][5457] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b" Namespace="calico-apiserver" Pod="calico-apiserver-d7c9d7554-7cc89" 
WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-eth0" Nov 8 00:07:26.157779 containerd[1717]: 2025-11-08 00:07:26.135 [INFO][5457] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b" Namespace="calico-apiserver" Pod="calico-apiserver-d7c9d7554-7cc89" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-eth0", GenerateName:"calico-apiserver-d7c9d7554-", Namespace:"calico-apiserver", SelfLink:"", UID:"1eab03fd-9695-41da-8445-49749eaa2864", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d7c9d7554", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b", Pod:"calico-apiserver-d7c9d7554-7cc89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali917d6e0f39d", MAC:"06:00:e3:ad:33:2d", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:26.157779 containerd[1717]: 2025-11-08 00:07:26.153 [INFO][5457] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b" Namespace="calico-apiserver" Pod="calico-apiserver-d7c9d7554-7cc89" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-eth0" Nov 8 00:07:26.186877 containerd[1717]: time="2025-11-08T00:07:26.183488298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:07:26.186877 containerd[1717]: time="2025-11-08T00:07:26.183539978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:07:26.186877 containerd[1717]: time="2025-11-08T00:07:26.183564658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:07:26.186877 containerd[1717]: time="2025-11-08T00:07:26.183648418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:07:26.208829 systemd[1]: Started cri-containerd-b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b.scope - libcontainer container b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b. 
Nov 8 00:07:26.246115 containerd[1717]: time="2025-11-08T00:07:26.246070866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d7c9d7554-7cc89,Uid:1eab03fd-9695-41da-8445-49749eaa2864,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b\"" Nov 8 00:07:26.288157 systemd-networkd[1359]: cali6cd9d430337: Gained IPv6LL Nov 8 00:07:26.502383 containerd[1717]: time="2025-11-08T00:07:26.502290648Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:26.506687 containerd[1717]: time="2025-11-08T00:07:26.506630480Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:07:26.506800 containerd[1717]: time="2025-11-08T00:07:26.506758040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:07:26.507009 kubelet[3181]: E1108 00:07:26.506962 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:07:26.507058 kubelet[3181]: E1108 00:07:26.507011 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:07:26.507202 kubelet[3181]: E1108 00:07:26.507175 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-74489dd677-kvxft_calico-system(41d4b3ef-ef6f-40aa-890a-556514760a53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:26.507273 kubelet[3181]: E1108 00:07:26.507220 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74489dd677-kvxft" podUID="41d4b3ef-ef6f-40aa-890a-556514760a53" Nov 8 00:07:26.508443 containerd[1717]: time="2025-11-08T00:07:26.508412157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:07:26.800172 systemd-networkd[1359]: cali304e0432b93: Gained IPv6LL Nov 8 00:07:26.800875 containerd[1717]: time="2025-11-08T00:07:26.800774874Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:26.804692 containerd[1717]: time="2025-11-08T00:07:26.804638147Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:07:26.805061 containerd[1717]: time="2025-11-08T00:07:26.804750387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:07:26.805124 kubelet[3181]: E1108 00:07:26.804905 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:07:26.805124 kubelet[3181]: E1108 00:07:26.804970 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:07:26.805124 kubelet[3181]: E1108 00:07:26.805053 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-d7c9d7554-7cc89_calico-apiserver(1eab03fd-9695-41da-8445-49749eaa2864): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:26.805124 kubelet[3181]: E1108 00:07:26.805092 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864" Nov 8 00:07:26.863708 kubelet[3181]: E1108 00:07:26.863288 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864" Nov 8 00:07:26.867277 kubelet[3181]: E1108 00:07:26.867148 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7" podUID="8dcb36b7-7066-4355-aa27-d1ae27c36df5" Nov 8 00:07:26.867277 kubelet[3181]: E1108 00:07:26.867222 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found\"" pod="calico-system/calico-kube-controllers-74489dd677-kvxft" podUID="41d4b3ef-ef6f-40aa-890a-556514760a53" Nov 8 00:07:26.867867 kubelet[3181]: E1108 00:07:26.867734 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9wxxn" podUID="fb0c2d00-8a9d-4218-9dbc-6f07fda31565" Nov 8 00:07:27.870686 kubelet[3181]: E1108 00:07:27.870448 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864" Nov 8 00:07:27.870686 kubelet[3181]: E1108 00:07:27.870497 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74489dd677-kvxft" 
podUID="41d4b3ef-ef6f-40aa-890a-556514760a53" Nov 8 00:07:27.952113 systemd-networkd[1359]: cali917d6e0f39d: Gained IPv6LL Nov 8 00:07:27.953071 systemd-networkd[1359]: cali6cbcd481b17: Gained IPv6LL Nov 8 00:07:30.577549 containerd[1717]: time="2025-11-08T00:07:30.576685018Z" level=info msg="StopPodSandbox for \"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6\"" Nov 8 00:07:30.588487 containerd[1717]: time="2025-11-08T00:07:30.588214957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:07:30.709972 containerd[1717]: 2025-11-08 00:07:30.636 [WARNING][5613] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-eth0", GenerateName:"calico-kube-controllers-74489dd677-", Namespace:"calico-system", SelfLink:"", UID:"41d4b3ef-ef6f-40aa-890a-556514760a53", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74489dd677", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4", 
Pod:"calico-kube-controllers-74489dd677-kvxft", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6cbcd481b17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:30.709972 containerd[1717]: 2025-11-08 00:07:30.636 [INFO][5613] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" Nov 8 00:07:30.709972 containerd[1717]: 2025-11-08 00:07:30.636 [INFO][5613] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" iface="eth0" netns="" Nov 8 00:07:30.709972 containerd[1717]: 2025-11-08 00:07:30.636 [INFO][5613] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" Nov 8 00:07:30.709972 containerd[1717]: 2025-11-08 00:07:30.636 [INFO][5613] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" Nov 8 00:07:30.709972 containerd[1717]: 2025-11-08 00:07:30.683 [INFO][5620] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" HandleID="k8s-pod-network.45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-eth0" Nov 8 00:07:30.709972 containerd[1717]: 2025-11-08 00:07:30.684 [INFO][5620] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:07:30.709972 containerd[1717]: 2025-11-08 00:07:30.684 [INFO][5620] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:30.709972 containerd[1717]: 2025-11-08 00:07:30.701 [WARNING][5620] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" HandleID="k8s-pod-network.45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-eth0" Nov 8 00:07:30.709972 containerd[1717]: 2025-11-08 00:07:30.701 [INFO][5620] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" HandleID="k8s-pod-network.45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-eth0" Nov 8 00:07:30.709972 containerd[1717]: 2025-11-08 00:07:30.705 [INFO][5620] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:30.709972 containerd[1717]: 2025-11-08 00:07:30.707 [INFO][5613] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" Nov 8 00:07:30.710876 containerd[1717]: time="2025-11-08T00:07:30.709530821Z" level=info msg="TearDown network for sandbox \"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6\" successfully" Nov 8 00:07:30.710917 containerd[1717]: time="2025-11-08T00:07:30.710880178Z" level=info msg="StopPodSandbox for \"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6\" returns successfully" Nov 8 00:07:30.711580 containerd[1717]: time="2025-11-08T00:07:30.711550657Z" level=info msg="RemovePodSandbox for \"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6\"" Nov 8 00:07:30.712854 containerd[1717]: time="2025-11-08T00:07:30.712818655Z" level=info msg="Forcibly stopping sandbox \"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6\"" Nov 8 00:07:30.803810 containerd[1717]: 2025-11-08 00:07:30.748 [WARNING][5634] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-eth0", GenerateName:"calico-kube-controllers-74489dd677-", Namespace:"calico-system", SelfLink:"", UID:"41d4b3ef-ef6f-40aa-890a-556514760a53", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74489dd677", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"62d710e245c87d7ff5d9edde8dbddb94f0dda434f9acbe93626775343e5127a4", Pod:"calico-kube-controllers-74489dd677-kvxft", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6cbcd481b17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:30.803810 containerd[1717]: 2025-11-08 00:07:30.749 [INFO][5634] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" Nov 8 00:07:30.803810 containerd[1717]: 2025-11-08 00:07:30.749 [INFO][5634] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" iface="eth0" netns="" Nov 8 00:07:30.803810 containerd[1717]: 2025-11-08 00:07:30.749 [INFO][5634] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" Nov 8 00:07:30.803810 containerd[1717]: 2025-11-08 00:07:30.749 [INFO][5634] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" Nov 8 00:07:30.803810 containerd[1717]: 2025-11-08 00:07:30.788 [INFO][5641] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" HandleID="k8s-pod-network.45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-eth0" Nov 8 00:07:30.803810 containerd[1717]: 2025-11-08 00:07:30.788 [INFO][5641] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:30.803810 containerd[1717]: 2025-11-08 00:07:30.789 [INFO][5641] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:30.803810 containerd[1717]: 2025-11-08 00:07:30.797 [WARNING][5641] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" HandleID="k8s-pod-network.45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-eth0" Nov 8 00:07:30.803810 containerd[1717]: 2025-11-08 00:07:30.797 [INFO][5641] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" HandleID="k8s-pod-network.45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--kube--controllers--74489dd677--kvxft-eth0" Nov 8 00:07:30.803810 containerd[1717]: 2025-11-08 00:07:30.799 [INFO][5641] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:30.803810 containerd[1717]: 2025-11-08 00:07:30.801 [INFO][5634] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6" Nov 8 00:07:30.803810 containerd[1717]: time="2025-11-08T00:07:30.803598453Z" level=info msg="TearDown network for sandbox \"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6\" successfully" Nov 8 00:07:30.814343 containerd[1717]: time="2025-11-08T00:07:30.813988674Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:07:30.814343 containerd[1717]: time="2025-11-08T00:07:30.814087154Z" level=info msg="RemovePodSandbox \"45956a840f0fee118e0bb5b54b5aa5ae87a22e0f305f21276c98f053612f15a6\" returns successfully" Nov 8 00:07:30.815169 containerd[1717]: time="2025-11-08T00:07:30.814888433Z" level=info msg="StopPodSandbox for \"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe\"" Nov 8 00:07:30.894242 containerd[1717]: 2025-11-08 00:07:30.855 [WARNING][5656] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"bcb0f449-555a-4f1a-a70d-fed8686a31f6", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a", Pod:"coredns-66bc5c9577-dpwnt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic5c6b9bba71", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:30.894242 containerd[1717]: 2025-11-08 00:07:30.856 [INFO][5656] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" Nov 8 00:07:30.894242 containerd[1717]: 2025-11-08 00:07:30.856 [INFO][5656] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" iface="eth0" netns="" Nov 8 00:07:30.894242 containerd[1717]: 2025-11-08 00:07:30.856 [INFO][5656] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" Nov 8 00:07:30.894242 containerd[1717]: 2025-11-08 00:07:30.856 [INFO][5656] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" Nov 8 00:07:30.894242 containerd[1717]: 2025-11-08 00:07:30.877 [INFO][5663] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" HandleID="k8s-pod-network.61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" Workload="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-eth0" Nov 8 00:07:30.894242 containerd[1717]: 2025-11-08 00:07:30.877 [INFO][5663] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:30.894242 containerd[1717]: 2025-11-08 00:07:30.877 [INFO][5663] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:30.894242 containerd[1717]: 2025-11-08 00:07:30.889 [WARNING][5663] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" HandleID="k8s-pod-network.61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" Workload="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-eth0" Nov 8 00:07:30.894242 containerd[1717]: 2025-11-08 00:07:30.889 [INFO][5663] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" HandleID="k8s-pod-network.61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" Workload="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-eth0" Nov 8 00:07:30.894242 containerd[1717]: 2025-11-08 00:07:30.891 [INFO][5663] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:30.894242 containerd[1717]: 2025-11-08 00:07:30.892 [INFO][5656] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" Nov 8 00:07:30.895343 containerd[1717]: time="2025-11-08T00:07:30.894795490Z" level=info msg="TearDown network for sandbox \"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe\" successfully" Nov 8 00:07:30.895343 containerd[1717]: time="2025-11-08T00:07:30.894824330Z" level=info msg="StopPodSandbox for \"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe\" returns successfully" Nov 8 00:07:30.896239 containerd[1717]: time="2025-11-08T00:07:30.896208968Z" level=info msg="RemovePodSandbox for \"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe\"" Nov 8 00:07:30.896423 containerd[1717]: time="2025-11-08T00:07:30.896248328Z" level=info msg="Forcibly stopping sandbox \"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe\"" Nov 8 00:07:30.896693 containerd[1717]: time="2025-11-08T00:07:30.896571607Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:30.903028 containerd[1717]: 
time="2025-11-08T00:07:30.902819396Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:07:30.903028 containerd[1717]: time="2025-11-08T00:07:30.902923076Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:07:30.903399 kubelet[3181]: E1108 00:07:30.903096 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:07:30.903399 kubelet[3181]: E1108 00:07:30.903143 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:07:30.903399 kubelet[3181]: E1108 00:07:30.903214 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-764b4db9fd-g5pz9_calico-system(967f9b6c-67db-4dea-be69-0b8cc8010676): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:30.906667 containerd[1717]: time="2025-11-08T00:07:30.906634909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" 
Nov 8 00:07:30.975723 containerd[1717]: 2025-11-08 00:07:30.942 [WARNING][5677] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"bcb0f449-555a-4f1a-a70d-fed8686a31f6", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"183be3273e5701d72c584a53848c110b3e4b4e63124eedacc7e6169744039c3a", Pod:"coredns-66bc5c9577-dpwnt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic5c6b9bba71", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:30.975723 containerd[1717]: 2025-11-08 00:07:30.942 [INFO][5677] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" Nov 8 00:07:30.975723 containerd[1717]: 2025-11-08 00:07:30.942 [INFO][5677] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" iface="eth0" netns="" Nov 8 00:07:30.975723 containerd[1717]: 2025-11-08 00:07:30.942 [INFO][5677] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" Nov 8 00:07:30.975723 containerd[1717]: 2025-11-08 00:07:30.942 [INFO][5677] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" Nov 8 00:07:30.975723 containerd[1717]: 2025-11-08 00:07:30.963 [INFO][5685] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" HandleID="k8s-pod-network.61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" Workload="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-eth0" Nov 8 00:07:30.975723 containerd[1717]: 2025-11-08 00:07:30.963 [INFO][5685] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:30.975723 containerd[1717]: 2025-11-08 00:07:30.963 [INFO][5685] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:07:30.975723 containerd[1717]: 2025-11-08 00:07:30.971 [WARNING][5685] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" HandleID="k8s-pod-network.61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" Workload="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-eth0" Nov 8 00:07:30.975723 containerd[1717]: 2025-11-08 00:07:30.971 [INFO][5685] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" HandleID="k8s-pod-network.61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" Workload="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--dpwnt-eth0" Nov 8 00:07:30.975723 containerd[1717]: 2025-11-08 00:07:30.972 [INFO][5685] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:30.975723 containerd[1717]: 2025-11-08 00:07:30.974 [INFO][5677] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe" Nov 8 00:07:30.976277 containerd[1717]: time="2025-11-08T00:07:30.975778306Z" level=info msg="TearDown network for sandbox \"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe\" successfully" Nov 8 00:07:30.997188 containerd[1717]: time="2025-11-08T00:07:30.997140388Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:07:30.997326 containerd[1717]: time="2025-11-08T00:07:30.997233787Z" level=info msg="RemovePodSandbox \"61732608fa9a85d1d4658a38b2985c943bcd378327cf939364ef022953c37fbe\" returns successfully" Nov 8 00:07:30.997827 containerd[1717]: time="2025-11-08T00:07:30.997800786Z" level=info msg="StopPodSandbox for \"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985\"" Nov 8 00:07:31.071973 containerd[1717]: 2025-11-08 00:07:31.036 [WARNING][5699] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-whisker--7bfb4c996--bh8m8-eth0" Nov 8 00:07:31.071973 containerd[1717]: 2025-11-08 00:07:31.036 [INFO][5699] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" Nov 8 00:07:31.071973 containerd[1717]: 2025-11-08 00:07:31.036 [INFO][5699] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" iface="eth0" netns="" Nov 8 00:07:31.071973 containerd[1717]: 2025-11-08 00:07:31.036 [INFO][5699] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" Nov 8 00:07:31.071973 containerd[1717]: 2025-11-08 00:07:31.036 [INFO][5699] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" Nov 8 00:07:31.071973 containerd[1717]: 2025-11-08 00:07:31.056 [INFO][5707] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" HandleID="k8s-pod-network.ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" Workload="ci--4081.3.6--n--5561f33395-k8s-whisker--7bfb4c996--bh8m8-eth0" Nov 8 00:07:31.071973 containerd[1717]: 2025-11-08 00:07:31.056 [INFO][5707] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:31.071973 containerd[1717]: 2025-11-08 00:07:31.056 [INFO][5707] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:31.071973 containerd[1717]: 2025-11-08 00:07:31.065 [WARNING][5707] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" HandleID="k8s-pod-network.ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" Workload="ci--4081.3.6--n--5561f33395-k8s-whisker--7bfb4c996--bh8m8-eth0" Nov 8 00:07:31.071973 containerd[1717]: 2025-11-08 00:07:31.065 [INFO][5707] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" HandleID="k8s-pod-network.ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" Workload="ci--4081.3.6--n--5561f33395-k8s-whisker--7bfb4c996--bh8m8-eth0" Nov 8 00:07:31.071973 containerd[1717]: 2025-11-08 00:07:31.068 [INFO][5707] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:31.071973 containerd[1717]: 2025-11-08 00:07:31.070 [INFO][5699] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" Nov 8 00:07:31.071973 containerd[1717]: time="2025-11-08T00:07:31.071802214Z" level=info msg="TearDown network for sandbox \"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985\" successfully" Nov 8 00:07:31.071973 containerd[1717]: time="2025-11-08T00:07:31.071828054Z" level=info msg="StopPodSandbox for \"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985\" returns successfully" Nov 8 00:07:31.072576 containerd[1717]: time="2025-11-08T00:07:31.072552493Z" level=info msg="RemovePodSandbox for \"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985\"" Nov 8 00:07:31.072616 containerd[1717]: time="2025-11-08T00:07:31.072585853Z" level=info msg="Forcibly stopping sandbox \"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985\"" Nov 8 00:07:31.173783 containerd[1717]: 2025-11-08 00:07:31.130 [WARNING][5722] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" WorkloadEndpoint="ci--4081.3.6--n--5561f33395-k8s-whisker--7bfb4c996--bh8m8-eth0" Nov 8 00:07:31.173783 containerd[1717]: 2025-11-08 00:07:31.130 [INFO][5722] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" Nov 8 00:07:31.173783 containerd[1717]: 2025-11-08 00:07:31.131 [INFO][5722] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" iface="eth0" netns="" Nov 8 00:07:31.173783 containerd[1717]: 2025-11-08 00:07:31.131 [INFO][5722] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" Nov 8 00:07:31.173783 containerd[1717]: 2025-11-08 00:07:31.131 [INFO][5722] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" Nov 8 00:07:31.173783 containerd[1717]: 2025-11-08 00:07:31.155 [INFO][5729] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" HandleID="k8s-pod-network.ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" Workload="ci--4081.3.6--n--5561f33395-k8s-whisker--7bfb4c996--bh8m8-eth0" Nov 8 00:07:31.173783 containerd[1717]: 2025-11-08 00:07:31.155 [INFO][5729] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:31.173783 containerd[1717]: 2025-11-08 00:07:31.155 [INFO][5729] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:31.173783 containerd[1717]: 2025-11-08 00:07:31.166 [WARNING][5729] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" HandleID="k8s-pod-network.ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" Workload="ci--4081.3.6--n--5561f33395-k8s-whisker--7bfb4c996--bh8m8-eth0" Nov 8 00:07:31.173783 containerd[1717]: 2025-11-08 00:07:31.166 [INFO][5729] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" HandleID="k8s-pod-network.ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" Workload="ci--4081.3.6--n--5561f33395-k8s-whisker--7bfb4c996--bh8m8-eth0" Nov 8 00:07:31.173783 containerd[1717]: 2025-11-08 00:07:31.169 [INFO][5729] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:31.173783 containerd[1717]: 2025-11-08 00:07:31.171 [INFO][5722] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985" Nov 8 00:07:31.174870 containerd[1717]: time="2025-11-08T00:07:31.173778152Z" level=info msg="TearDown network for sandbox \"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985\" successfully" Nov 8 00:07:31.184147 containerd[1717]: time="2025-11-08T00:07:31.184010894Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:07:31.184147 containerd[1717]: time="2025-11-08T00:07:31.184092814Z" level=info msg="RemovePodSandbox \"ad28f316f5a8ec429721a46976d6eed1650a741f3db64e7faaaff673405b8985\" returns successfully" Nov 8 00:07:31.185101 containerd[1717]: time="2025-11-08T00:07:31.185068252Z" level=info msg="StopPodSandbox for \"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26\"" Nov 8 00:07:31.195843 containerd[1717]: time="2025-11-08T00:07:31.195793593Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:31.201553 containerd[1717]: time="2025-11-08T00:07:31.201025104Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:07:31.201553 containerd[1717]: time="2025-11-08T00:07:31.201156664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:07:31.201732 kubelet[3181]: E1108 00:07:31.201276 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:07:31.201732 kubelet[3181]: E1108 00:07:31.201322 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:07:31.201732 kubelet[3181]: E1108 00:07:31.201390 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-764b4db9fd-g5pz9_calico-system(967f9b6c-67db-4dea-be69-0b8cc8010676): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:31.201828 kubelet[3181]: E1108 00:07:31.201432 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764b4db9fd-g5pz9" podUID="967f9b6c-67db-4dea-be69-0b8cc8010676" Nov 8 00:07:31.286535 containerd[1717]: 2025-11-08 00:07:31.241 [WARNING][5743] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"fb0c2d00-8a9d-4218-9dbc-6f07fda31565", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa", Pod:"goldmane-7c778bb748-9wxxn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6cd9d430337", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:31.286535 containerd[1717]: 2025-11-08 00:07:31.241 [INFO][5743] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" Nov 8 00:07:31.286535 containerd[1717]: 2025-11-08 00:07:31.241 [INFO][5743] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" iface="eth0" netns="" Nov 8 00:07:31.286535 containerd[1717]: 2025-11-08 00:07:31.241 [INFO][5743] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" Nov 8 00:07:31.286535 containerd[1717]: 2025-11-08 00:07:31.241 [INFO][5743] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" Nov 8 00:07:31.286535 containerd[1717]: 2025-11-08 00:07:31.270 [INFO][5752] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" HandleID="k8s-pod-network.b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" Workload="ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-eth0" Nov 8 00:07:31.286535 containerd[1717]: 2025-11-08 00:07:31.271 [INFO][5752] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:31.286535 containerd[1717]: 2025-11-08 00:07:31.271 [INFO][5752] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:31.286535 containerd[1717]: 2025-11-08 00:07:31.282 [WARNING][5752] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" HandleID="k8s-pod-network.b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" Workload="ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-eth0" Nov 8 00:07:31.286535 containerd[1717]: 2025-11-08 00:07:31.282 [INFO][5752] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" HandleID="k8s-pod-network.b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" Workload="ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-eth0" Nov 8 00:07:31.286535 containerd[1717]: 2025-11-08 00:07:31.283 [INFO][5752] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:31.286535 containerd[1717]: 2025-11-08 00:07:31.285 [INFO][5743] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" Nov 8 00:07:31.287683 containerd[1717]: time="2025-11-08T00:07:31.286578511Z" level=info msg="TearDown network for sandbox \"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26\" successfully" Nov 8 00:07:31.287683 containerd[1717]: time="2025-11-08T00:07:31.286606071Z" level=info msg="StopPodSandbox for \"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26\" returns successfully" Nov 8 00:07:31.287683 containerd[1717]: time="2025-11-08T00:07:31.287383350Z" level=info msg="RemovePodSandbox for \"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26\"" Nov 8 00:07:31.287683 containerd[1717]: time="2025-11-08T00:07:31.287416510Z" level=info msg="Forcibly stopping sandbox \"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26\"" Nov 8 00:07:31.374679 containerd[1717]: 2025-11-08 00:07:31.331 [WARNING][5767] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"fb0c2d00-8a9d-4218-9dbc-6f07fda31565", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"482cd334418fbbee45d63c02de6bf6caca772ffb01243c8ff7e5c04e489abcaa", Pod:"goldmane-7c778bb748-9wxxn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6cd9d430337", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:31.374679 containerd[1717]: 2025-11-08 00:07:31.333 [INFO][5767] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" Nov 8 00:07:31.374679 containerd[1717]: 2025-11-08 00:07:31.333 [INFO][5767] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" iface="eth0" netns="" Nov 8 00:07:31.374679 containerd[1717]: 2025-11-08 00:07:31.333 [INFO][5767] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" Nov 8 00:07:31.374679 containerd[1717]: 2025-11-08 00:07:31.333 [INFO][5767] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" Nov 8 00:07:31.374679 containerd[1717]: 2025-11-08 00:07:31.359 [INFO][5774] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" HandleID="k8s-pod-network.b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" Workload="ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-eth0" Nov 8 00:07:31.374679 containerd[1717]: 2025-11-08 00:07:31.359 [INFO][5774] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:31.374679 containerd[1717]: 2025-11-08 00:07:31.359 [INFO][5774] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:31.374679 containerd[1717]: 2025-11-08 00:07:31.368 [WARNING][5774] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" HandleID="k8s-pod-network.b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" Workload="ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-eth0" Nov 8 00:07:31.374679 containerd[1717]: 2025-11-08 00:07:31.370 [INFO][5774] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" HandleID="k8s-pod-network.b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" Workload="ci--4081.3.6--n--5561f33395-k8s-goldmane--7c778bb748--9wxxn-eth0" Nov 8 00:07:31.374679 containerd[1717]: 2025-11-08 00:07:31.371 [INFO][5774] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:31.374679 containerd[1717]: 2025-11-08 00:07:31.373 [INFO][5767] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26" Nov 8 00:07:31.375154 containerd[1717]: time="2025-11-08T00:07:31.374719114Z" level=info msg="TearDown network for sandbox \"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26\" successfully" Nov 8 00:07:31.384778 containerd[1717]: time="2025-11-08T00:07:31.384004217Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:07:31.384778 containerd[1717]: time="2025-11-08T00:07:31.384103777Z" level=info msg="RemovePodSandbox \"b81491f943ff1f67ad26356d36eb1bf40f07ca1d731f5cb6cf42d3766acf4b26\" returns successfully" Nov 8 00:07:31.387220 containerd[1717]: time="2025-11-08T00:07:31.386345173Z" level=info msg="StopPodSandbox for \"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a\"" Nov 8 00:07:31.503296 containerd[1717]: 2025-11-08 00:07:31.444 [WARNING][5788] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b089b199-ec3e-4716-9f14-e24ffa6fbbc3", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13", Pod:"coredns-66bc5c9577-x5nnq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa3b4eb6309", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:31.503296 containerd[1717]: 2025-11-08 00:07:31.445 [INFO][5788] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" Nov 8 00:07:31.503296 containerd[1717]: 2025-11-08 00:07:31.445 [INFO][5788] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" iface="eth0" netns="" Nov 8 00:07:31.503296 containerd[1717]: 2025-11-08 00:07:31.445 [INFO][5788] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" Nov 8 00:07:31.503296 containerd[1717]: 2025-11-08 00:07:31.445 [INFO][5788] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" Nov 8 00:07:31.503296 containerd[1717]: 2025-11-08 00:07:31.483 [INFO][5795] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" HandleID="k8s-pod-network.49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" Workload="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-eth0" Nov 8 00:07:31.503296 containerd[1717]: 2025-11-08 00:07:31.485 [INFO][5795] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:31.503296 containerd[1717]: 2025-11-08 00:07:31.486 [INFO][5795] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:31.503296 containerd[1717]: 2025-11-08 00:07:31.497 [WARNING][5795] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" HandleID="k8s-pod-network.49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" Workload="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-eth0" Nov 8 00:07:31.503296 containerd[1717]: 2025-11-08 00:07:31.497 [INFO][5795] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" HandleID="k8s-pod-network.49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" Workload="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-eth0" Nov 8 00:07:31.503296 containerd[1717]: 2025-11-08 00:07:31.498 [INFO][5795] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:31.503296 containerd[1717]: 2025-11-08 00:07:31.501 [INFO][5788] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" Nov 8 00:07:31.504927 containerd[1717]: time="2025-11-08T00:07:31.504071283Z" level=info msg="TearDown network for sandbox \"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a\" successfully" Nov 8 00:07:31.504927 containerd[1717]: time="2025-11-08T00:07:31.504110003Z" level=info msg="StopPodSandbox for \"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a\" returns successfully" Nov 8 00:07:31.506672 containerd[1717]: time="2025-11-08T00:07:31.506627879Z" level=info msg="RemovePodSandbox for \"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a\"" Nov 8 00:07:31.506672 containerd[1717]: time="2025-11-08T00:07:31.506669279Z" level=info msg="Forcibly stopping sandbox \"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a\"" Nov 8 00:07:31.594895 containerd[1717]: 2025-11-08 00:07:31.550 [WARNING][5809] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b089b199-ec3e-4716-9f14-e24ffa6fbbc3", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"a04324449bf9db42d5929d9fb7a0b0f91318289da819d79a4b5aa1f618089c13", Pod:"coredns-66bc5c9577-x5nnq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa3b4eb6309", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:31.594895 containerd[1717]: 2025-11-08 00:07:31.551 [INFO][5809] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" Nov 8 00:07:31.594895 containerd[1717]: 2025-11-08 00:07:31.551 [INFO][5809] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" iface="eth0" netns="" Nov 8 00:07:31.594895 containerd[1717]: 2025-11-08 00:07:31.551 [INFO][5809] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" Nov 8 00:07:31.594895 containerd[1717]: 2025-11-08 00:07:31.551 [INFO][5809] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" Nov 8 00:07:31.594895 containerd[1717]: 2025-11-08 00:07:31.576 [INFO][5816] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" HandleID="k8s-pod-network.49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" Workload="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-eth0" Nov 8 00:07:31.594895 containerd[1717]: 2025-11-08 00:07:31.576 [INFO][5816] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:31.594895 containerd[1717]: 2025-11-08 00:07:31.576 [INFO][5816] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:31.594895 containerd[1717]: 2025-11-08 00:07:31.589 [WARNING][5816] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" HandleID="k8s-pod-network.49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" Workload="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-eth0" Nov 8 00:07:31.594895 containerd[1717]: 2025-11-08 00:07:31.589 [INFO][5816] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" HandleID="k8s-pod-network.49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" Workload="ci--4081.3.6--n--5561f33395-k8s-coredns--66bc5c9577--x5nnq-eth0" Nov 8 00:07:31.594895 containerd[1717]: 2025-11-08 00:07:31.591 [INFO][5816] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:31.594895 containerd[1717]: 2025-11-08 00:07:31.593 [INFO][5809] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a" Nov 8 00:07:31.595589 containerd[1717]: time="2025-11-08T00:07:31.594950481Z" level=info msg="TearDown network for sandbox \"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a\" successfully" Nov 8 00:07:31.603593 containerd[1717]: time="2025-11-08T00:07:31.603524626Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:07:31.603593 containerd[1717]: time="2025-11-08T00:07:31.603598826Z" level=info msg="RemovePodSandbox \"49b79afcddc0cd238a06d5ff379dea20d69c340c08c2abdaefa56c529447fa6a\" returns successfully" Nov 8 00:07:31.604454 containerd[1717]: time="2025-11-08T00:07:31.604164385Z" level=info msg="StopPodSandbox for \"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d\"" Nov 8 00:07:31.702408 containerd[1717]: 2025-11-08 00:07:31.656 [WARNING][5830] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-eth0", GenerateName:"calico-apiserver-d7c9d7554-", Namespace:"calico-apiserver", SelfLink:"", UID:"1eab03fd-9695-41da-8445-49749eaa2864", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d7c9d7554", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b", Pod:"calico-apiserver-d7c9d7554-7cc89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali917d6e0f39d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:31.702408 containerd[1717]: 2025-11-08 00:07:31.657 [INFO][5830] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" Nov 8 00:07:31.702408 containerd[1717]: 2025-11-08 00:07:31.657 [INFO][5830] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" iface="eth0" netns="" Nov 8 00:07:31.702408 containerd[1717]: 2025-11-08 00:07:31.657 [INFO][5830] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" Nov 8 00:07:31.702408 containerd[1717]: 2025-11-08 00:07:31.657 [INFO][5830] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" Nov 8 00:07:31.702408 containerd[1717]: 2025-11-08 00:07:31.685 [INFO][5837] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" HandleID="k8s-pod-network.8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-eth0" Nov 8 00:07:31.702408 containerd[1717]: 2025-11-08 00:07:31.685 [INFO][5837] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:31.702408 containerd[1717]: 2025-11-08 00:07:31.685 [INFO][5837] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:31.702408 containerd[1717]: 2025-11-08 00:07:31.696 [WARNING][5837] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" HandleID="k8s-pod-network.8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-eth0" Nov 8 00:07:31.702408 containerd[1717]: 2025-11-08 00:07:31.696 [INFO][5837] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" HandleID="k8s-pod-network.8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-eth0" Nov 8 00:07:31.702408 containerd[1717]: 2025-11-08 00:07:31.697 [INFO][5837] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:31.702408 containerd[1717]: 2025-11-08 00:07:31.699 [INFO][5830] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" Nov 8 00:07:31.703069 containerd[1717]: time="2025-11-08T00:07:31.702874169Z" level=info msg="TearDown network for sandbox \"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d\" successfully" Nov 8 00:07:31.703069 containerd[1717]: time="2025-11-08T00:07:31.702918048Z" level=info msg="StopPodSandbox for \"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d\" returns successfully" Nov 8 00:07:31.703904 containerd[1717]: time="2025-11-08T00:07:31.703793447Z" level=info msg="RemovePodSandbox for \"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d\"" Nov 8 00:07:31.703904 containerd[1717]: time="2025-11-08T00:07:31.703844207Z" level=info msg="Forcibly stopping sandbox \"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d\"" Nov 8 00:07:31.804029 containerd[1717]: 2025-11-08 00:07:31.743 [WARNING][5851] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-eth0", GenerateName:"calico-apiserver-d7c9d7554-", Namespace:"calico-apiserver", SelfLink:"", UID:"1eab03fd-9695-41da-8445-49749eaa2864", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d7c9d7554", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"b7a6f7c39693b5cf7bf09993da28fc24a6a412d300e42c1e7574ea8407f8b47b", Pod:"calico-apiserver-d7c9d7554-7cc89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali917d6e0f39d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:31.804029 containerd[1717]: 2025-11-08 00:07:31.743 [INFO][5851] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" Nov 8 00:07:31.804029 containerd[1717]: 2025-11-08 00:07:31.743 [INFO][5851] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" iface="eth0" netns="" Nov 8 00:07:31.804029 containerd[1717]: 2025-11-08 00:07:31.743 [INFO][5851] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" Nov 8 00:07:31.804029 containerd[1717]: 2025-11-08 00:07:31.743 [INFO][5851] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" Nov 8 00:07:31.804029 containerd[1717]: 2025-11-08 00:07:31.777 [INFO][5858] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" HandleID="k8s-pod-network.8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-eth0" Nov 8 00:07:31.804029 containerd[1717]: 2025-11-08 00:07:31.777 [INFO][5858] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:31.804029 containerd[1717]: 2025-11-08 00:07:31.777 [INFO][5858] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:31.804029 containerd[1717]: 2025-11-08 00:07:31.789 [WARNING][5858] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" HandleID="k8s-pod-network.8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-eth0" Nov 8 00:07:31.804029 containerd[1717]: 2025-11-08 00:07:31.790 [INFO][5858] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" HandleID="k8s-pod-network.8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7cc89-eth0" Nov 8 00:07:31.804029 containerd[1717]: 2025-11-08 00:07:31.796 [INFO][5858] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:31.804029 containerd[1717]: 2025-11-08 00:07:31.799 [INFO][5851] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d" Nov 8 00:07:31.804029 containerd[1717]: time="2025-11-08T00:07:31.803587549Z" level=info msg="TearDown network for sandbox \"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d\" successfully" Nov 8 00:07:31.814319 containerd[1717]: time="2025-11-08T00:07:31.814272650Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:07:31.815339 containerd[1717]: time="2025-11-08T00:07:31.815119488Z" level=info msg="RemovePodSandbox \"8afbdf52d9b330bd223ce70fa5d7063b38ad2e06ffd953aab784dec7bb15658d\" returns successfully" Nov 8 00:07:31.816337 containerd[1717]: time="2025-11-08T00:07:31.816186446Z" level=info msg="StopPodSandbox for \"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb\"" Nov 8 00:07:31.911079 containerd[1717]: 2025-11-08 00:07:31.859 [WARNING][5872] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-eth0", GenerateName:"calico-apiserver-847b7fbf74-", Namespace:"calico-apiserver", SelfLink:"", UID:"8dcb36b7-7066-4355-aa27-d1ae27c36df5", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847b7fbf74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f", Pod:"calico-apiserver-847b7fbf74-mcdn7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali304e0432b93", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:31.911079 containerd[1717]: 2025-11-08 00:07:31.859 [INFO][5872] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" Nov 8 00:07:31.911079 containerd[1717]: 2025-11-08 00:07:31.859 [INFO][5872] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" iface="eth0" netns="" Nov 8 00:07:31.911079 containerd[1717]: 2025-11-08 00:07:31.859 [INFO][5872] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" Nov 8 00:07:31.911079 containerd[1717]: 2025-11-08 00:07:31.859 [INFO][5872] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" Nov 8 00:07:31.911079 containerd[1717]: 2025-11-08 00:07:31.884 [INFO][5879] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" HandleID="k8s-pod-network.fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-eth0" Nov 8 00:07:31.911079 containerd[1717]: 2025-11-08 00:07:31.885 [INFO][5879] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:31.911079 containerd[1717]: 2025-11-08 00:07:31.885 [INFO][5879] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:31.911079 containerd[1717]: 2025-11-08 00:07:31.903 [WARNING][5879] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" HandleID="k8s-pod-network.fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-eth0" Nov 8 00:07:31.911079 containerd[1717]: 2025-11-08 00:07:31.903 [INFO][5879] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" HandleID="k8s-pod-network.fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-eth0" Nov 8 00:07:31.911079 containerd[1717]: 2025-11-08 00:07:31.906 [INFO][5879] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:31.911079 containerd[1717]: 2025-11-08 00:07:31.909 [INFO][5872] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" Nov 8 00:07:31.911783 containerd[1717]: time="2025-11-08T00:07:31.911124717Z" level=info msg="TearDown network for sandbox \"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb\" successfully" Nov 8 00:07:31.911783 containerd[1717]: time="2025-11-08T00:07:31.911151477Z" level=info msg="StopPodSandbox for \"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb\" returns successfully" Nov 8 00:07:31.911783 containerd[1717]: time="2025-11-08T00:07:31.911640116Z" level=info msg="RemovePodSandbox for \"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb\"" Nov 8 00:07:31.911783 containerd[1717]: time="2025-11-08T00:07:31.911668316Z" level=info msg="Forcibly stopping sandbox \"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb\"" Nov 8 00:07:32.012464 containerd[1717]: 2025-11-08 00:07:31.958 [WARNING][5893] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-eth0", GenerateName:"calico-apiserver-847b7fbf74-", Namespace:"calico-apiserver", SelfLink:"", UID:"8dcb36b7-7066-4355-aa27-d1ae27c36df5", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847b7fbf74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"03f2b259b05cba42eceaa1012bafd48f17d5658ccb82e2591a87d8c5cd47fb2f", Pod:"calico-apiserver-847b7fbf74-mcdn7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali304e0432b93", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:32.012464 containerd[1717]: 2025-11-08 00:07:31.958 [INFO][5893] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" Nov 8 00:07:32.012464 containerd[1717]: 2025-11-08 00:07:31.959 [INFO][5893] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" iface="eth0" netns="" Nov 8 00:07:32.012464 containerd[1717]: 2025-11-08 00:07:31.959 [INFO][5893] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" Nov 8 00:07:32.012464 containerd[1717]: 2025-11-08 00:07:31.959 [INFO][5893] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" Nov 8 00:07:32.012464 containerd[1717]: 2025-11-08 00:07:31.983 [INFO][5901] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" HandleID="k8s-pod-network.fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-eth0" Nov 8 00:07:32.012464 containerd[1717]: 2025-11-08 00:07:31.984 [INFO][5901] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:32.012464 containerd[1717]: 2025-11-08 00:07:31.984 [INFO][5901] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:32.012464 containerd[1717]: 2025-11-08 00:07:32.005 [WARNING][5901] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" HandleID="k8s-pod-network.fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-eth0" Nov 8 00:07:32.012464 containerd[1717]: 2025-11-08 00:07:32.005 [INFO][5901] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" HandleID="k8s-pod-network.fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--847b7fbf74--mcdn7-eth0" Nov 8 00:07:32.012464 containerd[1717]: 2025-11-08 00:07:32.008 [INFO][5901] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:32.012464 containerd[1717]: 2025-11-08 00:07:32.010 [INFO][5893] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb" Nov 8 00:07:32.013362 containerd[1717]: time="2025-11-08T00:07:32.012507656Z" level=info msg="TearDown network for sandbox \"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb\" successfully" Nov 8 00:07:32.023037 containerd[1717]: time="2025-11-08T00:07:32.022930638Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:07:32.023037 containerd[1717]: time="2025-11-08T00:07:32.023011757Z" level=info msg="RemovePodSandbox \"fb0da778175f438d0ad189f23239af593ee38e92cb21e1c7300fca32b04e7ffb\" returns successfully" Nov 8 00:07:32.023601 containerd[1717]: time="2025-11-08T00:07:32.023575316Z" level=info msg="StopPodSandbox for \"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38\"" Nov 8 00:07:32.175095 containerd[1717]: 2025-11-08 00:07:32.103 [WARNING][5915] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"70822f24-312d-4073-b204-5c6b6a26eb84", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326", Pod:"csi-node-driver-8jr45", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1b9ebf29710", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:32.175095 containerd[1717]: 2025-11-08 00:07:32.103 [INFO][5915] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" Nov 8 00:07:32.175095 containerd[1717]: 2025-11-08 00:07:32.103 [INFO][5915] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" iface="eth0" netns="" Nov 8 00:07:32.175095 containerd[1717]: 2025-11-08 00:07:32.103 [INFO][5915] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" Nov 8 00:07:32.175095 containerd[1717]: 2025-11-08 00:07:32.103 [INFO][5915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" Nov 8 00:07:32.175095 containerd[1717]: 2025-11-08 00:07:32.141 [INFO][5922] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" HandleID="k8s-pod-network.9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" Workload="ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-eth0" Nov 8 00:07:32.175095 containerd[1717]: 2025-11-08 00:07:32.141 [INFO][5922] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:32.175095 containerd[1717]: 2025-11-08 00:07:32.141 [INFO][5922] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:32.175095 containerd[1717]: 2025-11-08 00:07:32.167 [WARNING][5922] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" HandleID="k8s-pod-network.9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" Workload="ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-eth0" Nov 8 00:07:32.175095 containerd[1717]: 2025-11-08 00:07:32.167 [INFO][5922] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" HandleID="k8s-pod-network.9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" Workload="ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-eth0" Nov 8 00:07:32.175095 containerd[1717]: 2025-11-08 00:07:32.169 [INFO][5922] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:32.175095 containerd[1717]: 2025-11-08 00:07:32.172 [INFO][5915] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" Nov 8 00:07:32.175654 containerd[1717]: time="2025-11-08T00:07:32.175138966Z" level=info msg="TearDown network for sandbox \"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38\" successfully" Nov 8 00:07:32.175654 containerd[1717]: time="2025-11-08T00:07:32.175163486Z" level=info msg="StopPodSandbox for \"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38\" returns successfully" Nov 8 00:07:32.176417 containerd[1717]: time="2025-11-08T00:07:32.176015284Z" level=info msg="RemovePodSandbox for \"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38\"" Nov 8 00:07:32.176417 containerd[1717]: time="2025-11-08T00:07:32.176063204Z" level=info msg="Forcibly stopping sandbox \"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38\"" Nov 8 00:07:32.286916 containerd[1717]: 2025-11-08 00:07:32.238 [WARNING][5936] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"70822f24-312d-4073-b204-5c6b6a26eb84", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"ca8fc6e37f57b2bc3287346f20de377c4835662d1dab265c43563a94e662a326", Pod:"csi-node-driver-8jr45", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1b9ebf29710", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:32.286916 containerd[1717]: 2025-11-08 00:07:32.242 [INFO][5936] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" Nov 8 00:07:32.286916 containerd[1717]: 2025-11-08 00:07:32.242 [INFO][5936] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" iface="eth0" netns="" Nov 8 00:07:32.286916 containerd[1717]: 2025-11-08 00:07:32.242 [INFO][5936] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" Nov 8 00:07:32.286916 containerd[1717]: 2025-11-08 00:07:32.242 [INFO][5936] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" Nov 8 00:07:32.286916 containerd[1717]: 2025-11-08 00:07:32.266 [INFO][5943] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" HandleID="k8s-pod-network.9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" Workload="ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-eth0" Nov 8 00:07:32.286916 containerd[1717]: 2025-11-08 00:07:32.266 [INFO][5943] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:32.286916 containerd[1717]: 2025-11-08 00:07:32.266 [INFO][5943] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:32.286916 containerd[1717]: 2025-11-08 00:07:32.279 [WARNING][5943] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" HandleID="k8s-pod-network.9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" Workload="ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-eth0" Nov 8 00:07:32.286916 containerd[1717]: 2025-11-08 00:07:32.279 [INFO][5943] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" HandleID="k8s-pod-network.9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" Workload="ci--4081.3.6--n--5561f33395-k8s-csi--node--driver--8jr45-eth0" Nov 8 00:07:32.286916 containerd[1717]: 2025-11-08 00:07:32.281 [INFO][5943] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:32.286916 containerd[1717]: 2025-11-08 00:07:32.284 [INFO][5936] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38" Nov 8 00:07:32.287404 containerd[1717]: time="2025-11-08T00:07:32.286969046Z" level=info msg="TearDown network for sandbox \"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38\" successfully" Nov 8 00:07:32.303723 containerd[1717]: time="2025-11-08T00:07:32.303672257Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:07:32.303874 containerd[1717]: time="2025-11-08T00:07:32.303749857Z" level=info msg="RemovePodSandbox \"9a9ebc22ba48f19b520d39c5f9f5b65465a125f8668f847787dd91216cad3f38\" returns successfully" Nov 8 00:07:32.304662 containerd[1717]: time="2025-11-08T00:07:32.304410855Z" level=info msg="StopPodSandbox for \"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9\"" Nov 8 00:07:32.394185 containerd[1717]: 2025-11-08 00:07:32.348 [WARNING][5958] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-eth0", GenerateName:"calico-apiserver-d7c9d7554-", Namespace:"calico-apiserver", SelfLink:"", UID:"aad8189b-54ce-422e-a68f-46b67abadfe8", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d7c9d7554", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57", Pod:"calico-apiserver-d7c9d7554-7phdh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f95b4d25ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:32.394185 containerd[1717]: 2025-11-08 00:07:32.349 [INFO][5958] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" Nov 8 00:07:32.394185 containerd[1717]: 2025-11-08 00:07:32.349 [INFO][5958] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" iface="eth0" netns="" Nov 8 00:07:32.394185 containerd[1717]: 2025-11-08 00:07:32.349 [INFO][5958] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" Nov 8 00:07:32.394185 containerd[1717]: 2025-11-08 00:07:32.349 [INFO][5958] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" Nov 8 00:07:32.394185 containerd[1717]: 2025-11-08 00:07:32.376 [INFO][5966] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" HandleID="k8s-pod-network.03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-eth0" Nov 8 00:07:32.394185 containerd[1717]: 2025-11-08 00:07:32.376 [INFO][5966] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:32.394185 containerd[1717]: 2025-11-08 00:07:32.376 [INFO][5966] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:32.394185 containerd[1717]: 2025-11-08 00:07:32.386 [WARNING][5966] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" HandleID="k8s-pod-network.03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-eth0" Nov 8 00:07:32.394185 containerd[1717]: 2025-11-08 00:07:32.386 [INFO][5966] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" HandleID="k8s-pod-network.03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-eth0" Nov 8 00:07:32.394185 containerd[1717]: 2025-11-08 00:07:32.389 [INFO][5966] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:32.394185 containerd[1717]: 2025-11-08 00:07:32.391 [INFO][5958] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" Nov 8 00:07:32.395371 containerd[1717]: time="2025-11-08T00:07:32.394229015Z" level=info msg="TearDown network for sandbox \"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9\" successfully" Nov 8 00:07:32.395371 containerd[1717]: time="2025-11-08T00:07:32.394254975Z" level=info msg="StopPodSandbox for \"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9\" returns successfully" Nov 8 00:07:32.395371 containerd[1717]: time="2025-11-08T00:07:32.394887294Z" level=info msg="RemovePodSandbox for \"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9\"" Nov 8 00:07:32.395371 containerd[1717]: time="2025-11-08T00:07:32.395095734Z" level=info msg="Forcibly stopping sandbox \"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9\"" Nov 8 00:07:32.479251 containerd[1717]: 2025-11-08 00:07:32.436 [WARNING][5980] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-eth0", GenerateName:"calico-apiserver-d7c9d7554-", Namespace:"calico-apiserver", SelfLink:"", UID:"aad8189b-54ce-422e-a68f-46b67abadfe8", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d7c9d7554", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-5561f33395", ContainerID:"0609008f7483c4f129380f23ab73619dfbadabf46bdb746d041a5175a2cc4d57", Pod:"calico-apiserver-d7c9d7554-7phdh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f95b4d25ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:32.479251 containerd[1717]: 2025-11-08 00:07:32.436 [INFO][5980] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" Nov 8 00:07:32.479251 containerd[1717]: 2025-11-08 00:07:32.436 [INFO][5980] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" iface="eth0" netns="" Nov 8 00:07:32.479251 containerd[1717]: 2025-11-08 00:07:32.436 [INFO][5980] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" Nov 8 00:07:32.479251 containerd[1717]: 2025-11-08 00:07:32.436 [INFO][5980] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" Nov 8 00:07:32.479251 containerd[1717]: 2025-11-08 00:07:32.462 [INFO][5987] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" HandleID="k8s-pod-network.03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-eth0" Nov 8 00:07:32.479251 containerd[1717]: 2025-11-08 00:07:32.462 [INFO][5987] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:32.479251 containerd[1717]: 2025-11-08 00:07:32.462 [INFO][5987] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:32.479251 containerd[1717]: 2025-11-08 00:07:32.474 [WARNING][5987] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" HandleID="k8s-pod-network.03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-eth0" Nov 8 00:07:32.479251 containerd[1717]: 2025-11-08 00:07:32.474 [INFO][5987] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" HandleID="k8s-pod-network.03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" Workload="ci--4081.3.6--n--5561f33395-k8s-calico--apiserver--d7c9d7554--7phdh-eth0" Nov 8 00:07:32.479251 containerd[1717]: 2025-11-08 00:07:32.475 [INFO][5987] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:32.479251 containerd[1717]: 2025-11-08 00:07:32.477 [INFO][5980] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9" Nov 8 00:07:32.480757 containerd[1717]: time="2025-11-08T00:07:32.479760063Z" level=info msg="TearDown network for sandbox \"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9\" successfully" Nov 8 00:07:32.489379 containerd[1717]: time="2025-11-08T00:07:32.489205846Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:07:32.489379 containerd[1717]: time="2025-11-08T00:07:32.489280806Z" level=info msg="RemovePodSandbox \"03ba8a382a78f539524f8cb5215503bfea505e04e23ec82a28b5b05493ecafe9\" returns successfully" Nov 8 00:07:37.578767 containerd[1717]: time="2025-11-08T00:07:37.578492403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:07:37.833393 containerd[1717]: time="2025-11-08T00:07:37.833258545Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:37.836963 containerd[1717]: time="2025-11-08T00:07:37.836798898Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:07:37.837301 containerd[1717]: time="2025-11-08T00:07:37.836982298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:07:37.838058 kubelet[3181]: E1108 00:07:37.837154 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:07:37.838058 kubelet[3181]: E1108 00:07:37.837229 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:07:37.838058 kubelet[3181]: E1108 00:07:37.837310 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod 
csi-node-driver-8jr45_calico-system(70822f24-312d-4073-b204-5c6b6a26eb84): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:37.839274 containerd[1717]: time="2025-11-08T00:07:37.839226494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:07:38.121646 containerd[1717]: time="2025-11-08T00:07:38.121515546Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:38.126031 containerd[1717]: time="2025-11-08T00:07:38.125884698Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:07:38.126031 containerd[1717]: time="2025-11-08T00:07:38.125976298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:07:38.126874 kubelet[3181]: E1108 00:07:38.126319 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:07:38.126874 kubelet[3181]: E1108 00:07:38.126366 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:07:38.126874 kubelet[3181]: E1108 00:07:38.126433 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-8jr45_calico-system(70822f24-312d-4073-b204-5c6b6a26eb84): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:38.127081 kubelet[3181]: E1108 00:07:38.126475 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84" Nov 8 00:07:38.578280 containerd[1717]: time="2025-11-08T00:07:38.578053165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:07:38.871390 containerd[1717]: time="2025-11-08T00:07:38.871076438Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 8 00:07:38.877023 containerd[1717]: time="2025-11-08T00:07:38.876850707Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:07:38.877023 containerd[1717]: time="2025-11-08T00:07:38.876988227Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:07:38.877944 kubelet[3181]: E1108 00:07:38.877310 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:07:38.877944 kubelet[3181]: E1108 00:07:38.877359 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:07:38.877944 kubelet[3181]: E1108 00:07:38.877523 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-d7c9d7554-7phdh_calico-apiserver(aad8189b-54ce-422e-a68f-46b67abadfe8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:38.877944 
kubelet[3181]: E1108 00:07:38.877556 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8" Nov 8 00:07:38.879020 containerd[1717]: time="2025-11-08T00:07:38.878985703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:07:39.167973 containerd[1717]: time="2025-11-08T00:07:39.167198225Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:39.171431 containerd[1717]: time="2025-11-08T00:07:39.171303097Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:07:39.171431 containerd[1717]: time="2025-11-08T00:07:39.171376417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:07:39.172264 kubelet[3181]: E1108 00:07:39.171713 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:07:39.172264 kubelet[3181]: E1108 00:07:39.171759 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:07:39.172264 kubelet[3181]: E1108 00:07:39.171824 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-847b7fbf74-mcdn7_calico-apiserver(8dcb36b7-7066-4355-aa27-d1ae27c36df5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:39.172264 kubelet[3181]: E1108 00:07:39.171858 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7" podUID="8dcb36b7-7066-4355-aa27-d1ae27c36df5" Nov 8 00:07:39.580629 containerd[1717]: time="2025-11-08T00:07:39.579048204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:07:39.868590 containerd[1717]: time="2025-11-08T00:07:39.868472123Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:39.880086 containerd[1717]: time="2025-11-08T00:07:39.879951223Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:07:39.880086 containerd[1717]: time="2025-11-08T00:07:39.879968703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:07:39.880602 kubelet[3181]: E1108 00:07:39.880198 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:07:39.880602 kubelet[3181]: E1108 00:07:39.880243 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:07:39.880602 kubelet[3181]: E1108 00:07:39.880314 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-d7c9d7554-7cc89_calico-apiserver(1eab03fd-9695-41da-8445-49749eaa2864): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:39.880602 kubelet[3181]: E1108 00:07:39.880344 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864" Nov 8 00:07:41.578160 containerd[1717]: time="2025-11-08T00:07:41.578117448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:07:41.849506 containerd[1717]: time="2025-11-08T00:07:41.849369040Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:41.854005 containerd[1717]: time="2025-11-08T00:07:41.853927832Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:07:41.854143 containerd[1717]: time="2025-11-08T00:07:41.853966871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:07:41.854692 kubelet[3181]: E1108 00:07:41.854400 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:07:41.854692 kubelet[3181]: E1108 00:07:41.854454 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:07:41.854692 kubelet[3181]: E1108 00:07:41.854570 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start 
failed in pod goldmane-7c778bb748-9wxxn_calico-system(fb0c2d00-8a9d-4218-9dbc-6f07fda31565): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:41.855586 kubelet[3181]: E1108 00:07:41.854974 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9wxxn" podUID="fb0c2d00-8a9d-4218-9dbc-6f07fda31565" Nov 8 00:07:42.580743 containerd[1717]: time="2025-11-08T00:07:42.580609524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:07:42.872381 containerd[1717]: time="2025-11-08T00:07:42.872116400Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:42.875574 containerd[1717]: time="2025-11-08T00:07:42.875450914Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:07:42.875574 containerd[1717]: time="2025-11-08T00:07:42.875529874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:07:42.875749 kubelet[3181]: E1108 00:07:42.875696 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:07:42.875749 kubelet[3181]: E1108 00:07:42.875744 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:07:42.876070 kubelet[3181]: E1108 00:07:42.875816 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-74489dd677-kvxft_calico-system(41d4b3ef-ef6f-40aa-890a-556514760a53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:42.876070 kubelet[3181]: E1108 00:07:42.875846 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74489dd677-kvxft" podUID="41d4b3ef-ef6f-40aa-890a-556514760a53" Nov 8 00:07:45.581538 kubelet[3181]: E1108 00:07:45.581487 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764b4db9fd-g5pz9" podUID="967f9b6c-67db-4dea-be69-0b8cc8010676" Nov 8 00:07:50.578646 kubelet[3181]: E1108 00:07:50.578505 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7" podUID="8dcb36b7-7066-4355-aa27-d1ae27c36df5" Nov 8 00:07:51.578176 kubelet[3181]: E1108 00:07:51.578098 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84" Nov 8 00:07:52.578450 kubelet[3181]: E1108 00:07:52.578363 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8" Nov 8 00:07:53.578555 kubelet[3181]: E1108 00:07:53.578503 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864" Nov 8 00:07:54.704309 systemd[1]: 
run-containerd-runc-k8s.io-0713b594ed0af21be3dba9a1db02d536f1bd2b88760bc001a82fabaa4b911f04-runc.zn6q0o.mount: Deactivated successfully. Nov 8 00:07:56.579391 kubelet[3181]: E1108 00:07:56.578664 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74489dd677-kvxft" podUID="41d4b3ef-ef6f-40aa-890a-556514760a53" Nov 8 00:07:56.579391 kubelet[3181]: E1108 00:07:56.579185 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9wxxn" podUID="fb0c2d00-8a9d-4218-9dbc-6f07fda31565" Nov 8 00:07:57.581405 containerd[1717]: time="2025-11-08T00:07:57.581156057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:07:57.849236 containerd[1717]: time="2025-11-08T00:07:57.848775363Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:57.853555 containerd[1717]: time="2025-11-08T00:07:57.852221677Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:07:57.853760 containerd[1717]: time="2025-11-08T00:07:57.852291317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:07:57.854033 kubelet[3181]: E1108 00:07:57.853988 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:07:57.854372 kubelet[3181]: E1108 00:07:57.854037 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:07:57.854372 kubelet[3181]: E1108 00:07:57.854115 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-764b4db9fd-g5pz9_calico-system(967f9b6c-67db-4dea-be69-0b8cc8010676): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:57.856628 containerd[1717]: time="2025-11-08T00:07:57.856310950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:07:58.145410 containerd[1717]: time="2025-11-08T00:07:58.145146260Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:58.149100 
containerd[1717]: time="2025-11-08T00:07:58.148926894Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:07:58.149100 containerd[1717]: time="2025-11-08T00:07:58.148971293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:07:58.149876 kubelet[3181]: E1108 00:07:58.149354 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:07:58.149876 kubelet[3181]: E1108 00:07:58.149407 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:07:58.149876 kubelet[3181]: E1108 00:07:58.149479 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-764b4db9fd-g5pz9_calico-system(967f9b6c-67db-4dea-be69-0b8cc8010676): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 
8 00:07:58.150054 kubelet[3181]: E1108 00:07:58.149518 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764b4db9fd-g5pz9" podUID="967f9b6c-67db-4dea-be69-0b8cc8010676" Nov 8 00:08:03.579014 containerd[1717]: time="2025-11-08T00:08:03.578907734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:08:03.870389 containerd[1717]: time="2025-11-08T00:08:03.870229843Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:08:03.873850 containerd[1717]: time="2025-11-08T00:08:03.873738316Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:08:03.873850 containerd[1717]: time="2025-11-08T00:08:03.873826596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:08:03.875100 kubelet[3181]: E1108 00:08:03.874147 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:08:03.875100 kubelet[3181]: E1108 00:08:03.874196 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:08:03.875100 kubelet[3181]: E1108 00:08:03.874377 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-8jr45_calico-system(70822f24-312d-4073-b204-5c6b6a26eb84): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:08:03.875506 containerd[1717]: time="2025-11-08T00:08:03.875030514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:08:04.147439 containerd[1717]: time="2025-11-08T00:08:04.147163417Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:08:04.151773 containerd[1717]: time="2025-11-08T00:08:04.151623129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:08:04.151773 containerd[1717]: time="2025-11-08T00:08:04.151740569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:08:04.154446 kubelet[3181]: E1108 00:08:04.152318 3181 log.go:32] "PullImage from 
image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:08:04.154446 kubelet[3181]: E1108 00:08:04.152364 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:08:04.154446 kubelet[3181]: E1108 00:08:04.152528 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-847b7fbf74-mcdn7_calico-apiserver(8dcb36b7-7066-4355-aa27-d1ae27c36df5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:08:04.154446 kubelet[3181]: E1108 00:08:04.152566 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7" podUID="8dcb36b7-7066-4355-aa27-d1ae27c36df5" Nov 8 00:08:04.155202 containerd[1717]: time="2025-11-08T00:08:04.154969683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:08:04.426143 containerd[1717]: 
time="2025-11-08T00:08:04.426006308Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:08:04.431402 containerd[1717]: time="2025-11-08T00:08:04.431280618Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:08:04.431402 containerd[1717]: time="2025-11-08T00:08:04.431355418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:08:04.432250 kubelet[3181]: E1108 00:08:04.431566 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:08:04.432250 kubelet[3181]: E1108 00:08:04.431614 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:08:04.432250 kubelet[3181]: E1108 00:08:04.431682 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-8jr45_calico-system(70822f24-312d-4073-b204-5c6b6a26eb84): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:08:04.432384 kubelet[3181]: E1108 00:08:04.431724 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84" Nov 8 00:08:05.578749 containerd[1717]: time="2025-11-08T00:08:05.578629364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:08:05.858858 containerd[1717]: time="2025-11-08T00:08:05.858421574Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:08:05.876823 containerd[1717]: time="2025-11-08T00:08:05.872286228Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:08:05.877245 containerd[1717]: time="2025-11-08T00:08:05.872362788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: 
active requests=0, bytes read=77" Nov 8 00:08:05.877305 kubelet[3181]: E1108 00:08:05.877074 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:08:05.877305 kubelet[3181]: E1108 00:08:05.877137 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:08:05.877599 kubelet[3181]: E1108 00:08:05.877356 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-d7c9d7554-7phdh_calico-apiserver(aad8189b-54ce-422e-a68f-46b67abadfe8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:08:05.878318 kubelet[3181]: E1108 00:08:05.877401 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8" Nov 8 00:08:08.580785 containerd[1717]: time="2025-11-08T00:08:08.579925079Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:08:08.879193 containerd[1717]: time="2025-11-08T00:08:08.879055325Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:08:08.882671 containerd[1717]: time="2025-11-08T00:08:08.882553518Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:08:08.882671 containerd[1717]: time="2025-11-08T00:08:08.882625878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:08:08.883637 kubelet[3181]: E1108 00:08:08.882872 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:08:08.883637 kubelet[3181]: E1108 00:08:08.882927 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:08:08.883637 kubelet[3181]: E1108 00:08:08.883112 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-9wxxn_calico-system(fb0c2d00-8a9d-4218-9dbc-6f07fda31565): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:08:08.883637 kubelet[3181]: E1108 00:08:08.883143 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9wxxn" podUID="fb0c2d00-8a9d-4218-9dbc-6f07fda31565" Nov 8 00:08:08.884021 containerd[1717]: time="2025-11-08T00:08:08.883650276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:08:09.160681 containerd[1717]: time="2025-11-08T00:08:09.159923965Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:08:09.163629 containerd[1717]: time="2025-11-08T00:08:09.163356639Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:08:09.163629 containerd[1717]: time="2025-11-08T00:08:09.163467158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:08:09.164187 kubelet[3181]: E1108 00:08:09.163887 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:08:09.164187 
kubelet[3181]: E1108 00:08:09.163950 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:08:09.164705 kubelet[3181]: E1108 00:08:09.164025 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-d7c9d7554-7cc89_calico-apiserver(1eab03fd-9695-41da-8445-49749eaa2864): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:08:09.164805 kubelet[3181]: E1108 00:08:09.164785 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864" Nov 8 00:08:10.582438 kubelet[3181]: E1108 00:08:10.582379 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to 
\"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764b4db9fd-g5pz9" podUID="967f9b6c-67db-4dea-be69-0b8cc8010676" Nov 8 00:08:11.581625 containerd[1717]: time="2025-11-08T00:08:11.581390603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:08:11.876928 containerd[1717]: time="2025-11-08T00:08:11.876657976Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:08:11.880993 containerd[1717]: time="2025-11-08T00:08:11.880858049Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:08:11.880993 containerd[1717]: time="2025-11-08T00:08:11.880961649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:08:11.881601 kubelet[3181]: E1108 00:08:11.881343 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:08:11.881601 kubelet[3181]: E1108 00:08:11.881406 3181 kuberuntime_image.go:43] "Failed to 
pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:08:11.881601 kubelet[3181]: E1108 00:08:11.881499 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-74489dd677-kvxft_calico-system(41d4b3ef-ef6f-40aa-890a-556514760a53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:08:11.882383 kubelet[3181]: E1108 00:08:11.882342 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74489dd677-kvxft" podUID="41d4b3ef-ef6f-40aa-890a-556514760a53" Nov 8 00:08:15.578789 kubelet[3181]: E1108 00:08:15.578740 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84" Nov 8 00:08:18.580513 kubelet[3181]: E1108 00:08:18.580162 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7" podUID="8dcb36b7-7066-4355-aa27-d1ae27c36df5" Nov 8 00:08:19.578984 kubelet[3181]: E1108 00:08:19.578924 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8" Nov 8 00:08:21.577818 kubelet[3181]: E1108 00:08:21.577769 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9wxxn" podUID="fb0c2d00-8a9d-4218-9dbc-6f07fda31565" Nov 8 00:08:24.578701 kubelet[3181]: E1108 00:08:24.578266 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74489dd677-kvxft" podUID="41d4b3ef-ef6f-40aa-890a-556514760a53" Nov 8 00:08:24.580460 kubelet[3181]: E1108 00:08:24.580284 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864" Nov 8 00:08:25.578366 kubelet[3181]: E1108 00:08:25.578314 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764b4db9fd-g5pz9" podUID="967f9b6c-67db-4dea-be69-0b8cc8010676" Nov 8 00:08:28.579195 kubelet[3181]: E1108 00:08:28.579143 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84" Nov 8 00:08:30.578989 kubelet[3181]: E1108 00:08:30.578670 3181 pod_workers.go:1324] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8" Nov 8 00:08:31.578964 kubelet[3181]: E1108 00:08:31.578824 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7" podUID="8dcb36b7-7066-4355-aa27-d1ae27c36df5" Nov 8 00:08:36.579327 kubelet[3181]: E1108 00:08:36.579170 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864" Nov 8 00:08:36.584020 kubelet[3181]: E1108 00:08:36.583734 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9wxxn" podUID="fb0c2d00-8a9d-4218-9dbc-6f07fda31565" Nov 8 00:08:39.579964 kubelet[3181]: E1108 00:08:39.579684 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74489dd677-kvxft" podUID="41d4b3ef-ef6f-40aa-890a-556514760a53" Nov 8 00:08:39.581149 containerd[1717]: time="2025-11-08T00:08:39.580788377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:08:39.899793 containerd[1717]: time="2025-11-08T00:08:39.899624713Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:08:39.907012 containerd[1717]: time="2025-11-08T00:08:39.906838020Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:08:39.907012 containerd[1717]: time="2025-11-08T00:08:39.906917939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, 
bytes read=73" Nov 8 00:08:39.907224 kubelet[3181]: E1108 00:08:39.907172 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:08:39.907272 kubelet[3181]: E1108 00:08:39.907224 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:08:39.908030 kubelet[3181]: E1108 00:08:39.907335 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-764b4db9fd-g5pz9_calico-system(967f9b6c-67db-4dea-be69-0b8cc8010676): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:08:39.908960 containerd[1717]: time="2025-11-08T00:08:39.908867856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:08:40.222380 containerd[1717]: time="2025-11-08T00:08:40.221881802Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:08:40.228174 containerd[1717]: time="2025-11-08T00:08:40.228054711Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:08:40.228174 containerd[1717]: time="2025-11-08T00:08:40.228122791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:08:40.228541 kubelet[3181]: E1108 00:08:40.228493 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:08:40.228602 kubelet[3181]: E1108 00:08:40.228548 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:08:40.228651 kubelet[3181]: E1108 00:08:40.228631 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-764b4db9fd-g5pz9_calico-system(967f9b6c-67db-4dea-be69-0b8cc8010676): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:08:40.228713 kubelet[3181]: E1108 00:08:40.228677 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764b4db9fd-g5pz9" podUID="967f9b6c-67db-4dea-be69-0b8cc8010676" Nov 8 00:08:43.578971 kubelet[3181]: E1108 00:08:43.578890 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84" Nov 8 00:08:45.578861 kubelet[3181]: E1108 00:08:45.578179 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8" Nov 8 00:08:45.579265 containerd[1717]: time="2025-11-08T00:08:45.578741706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:08:45.868919 containerd[1717]: time="2025-11-08T00:08:45.868790415Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:08:45.873105 containerd[1717]: time="2025-11-08T00:08:45.873001047Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:08:45.873105 containerd[1717]: time="2025-11-08T00:08:45.873057087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:08:45.873329 kubelet[3181]: E1108 00:08:45.873267 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:08:45.873384 kubelet[3181]: E1108 00:08:45.873340 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" 
Nov 8 00:08:45.873836 kubelet[3181]: E1108 00:08:45.873447 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-847b7fbf74-mcdn7_calico-apiserver(8dcb36b7-7066-4355-aa27-d1ae27c36df5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:08:45.873836 kubelet[3181]: E1108 00:08:45.873488 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7" podUID="8dcb36b7-7066-4355-aa27-d1ae27c36df5" Nov 8 00:08:50.579259 containerd[1717]: time="2025-11-08T00:08:50.578770226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:08:50.868295 containerd[1717]: time="2025-11-08T00:08:50.867977701Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:08:50.871609 containerd[1717]: time="2025-11-08T00:08:50.871498813Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:08:50.871609 containerd[1717]: time="2025-11-08T00:08:50.871568053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:08:50.871994 
kubelet[3181]: E1108 00:08:50.871948 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:08:50.872289 kubelet[3181]: E1108 00:08:50.872004 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:08:50.872289 kubelet[3181]: E1108 00:08:50.872082 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-9wxxn_calico-system(fb0c2d00-8a9d-4218-9dbc-6f07fda31565): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:08:50.872289 kubelet[3181]: E1108 00:08:50.872114 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9wxxn" podUID="fb0c2d00-8a9d-4218-9dbc-6f07fda31565" Nov 8 00:08:51.582695 kubelet[3181]: E1108 00:08:51.582629 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764b4db9fd-g5pz9" podUID="967f9b6c-67db-4dea-be69-0b8cc8010676" Nov 8 00:08:51.595793 containerd[1717]: time="2025-11-08T00:08:51.595729718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:08:51.872604 containerd[1717]: time="2025-11-08T00:08:51.872466140Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:08:51.876852 containerd[1717]: time="2025-11-08T00:08:51.876739171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:08:51.876852 containerd[1717]: time="2025-11-08T00:08:51.876809491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:08:51.877201 kubelet[3181]: E1108 00:08:51.877154 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:08:51.877509 kubelet[3181]: E1108 00:08:51.877206 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:08:51.877509 kubelet[3181]: E1108 00:08:51.877284 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-d7c9d7554-7cc89_calico-apiserver(1eab03fd-9695-41da-8445-49749eaa2864): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:08:51.877509 kubelet[3181]: E1108 00:08:51.877318 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864" Nov 8 00:08:53.580168 containerd[1717]: time="2025-11-08T00:08:53.579917572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:08:53.991892 containerd[1717]: time="2025-11-08T00:08:53.991631254Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 8 00:08:53.995431 containerd[1717]: time="2025-11-08T00:08:53.995311285Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:08:53.995431 containerd[1717]: time="2025-11-08T00:08:53.995418485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:08:53.995640 kubelet[3181]: E1108 00:08:53.995560 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:08:53.995640 kubelet[3181]: E1108 00:08:53.995611 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:08:53.996913 kubelet[3181]: E1108 00:08:53.995678 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-74489dd677-kvxft_calico-system(41d4b3ef-ef6f-40aa-890a-556514760a53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:08:53.996913 kubelet[3181]: E1108 00:08:53.995710 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74489dd677-kvxft" podUID="41d4b3ef-ef6f-40aa-890a-556514760a53" Nov 8 00:08:56.582505 containerd[1717]: time="2025-11-08T00:08:56.580476452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:08:56.871507 containerd[1717]: time="2025-11-08T00:08:56.871209469Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:08:56.875488 containerd[1717]: time="2025-11-08T00:08:56.875425501Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:08:56.875881 containerd[1717]: time="2025-11-08T00:08:56.875663661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:08:56.875972 kubelet[3181]: E1108 00:08:56.875921 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:08:56.876264 
kubelet[3181]: E1108 00:08:56.875981 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:08:56.876264 kubelet[3181]: E1108 00:08:56.876053 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-8jr45_calico-system(70822f24-312d-4073-b204-5c6b6a26eb84): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:08:56.877062 containerd[1717]: time="2025-11-08T00:08:56.877012378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:08:57.143260 containerd[1717]: time="2025-11-08T00:08:57.143133561Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:08:57.148452 containerd[1717]: time="2025-11-08T00:08:57.148307071Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:08:57.148452 containerd[1717]: time="2025-11-08T00:08:57.148414871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:08:57.149606 kubelet[3181]: E1108 00:08:57.149160 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:08:57.149606 kubelet[3181]: E1108 00:08:57.149221 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:08:57.149606 kubelet[3181]: E1108 00:08:57.149333 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-8jr45_calico-system(70822f24-312d-4073-b204-5c6b6a26eb84): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:08:57.149800 kubelet[3181]: E1108 00:08:57.149379 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84" Nov 8 00:08:57.578100 containerd[1717]: time="2025-11-08T00:08:57.578013108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:08:57.861953 containerd[1717]: time="2025-11-08T00:08:57.861636378Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:08:57.867063 containerd[1717]: time="2025-11-08T00:08:57.866902649Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:08:57.867063 containerd[1717]: time="2025-11-08T00:08:57.867037448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:08:57.867243 kubelet[3181]: E1108 00:08:57.867168 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:08:57.867243 kubelet[3181]: E1108 00:08:57.867211 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:08:57.867346 kubelet[3181]: E1108 00:08:57.867291 3181 
kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-d7c9d7554-7phdh_calico-apiserver(aad8189b-54ce-422e-a68f-46b67abadfe8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:08:57.867346 kubelet[3181]: E1108 00:08:57.867331 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8" Nov 8 00:09:01.577780 kubelet[3181]: E1108 00:09:01.577703 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7" podUID="8dcb36b7-7066-4355-aa27-d1ae27c36df5" Nov 8 00:09:02.578665 kubelet[3181]: E1108 00:09:02.578563 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864" Nov 8 00:09:05.578534 kubelet[3181]: E1108 00:09:05.578453 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9wxxn" podUID="fb0c2d00-8a9d-4218-9dbc-6f07fda31565" Nov 8 00:09:05.579605 kubelet[3181]: E1108 00:09:05.579443 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764b4db9fd-g5pz9" podUID="967f9b6c-67db-4dea-be69-0b8cc8010676" Nov 8 00:09:06.579006 kubelet[3181]: E1108 
00:09:06.578954 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74489dd677-kvxft" podUID="41d4b3ef-ef6f-40aa-890a-556514760a53" Nov 8 00:09:12.583469 kubelet[3181]: E1108 00:09:12.583048 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8" Nov 8 00:09:12.584517 kubelet[3181]: E1108 00:09:12.584452 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84" Nov 8 00:09:14.579973 kubelet[3181]: E1108 00:09:14.578382 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864" Nov 8 00:09:16.579829 kubelet[3181]: E1108 00:09:16.578890 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7" podUID="8dcb36b7-7066-4355-aa27-d1ae27c36df5" Nov 8 00:09:17.580018 kubelet[3181]: E1108 00:09:17.579957 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764b4db9fd-g5pz9" podUID="967f9b6c-67db-4dea-be69-0b8cc8010676" Nov 8 00:09:19.579107 kubelet[3181]: E1108 00:09:19.578753 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9wxxn" podUID="fb0c2d00-8a9d-4218-9dbc-6f07fda31565" Nov 8 00:09:21.580089 kubelet[3181]: E1108 00:09:21.578948 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74489dd677-kvxft" podUID="41d4b3ef-ef6f-40aa-890a-556514760a53" 
Nov 8 00:09:23.580428 kubelet[3181]: E1108 00:09:23.580369 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84" Nov 8 00:09:25.579134 kubelet[3181]: E1108 00:09:25.578794 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8" Nov 8 00:09:28.577950 kubelet[3181]: E1108 00:09:28.577741 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864" Nov 8 00:09:29.577753 kubelet[3181]: E1108 00:09:29.577575 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764b4db9fd-g5pz9" podUID="967f9b6c-67db-4dea-be69-0b8cc8010676" Nov 8 00:09:31.578094 kubelet[3181]: E1108 00:09:31.578038 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7" 
podUID="8dcb36b7-7066-4355-aa27-d1ae27c36df5" Nov 8 00:09:33.205254 systemd[1]: Started sshd@7-10.200.20.15:22-10.200.16.10:54844.service - OpenSSH per-connection server daemon (10.200.16.10:54844). Nov 8 00:09:33.578948 kubelet[3181]: E1108 00:09:33.578840 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9wxxn" podUID="fb0c2d00-8a9d-4218-9dbc-6f07fda31565" Nov 8 00:09:33.580228 kubelet[3181]: E1108 00:09:33.578980 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74489dd677-kvxft" podUID="41d4b3ef-ef6f-40aa-890a-556514760a53" Nov 8 00:09:33.666029 sshd[6153]: Accepted publickey for core from 10.200.16.10 port 54844 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:09:33.670020 sshd[6153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:09:33.677673 systemd-logind[1698]: New session 10 of user core. Nov 8 00:09:33.683194 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 8 00:09:34.124130 sshd[6153]: pam_unix(sshd:session): session closed for user core Nov 8 00:09:34.129106 systemd-logind[1698]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:09:34.129750 systemd[1]: sshd@7-10.200.20.15:22-10.200.16.10:54844.service: Deactivated successfully. Nov 8 00:09:34.133674 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:09:34.136655 systemd-logind[1698]: Removed session 10. Nov 8 00:09:35.579096 kubelet[3181]: E1108 00:09:35.578967 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84" Nov 8 00:09:37.577679 kubelet[3181]: E1108 00:09:37.577630 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8" Nov 8 00:09:39.213286 systemd[1]: Started sshd@8-10.200.20.15:22-10.200.16.10:54858.service - OpenSSH per-connection server daemon (10.200.16.10:54858). Nov 8 00:09:39.577739 kubelet[3181]: E1108 00:09:39.577342 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864" Nov 8 00:09:39.669311 sshd[6169]: Accepted publickey for core from 10.200.16.10 port 54858 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:09:39.671152 sshd[6169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:09:39.677362 systemd-logind[1698]: New session 11 of user core. Nov 8 00:09:39.684796 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:09:40.098411 sshd[6169]: pam_unix(sshd:session): session closed for user core Nov 8 00:09:40.103255 systemd-logind[1698]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:09:40.104569 systemd[1]: sshd@8-10.200.20.15:22-10.200.16.10:54858.service: Deactivated successfully. Nov 8 00:09:40.108223 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:09:40.110049 systemd-logind[1698]: Removed session 11. 
Nov 8 00:09:42.581978 kubelet[3181]: E1108 00:09:42.581149 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764b4db9fd-g5pz9" podUID="967f9b6c-67db-4dea-be69-0b8cc8010676" Nov 8 00:09:44.580981 kubelet[3181]: E1108 00:09:44.580366 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7" podUID="8dcb36b7-7066-4355-aa27-d1ae27c36df5" Nov 8 00:09:44.580981 kubelet[3181]: E1108 00:09:44.580590 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74489dd677-kvxft" podUID="41d4b3ef-ef6f-40aa-890a-556514760a53" Nov 8 00:09:45.180243 systemd[1]: Started sshd@9-10.200.20.15:22-10.200.16.10:33894.service - OpenSSH per-connection server daemon (10.200.16.10:33894). Nov 8 00:09:45.577118 kubelet[3181]: E1108 00:09:45.577057 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9wxxn" podUID="fb0c2d00-8a9d-4218-9dbc-6f07fda31565" Nov 8 00:09:45.639921 sshd[6184]: Accepted publickey for core from 10.200.16.10 port 33894 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:09:45.641440 sshd[6184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:09:45.648341 systemd-logind[1698]: New session 12 of user core. Nov 8 00:09:45.652183 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:09:46.056579 sshd[6184]: pam_unix(sshd:session): session closed for user core Nov 8 00:09:46.061211 systemd-logind[1698]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:09:46.061578 systemd[1]: sshd@9-10.200.20.15:22-10.200.16.10:33894.service: Deactivated successfully. Nov 8 00:09:46.064443 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:09:46.065982 systemd-logind[1698]: Removed session 12. 
Nov 8 00:09:48.579745 kubelet[3181]: E1108 00:09:48.579612 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84" Nov 8 00:09:50.579061 kubelet[3181]: E1108 00:09:50.578665 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864" Nov 8 00:09:51.135892 systemd[1]: Started sshd@10-10.200.20.15:22-10.200.16.10:59966.service - OpenSSH per-connection server daemon (10.200.16.10:59966). 
Nov 8 00:09:51.577087 sshd[6197]: Accepted publickey for core from 10.200.16.10 port 59966 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:09:51.578094 sshd[6197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:09:51.585469 systemd-logind[1698]: New session 13 of user core. Nov 8 00:09:51.587165 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:09:51.987643 sshd[6197]: pam_unix(sshd:session): session closed for user core Nov 8 00:09:51.995503 systemd[1]: sshd@10-10.200.20.15:22-10.200.16.10:59966.service: Deactivated successfully. Nov 8 00:09:51.998259 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:09:52.000732 systemd-logind[1698]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:09:52.003121 systemd-logind[1698]: Removed session 13. Nov 8 00:09:52.580631 kubelet[3181]: E1108 00:09:52.580056 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8" Nov 8 00:09:54.709728 systemd[1]: run-containerd-runc-k8s.io-0713b594ed0af21be3dba9a1db02d536f1bd2b88760bc001a82fabaa4b911f04-runc.HKrjSo.mount: Deactivated successfully. Nov 8 00:09:57.083211 systemd[1]: Started sshd@11-10.200.20.15:22-10.200.16.10:59968.service - OpenSSH per-connection server daemon (10.200.16.10:59968). 
Nov 8 00:09:57.573184 sshd[6231]: Accepted publickey for core from 10.200.16.10 port 59968 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:09:57.575260 sshd[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:09:57.578812 kubelet[3181]: E1108 00:09:57.578617 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74489dd677-kvxft" podUID="41d4b3ef-ef6f-40aa-890a-556514760a53" Nov 8 00:09:57.580893 kubelet[3181]: E1108 00:09:57.580749 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764b4db9fd-g5pz9" podUID="967f9b6c-67db-4dea-be69-0b8cc8010676" Nov 8 
00:09:57.584956 systemd-logind[1698]: New session 14 of user core. Nov 8 00:09:57.590240 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 00:09:58.046502 sshd[6231]: pam_unix(sshd:session): session closed for user core Nov 8 00:09:58.051435 systemd[1]: sshd@11-10.200.20.15:22-10.200.16.10:59968.service: Deactivated successfully. Nov 8 00:09:58.054423 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:09:58.059438 systemd-logind[1698]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:09:58.062306 systemd-logind[1698]: Removed session 14. Nov 8 00:09:58.578178 kubelet[3181]: E1108 00:09:58.578130 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7" podUID="8dcb36b7-7066-4355-aa27-d1ae27c36df5" Nov 8 00:10:00.578814 kubelet[3181]: E1108 00:10:00.578071 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9wxxn" podUID="fb0c2d00-8a9d-4218-9dbc-6f07fda31565" Nov 8 00:10:02.580557 kubelet[3181]: E1108 00:10:02.580507 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84" Nov 8 00:10:02.581758 kubelet[3181]: E1108 00:10:02.581414 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864" Nov 8 00:10:03.126405 systemd[1]: Started sshd@12-10.200.20.15:22-10.200.16.10:41008.service - OpenSSH per-connection server daemon (10.200.16.10:41008). 
Nov 8 00:10:03.577870 sshd[6251]: Accepted publickey for core from 10.200.16.10 port 41008 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:10:03.580234 sshd[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:10:03.586334 systemd-logind[1698]: New session 15 of user core. Nov 8 00:10:03.592134 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 00:10:03.998446 sshd[6251]: pam_unix(sshd:session): session closed for user core Nov 8 00:10:04.006080 systemd[1]: sshd@12-10.200.20.15:22-10.200.16.10:41008.service: Deactivated successfully. Nov 8 00:10:04.006272 systemd-logind[1698]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:10:04.011823 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:10:04.014129 systemd-logind[1698]: Removed session 15. Nov 8 00:10:06.577769 kubelet[3181]: E1108 00:10:06.577712 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8" Nov 8 00:10:08.579021 kubelet[3181]: E1108 00:10:08.578971 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74489dd677-kvxft" podUID="41d4b3ef-ef6f-40aa-890a-556514760a53" Nov 8 00:10:09.081992 systemd[1]: Started sshd@13-10.200.20.15:22-10.200.16.10:41016.service - OpenSSH per-connection server daemon (10.200.16.10:41016). Nov 8 00:10:09.549232 sshd[6272]: Accepted publickey for core from 10.200.16.10 port 41016 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:10:09.549088 sshd[6272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:10:09.555675 systemd-logind[1698]: New session 16 of user core. Nov 8 00:10:09.561611 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 00:10:09.577960 containerd[1717]: time="2025-11-08T00:10:09.577889679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:10:09.872566 containerd[1717]: time="2025-11-08T00:10:09.872351092Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:09.876075 containerd[1717]: time="2025-11-08T00:10:09.875903329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:10:09.876075 containerd[1717]: time="2025-11-08T00:10:09.876034448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:10:09.876544 kubelet[3181]: E1108 00:10:09.876353 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:10:09.876544 kubelet[3181]: E1108 00:10:09.876500 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:10:09.877340 kubelet[3181]: E1108 00:10:09.876704 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-764b4db9fd-g5pz9_calico-system(967f9b6c-67db-4dea-be69-0b8cc8010676): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:09.878088 containerd[1717]: time="2025-11-08T00:10:09.877906406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:10:09.963194 sshd[6272]: pam_unix(sshd:session): session closed for user core Nov 8 00:10:09.967095 systemd[1]: sshd@13-10.200.20.15:22-10.200.16.10:41016.service: Deactivated successfully. Nov 8 00:10:09.969585 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:10:09.970763 systemd-logind[1698]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:10:09.972555 systemd-logind[1698]: Removed session 16. Nov 8 00:10:10.044427 systemd[1]: Started sshd@14-10.200.20.15:22-10.200.16.10:39128.service - OpenSSH per-connection server daemon (10.200.16.10:39128). 
Nov 8 00:10:10.183461 containerd[1717]: time="2025-11-08T00:10:10.183322729Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:10.187249 containerd[1717]: time="2025-11-08T00:10:10.187192364Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:10:10.187370 containerd[1717]: time="2025-11-08T00:10:10.187322604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:10:10.187887 kubelet[3181]: E1108 00:10:10.187518 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:10:10.187887 kubelet[3181]: E1108 00:10:10.187565 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:10:10.187887 kubelet[3181]: E1108 00:10:10.187631 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-764b4db9fd-g5pz9_calico-system(967f9b6c-67db-4dea-be69-0b8cc8010676): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:10.188059 kubelet[3181]: E1108 00:10:10.187668 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764b4db9fd-g5pz9" podUID="967f9b6c-67db-4dea-be69-0b8cc8010676" Nov 8 00:10:10.504822 sshd[6286]: Accepted publickey for core from 10.200.16.10 port 39128 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:10:10.506819 sshd[6286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:10:10.511059 systemd-logind[1698]: New session 17 of user core. Nov 8 00:10:10.520183 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:10:11.018494 sshd[6286]: pam_unix(sshd:session): session closed for user core Nov 8 00:10:11.024419 systemd[1]: sshd@14-10.200.20.15:22-10.200.16.10:39128.service: Deactivated successfully. Nov 8 00:10:11.026161 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:10:11.030592 systemd-logind[1698]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:10:11.031687 systemd-logind[1698]: Removed session 17. 
Nov 8 00:10:11.105927 systemd[1]: Started sshd@15-10.200.20.15:22-10.200.16.10:39138.service - OpenSSH per-connection server daemon (10.200.16.10:39138). Nov 8 00:10:11.561902 sshd[6297]: Accepted publickey for core from 10.200.16.10 port 39138 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:10:11.563527 sshd[6297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:10:11.567434 systemd-logind[1698]: New session 18 of user core. Nov 8 00:10:11.573287 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:10:11.581575 containerd[1717]: time="2025-11-08T00:10:11.580791470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:10:11.862471 containerd[1717]: time="2025-11-08T00:10:11.862266963Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:11.876549 containerd[1717]: time="2025-11-08T00:10:11.876373698Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:10:11.876549 containerd[1717]: time="2025-11-08T00:10:11.876512298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:10:11.878966 kubelet[3181]: E1108 00:10:11.876842 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:10:11.878966 kubelet[3181]: E1108 00:10:11.876886 3181 kuberuntime_image.go:43] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:10:11.878966 kubelet[3181]: E1108 00:10:11.876971 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-847b7fbf74-mcdn7_calico-apiserver(8dcb36b7-7066-4355-aa27-d1ae27c36df5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:11.878966 kubelet[3181]: E1108 00:10:11.877003 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7" podUID="8dcb36b7-7066-4355-aa27-d1ae27c36df5" Nov 8 00:10:12.019490 sshd[6297]: pam_unix(sshd:session): session closed for user core Nov 8 00:10:12.024416 systemd[1]: sshd@15-10.200.20.15:22-10.200.16.10:39138.service: Deactivated successfully. Nov 8 00:10:12.027665 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:10:12.034089 systemd-logind[1698]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:10:12.035061 systemd-logind[1698]: Removed session 18. 
Nov 8 00:10:14.580690 containerd[1717]: time="2025-11-08T00:10:14.580392108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:10:14.582834 kubelet[3181]: E1108 00:10:14.582584 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84" Nov 8 00:10:14.878338 containerd[1717]: time="2025-11-08T00:10:14.878077652Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:14.882456 containerd[1717]: time="2025-11-08T00:10:14.882280325Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:10:14.882456 containerd[1717]: time="2025-11-08T00:10:14.882347485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:10:14.882641 
kubelet[3181]: E1108 00:10:14.882543 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:10:14.882641 kubelet[3181]: E1108 00:10:14.882592 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:10:14.882714 kubelet[3181]: E1108 00:10:14.882656 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-9wxxn_calico-system(fb0c2d00-8a9d-4218-9dbc-6f07fda31565): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:14.882714 kubelet[3181]: E1108 00:10:14.882687 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9wxxn" podUID="fb0c2d00-8a9d-4218-9dbc-6f07fda31565" Nov 8 00:10:17.110102 systemd[1]: Started sshd@16-10.200.20.15:22-10.200.16.10:39152.service - OpenSSH per-connection server daemon (10.200.16.10:39152). 
Nov 8 00:10:17.570009 sshd[6311]: Accepted publickey for core from 10.200.16.10 port 39152 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:10:17.572208 sshd[6311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:10:17.580522 kubelet[3181]: E1108 00:10:17.579252 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8" Nov 8 00:10:17.584495 containerd[1717]: time="2025-11-08T00:10:17.581091013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:10:17.581354 systemd-logind[1698]: New session 19 of user core. Nov 8 00:10:17.585142 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 8 00:10:17.873970 containerd[1717]: time="2025-11-08T00:10:17.873901063Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:17.878949 containerd[1717]: time="2025-11-08T00:10:17.878876253Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:10:17.879105 containerd[1717]: time="2025-11-08T00:10:17.879031573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:10:17.880047 kubelet[3181]: E1108 00:10:17.879437 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:10:17.880047 kubelet[3181]: E1108 00:10:17.879502 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:10:17.880047 kubelet[3181]: E1108 00:10:17.879569 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-d7c9d7554-7cc89_calico-apiserver(1eab03fd-9695-41da-8445-49749eaa2864): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:17.880047 kubelet[3181]: E1108 00:10:17.879603 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864" Nov 8 00:10:17.989808 sshd[6311]: pam_unix(sshd:session): session closed for user core Nov 8 00:10:17.993140 systemd-logind[1698]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:10:17.995035 systemd[1]: sshd@16-10.200.20.15:22-10.200.16.10:39152.service: Deactivated successfully. Nov 8 00:10:17.997419 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:10:17.999252 systemd-logind[1698]: Removed session 19. 
Nov 8 00:10:22.581419 containerd[1717]: time="2025-11-08T00:10:22.580449336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:10:22.869448 containerd[1717]: time="2025-11-08T00:10:22.869306393Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:22.873470 containerd[1717]: time="2025-11-08T00:10:22.873196386Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:10:22.873470 containerd[1717]: time="2025-11-08T00:10:22.873328345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:10:22.874344 kubelet[3181]: E1108 00:10:22.873855 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:10:22.874344 kubelet[3181]: E1108 00:10:22.873904 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:10:22.874344 kubelet[3181]: E1108 00:10:22.873996 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod 
calico-kube-controllers-74489dd677-kvxft_calico-system(41d4b3ef-ef6f-40aa-890a-556514760a53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:22.874344 kubelet[3181]: E1108 00:10:22.874232 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74489dd677-kvxft" podUID="41d4b3ef-ef6f-40aa-890a-556514760a53" Nov 8 00:10:23.080063 systemd[1]: Started sshd@17-10.200.20.15:22-10.200.16.10:35212.service - OpenSSH per-connection server daemon (10.200.16.10:35212). Nov 8 00:10:23.534594 sshd[6340]: Accepted publickey for core from 10.200.16.10 port 35212 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:10:23.536911 sshd[6340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:10:23.542274 systemd-logind[1698]: New session 20 of user core. Nov 8 00:10:23.548266 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 8 00:10:23.580024 kubelet[3181]: E1108 00:10:23.579800 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764b4db9fd-g5pz9" podUID="967f9b6c-67db-4dea-be69-0b8cc8010676" Nov 8 00:10:23.954570 sshd[6340]: pam_unix(sshd:session): session closed for user core Nov 8 00:10:23.959017 systemd[1]: sshd@17-10.200.20.15:22-10.200.16.10:35212.service: Deactivated successfully. Nov 8 00:10:23.962451 systemd[1]: session-20.scope: Deactivated successfully. Nov 8 00:10:23.965598 systemd-logind[1698]: Session 20 logged out. Waiting for processes to exit. Nov 8 00:10:23.968577 systemd-logind[1698]: Removed session 20. 
Nov 8 00:10:24.581743 kubelet[3181]: E1108 00:10:24.581675 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7" podUID="8dcb36b7-7066-4355-aa27-d1ae27c36df5" Nov 8 00:10:26.580652 kubelet[3181]: E1108 00:10:26.579296 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9wxxn" podUID="fb0c2d00-8a9d-4218-9dbc-6f07fda31565" Nov 8 00:10:27.578420 containerd[1717]: time="2025-11-08T00:10:27.578377150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:10:27.853406 containerd[1717]: time="2025-11-08T00:10:27.853252893Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:27.858960 containerd[1717]: time="2025-11-08T00:10:27.857068006Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:10:27.858960 containerd[1717]: 
time="2025-11-08T00:10:27.857190046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:10:27.858960 containerd[1717]: time="2025-11-08T00:10:27.858548723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:10:27.859154 kubelet[3181]: E1108 00:10:27.857378 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:10:27.859154 kubelet[3181]: E1108 00:10:27.857428 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:10:27.859154 kubelet[3181]: E1108 00:10:27.857497 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-8jr45_calico-system(70822f24-312d-4073-b204-5c6b6a26eb84): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:28.145947 containerd[1717]: time="2025-11-08T00:10:28.144734006Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:28.151312 containerd[1717]: time="2025-11-08T00:10:28.151204394Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:10:28.151312 containerd[1717]: time="2025-11-08T00:10:28.151267994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:10:28.152673 kubelet[3181]: E1108 00:10:28.151610 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:10:28.152673 kubelet[3181]: E1108 00:10:28.152502 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:10:28.152673 kubelet[3181]: E1108 00:10:28.152630 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-8jr45_calico-system(70822f24-312d-4073-b204-5c6b6a26eb84): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:28.153172 kubelet[3181]: E1108 00:10:28.153126 3181 pod_workers.go:1324] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84" Nov 8 00:10:29.045262 systemd[1]: Started sshd@18-10.200.20.15:22-10.200.16.10:35218.service - OpenSSH per-connection server daemon (10.200.16.10:35218). Nov 8 00:10:29.494356 sshd[6390]: Accepted publickey for core from 10.200.16.10 port 35218 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:10:29.495781 sshd[6390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:10:29.499726 systemd-logind[1698]: New session 21 of user core. Nov 8 00:10:29.509131 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 8 00:10:29.903149 sshd[6390]: pam_unix(sshd:session): session closed for user core Nov 8 00:10:29.909168 systemd[1]: sshd@18-10.200.20.15:22-10.200.16.10:35218.service: Deactivated successfully. Nov 8 00:10:29.913767 systemd[1]: session-21.scope: Deactivated successfully. Nov 8 00:10:29.918729 systemd-logind[1698]: Session 21 logged out. Waiting for processes to exit. Nov 8 00:10:29.919810 systemd-logind[1698]: Removed session 21. 
Nov 8 00:10:30.580467 containerd[1717]: time="2025-11-08T00:10:30.580272759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:10:30.862259 containerd[1717]: time="2025-11-08T00:10:30.861911609Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:30.865414 containerd[1717]: time="2025-11-08T00:10:30.865341043Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:10:30.865624 containerd[1717]: time="2025-11-08T00:10:30.865450363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:10:30.866967 kubelet[3181]: E1108 00:10:30.865835 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:10:30.866967 kubelet[3181]: E1108 00:10:30.865888 3181 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:10:30.866967 kubelet[3181]: E1108 00:10:30.865972 3181 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-d7c9d7554-7phdh_calico-apiserver(aad8189b-54ce-422e-a68f-46b67abadfe8): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:30.866967 kubelet[3181]: E1108 00:10:30.866004 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8" Nov 8 00:10:31.579245 kubelet[3181]: E1108 00:10:31.578875 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864" Nov 8 00:10:34.995249 systemd[1]: Started sshd@19-10.200.20.15:22-10.200.16.10:51896.service - OpenSSH per-connection server daemon (10.200.16.10:51896). Nov 8 00:10:35.446641 sshd[6406]: Accepted publickey for core from 10.200.16.10 port 51896 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:10:35.448368 sshd[6406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:10:35.453574 systemd-logind[1698]: New session 22 of user core. 
Nov 8 00:10:35.459159 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 8 00:10:35.859589 sshd[6406]: pam_unix(sshd:session): session closed for user core Nov 8 00:10:35.863301 systemd[1]: sshd@19-10.200.20.15:22-10.200.16.10:51896.service: Deactivated successfully. Nov 8 00:10:35.866832 systemd[1]: session-22.scope: Deactivated successfully. Nov 8 00:10:35.868108 systemd-logind[1698]: Session 22 logged out. Waiting for processes to exit. Nov 8 00:10:35.869363 systemd-logind[1698]: Removed session 22. Nov 8 00:10:37.580021 kubelet[3181]: E1108 00:10:37.579799 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764b4db9fd-g5pz9" podUID="967f9b6c-67db-4dea-be69-0b8cc8010676" Nov 8 00:10:38.583980 kubelet[3181]: E1108 00:10:38.582112 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7" podUID="8dcb36b7-7066-4355-aa27-d1ae27c36df5" Nov 8 00:10:38.583980 kubelet[3181]: E1108 00:10:38.582456 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74489dd677-kvxft" podUID="41d4b3ef-ef6f-40aa-890a-556514760a53" Nov 8 00:10:39.578596 kubelet[3181]: E1108 00:10:39.577856 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9wxxn" podUID="fb0c2d00-8a9d-4218-9dbc-6f07fda31565" Nov 8 00:10:39.579123 kubelet[3181]: E1108 00:10:39.579062 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84" Nov 8 00:10:40.950206 systemd[1]: Started sshd@20-10.200.20.15:22-10.200.16.10:38166.service - OpenSSH per-connection server daemon (10.200.16.10:38166). Nov 8 00:10:41.402234 sshd[6421]: Accepted publickey for core from 10.200.16.10 port 38166 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:10:41.403297 sshd[6421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:10:41.407920 systemd-logind[1698]: New session 23 of user core. Nov 8 00:10:41.413164 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 8 00:10:41.815436 sshd[6421]: pam_unix(sshd:session): session closed for user core Nov 8 00:10:41.819368 systemd[1]: sshd@20-10.200.20.15:22-10.200.16.10:38166.service: Deactivated successfully. Nov 8 00:10:41.824449 systemd[1]: session-23.scope: Deactivated successfully. Nov 8 00:10:41.829126 systemd-logind[1698]: Session 23 logged out. Waiting for processes to exit. Nov 8 00:10:41.830625 systemd-logind[1698]: Removed session 23. 
Nov 8 00:10:45.579970 kubelet[3181]: E1108 00:10:45.579901 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8" Nov 8 00:10:46.579986 kubelet[3181]: E1108 00:10:46.579626 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864" Nov 8 00:10:46.903739 systemd[1]: Started sshd@21-10.200.20.15:22-10.200.16.10:38174.service - OpenSSH per-connection server daemon (10.200.16.10:38174). Nov 8 00:10:47.353597 sshd[6434]: Accepted publickey for core from 10.200.16.10 port 38174 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:10:47.354821 sshd[6434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:10:47.359292 systemd-logind[1698]: New session 24 of user core. Nov 8 00:10:47.370152 systemd[1]: Started session-24.scope - Session 24 of User core. 
Nov 8 00:10:47.768184 sshd[6434]: pam_unix(sshd:session): session closed for user core Nov 8 00:10:47.772440 systemd[1]: sshd@21-10.200.20.15:22-10.200.16.10:38174.service: Deactivated successfully. Nov 8 00:10:47.776491 systemd[1]: session-24.scope: Deactivated successfully. Nov 8 00:10:47.781482 systemd-logind[1698]: Session 24 logged out. Waiting for processes to exit. Nov 8 00:10:47.782671 systemd-logind[1698]: Removed session 24. Nov 8 00:10:47.859073 systemd[1]: Started sshd@22-10.200.20.15:22-10.200.16.10:38184.service - OpenSSH per-connection server daemon (10.200.16.10:38184). Nov 8 00:10:48.306309 sshd[6447]: Accepted publickey for core from 10.200.16.10 port 38184 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:10:48.308585 sshd[6447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:10:48.313547 systemd-logind[1698]: New session 25 of user core. Nov 8 00:10:48.324823 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 8 00:10:48.801665 sshd[6447]: pam_unix(sshd:session): session closed for user core Nov 8 00:10:48.810735 systemd-logind[1698]: Session 25 logged out. Waiting for processes to exit. Nov 8 00:10:48.811407 systemd[1]: sshd@22-10.200.20.15:22-10.200.16.10:38184.service: Deactivated successfully. Nov 8 00:10:48.815523 systemd[1]: session-25.scope: Deactivated successfully. Nov 8 00:10:48.819509 systemd-logind[1698]: Removed session 25. Nov 8 00:10:48.888293 systemd[1]: Started sshd@23-10.200.20.15:22-10.200.16.10:38186.service - OpenSSH per-connection server daemon (10.200.16.10:38186). Nov 8 00:10:49.321410 sshd[6458]: Accepted publickey for core from 10.200.16.10 port 38186 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:10:49.324335 sshd[6458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:10:49.331128 systemd-logind[1698]: New session 26 of user core. 
Nov 8 00:10:49.337186 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 8 00:10:50.226462 sshd[6458]: pam_unix(sshd:session): session closed for user core Nov 8 00:10:50.231970 systemd[1]: sshd@23-10.200.20.15:22-10.200.16.10:38186.service: Deactivated successfully. Nov 8 00:10:50.236806 systemd[1]: session-26.scope: Deactivated successfully. Nov 8 00:10:50.238512 systemd-logind[1698]: Session 26 logged out. Waiting for processes to exit. Nov 8 00:10:50.241108 systemd-logind[1698]: Removed session 26. Nov 8 00:10:50.330259 systemd[1]: Started sshd@24-10.200.20.15:22-10.200.16.10:43560.service - OpenSSH per-connection server daemon (10.200.16.10:43560). Nov 8 00:10:50.581727 kubelet[3181]: E1108 00:10:50.581674 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764b4db9fd-g5pz9" podUID="967f9b6c-67db-4dea-be69-0b8cc8010676" Nov 8 00:10:50.582193 kubelet[3181]: E1108 00:10:50.581786 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84" Nov 8 00:10:50.822508 sshd[6476]: Accepted publickey for core from 10.200.16.10 port 43560 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:10:50.824152 sshd[6476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:10:50.831718 systemd-logind[1698]: New session 27 of user core. Nov 8 00:10:50.836186 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 8 00:10:51.459477 sshd[6476]: pam_unix(sshd:session): session closed for user core Nov 8 00:10:51.465110 systemd[1]: sshd@24-10.200.20.15:22-10.200.16.10:43560.service: Deactivated successfully. Nov 8 00:10:51.468898 systemd[1]: session-27.scope: Deactivated successfully. Nov 8 00:10:51.471758 systemd-logind[1698]: Session 27 logged out. Waiting for processes to exit. Nov 8 00:10:51.474601 systemd-logind[1698]: Removed session 27. Nov 8 00:10:51.555266 systemd[1]: Started sshd@25-10.200.20.15:22-10.200.16.10:43568.service - OpenSSH per-connection server daemon (10.200.16.10:43568). 
Nov 8 00:10:51.578779 kubelet[3181]: E1108 00:10:51.577374 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9wxxn" podUID="fb0c2d00-8a9d-4218-9dbc-6f07fda31565" Nov 8 00:10:52.055258 sshd[6489]: Accepted publickey for core from 10.200.16.10 port 43568 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:10:52.056584 sshd[6489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:10:52.064647 systemd-logind[1698]: New session 28 of user core. Nov 8 00:10:52.068191 systemd[1]: Started session-28.scope - Session 28 of User core. Nov 8 00:10:52.520493 sshd[6489]: pam_unix(sshd:session): session closed for user core Nov 8 00:10:52.526607 systemd[1]: sshd@25-10.200.20.15:22-10.200.16.10:43568.service: Deactivated successfully. Nov 8 00:10:52.529028 systemd[1]: session-28.scope: Deactivated successfully. Nov 8 00:10:52.532842 systemd-logind[1698]: Session 28 logged out. Waiting for processes to exit. Nov 8 00:10:52.535256 systemd-logind[1698]: Removed session 28. 
Nov 8 00:10:53.577867 kubelet[3181]: E1108 00:10:53.577802 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74489dd677-kvxft" podUID="41d4b3ef-ef6f-40aa-890a-556514760a53" Nov 8 00:10:53.578599 kubelet[3181]: E1108 00:10:53.578274 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7" podUID="8dcb36b7-7066-4355-aa27-d1ae27c36df5" Nov 8 00:10:54.698055 systemd[1]: run-containerd-runc-k8s.io-0713b594ed0af21be3dba9a1db02d536f1bd2b88760bc001a82fabaa4b911f04-runc.Q4bUae.mount: Deactivated successfully. 
Nov 8 00:10:57.578616 kubelet[3181]: E1108 00:10:57.578568 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8" Nov 8 00:10:57.601126 systemd[1]: Started sshd@26-10.200.20.15:22-10.200.16.10:43578.service - OpenSSH per-connection server daemon (10.200.16.10:43578). Nov 8 00:10:58.017831 sshd[6525]: Accepted publickey for core from 10.200.16.10 port 43578 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:10:58.019288 sshd[6525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:10:58.026346 systemd-logind[1698]: New session 29 of user core. Nov 8 00:10:58.030125 systemd[1]: Started session-29.scope - Session 29 of User core. Nov 8 00:10:58.413439 sshd[6525]: pam_unix(sshd:session): session closed for user core Nov 8 00:10:58.417671 systemd[1]: sshd@26-10.200.20.15:22-10.200.16.10:43578.service: Deactivated successfully. Nov 8 00:10:58.422108 systemd[1]: session-29.scope: Deactivated successfully. Nov 8 00:10:58.425683 systemd-logind[1698]: Session 29 logged out. Waiting for processes to exit. Nov 8 00:10:58.428775 systemd-logind[1698]: Removed session 29. 
Nov 8 00:11:01.578551 kubelet[3181]: E1108 00:11:01.578369 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864" Nov 8 00:11:01.579651 kubelet[3181]: E1108 00:11:01.579492 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84" Nov 8 00:11:03.512255 systemd[1]: Started sshd@27-10.200.20.15:22-10.200.16.10:52546.service - OpenSSH per-connection server daemon (10.200.16.10:52546). 
Nov 8 00:11:04.004981 sshd[6540]: Accepted publickey for core from 10.200.16.10 port 52546 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:11:04.006927 sshd[6540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:11:04.012470 systemd-logind[1698]: New session 30 of user core. Nov 8 00:11:04.018178 systemd[1]: Started session-30.scope - Session 30 of User core. Nov 8 00:11:04.437784 sshd[6540]: pam_unix(sshd:session): session closed for user core Nov 8 00:11:04.445094 systemd[1]: sshd@27-10.200.20.15:22-10.200.16.10:52546.service: Deactivated successfully. Nov 8 00:11:04.447202 systemd[1]: session-30.scope: Deactivated successfully. Nov 8 00:11:04.448232 systemd-logind[1698]: Session 30 logged out. Waiting for processes to exit. Nov 8 00:11:04.449581 systemd-logind[1698]: Removed session 30. Nov 8 00:11:05.578178 kubelet[3181]: E1108 00:11:05.578123 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9wxxn" podUID="fb0c2d00-8a9d-4218-9dbc-6f07fda31565" Nov 8 00:11:05.579697 kubelet[3181]: E1108 00:11:05.579646 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", 
failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764b4db9fd-g5pz9" podUID="967f9b6c-67db-4dea-be69-0b8cc8010676" Nov 8 00:11:06.583650 kubelet[3181]: E1108 00:11:06.583021 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74489dd677-kvxft" podUID="41d4b3ef-ef6f-40aa-890a-556514760a53" Nov 8 00:11:07.578139 kubelet[3181]: E1108 00:11:07.577269 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7" podUID="8dcb36b7-7066-4355-aa27-d1ae27c36df5" Nov 8 00:11:08.578295 kubelet[3181]: E1108 00:11:08.578219 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8" Nov 8 00:11:09.515098 systemd[1]: Started sshd@28-10.200.20.15:22-10.200.16.10:52560.service - OpenSSH per-connection server daemon (10.200.16.10:52560). Nov 8 00:11:09.979881 sshd[6555]: Accepted publickey for core from 10.200.16.10 port 52560 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:11:09.984596 sshd[6555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:11:09.990368 systemd-logind[1698]: New session 31 of user core. Nov 8 00:11:09.994871 systemd[1]: Started session-31.scope - Session 31 of User core. Nov 8 00:11:10.395000 sshd[6555]: pam_unix(sshd:session): session closed for user core Nov 8 00:11:10.400197 systemd-logind[1698]: Session 31 logged out. Waiting for processes to exit. Nov 8 00:11:10.403010 systemd[1]: sshd@28-10.200.20.15:22-10.200.16.10:52560.service: Deactivated successfully. Nov 8 00:11:10.406752 systemd[1]: session-31.scope: Deactivated successfully. Nov 8 00:11:10.411277 systemd-logind[1698]: Removed session 31. 
Nov 8 00:11:12.580615 kubelet[3181]: E1108 00:11:12.580207 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864"
Nov 8 00:11:14.582790 kubelet[3181]: E1108 00:11:14.582600 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84"
Nov 8 00:11:15.481262 systemd[1]: Started sshd@29-10.200.20.15:22-10.200.16.10:33286.service - OpenSSH per-connection server daemon (10.200.16.10:33286).
Nov 8 00:11:15.894414 sshd[6569]: Accepted publickey for core from 10.200.16.10 port 33286 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:11:15.896340 sshd[6569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:11:15.900299 systemd-logind[1698]: New session 32 of user core.
Nov 8 00:11:15.910138 systemd[1]: Started session-32.scope - Session 32 of User core.
Nov 8 00:11:16.289034 sshd[6569]: pam_unix(sshd:session): session closed for user core
Nov 8 00:11:16.292744 systemd[1]: sshd@29-10.200.20.15:22-10.200.16.10:33286.service: Deactivated successfully.
Nov 8 00:11:16.297586 systemd[1]: session-32.scope: Deactivated successfully.
Nov 8 00:11:16.298863 systemd-logind[1698]: Session 32 logged out. Waiting for processes to exit.
Nov 8 00:11:16.299916 systemd-logind[1698]: Removed session 32.
Nov 8 00:11:17.577686 kubelet[3181]: E1108 00:11:17.577630 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9wxxn" podUID="fb0c2d00-8a9d-4218-9dbc-6f07fda31565"
Nov 8 00:11:18.583611 kubelet[3181]: E1108 00:11:18.583302 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7" podUID="8dcb36b7-7066-4355-aa27-d1ae27c36df5"
Nov 8 00:11:18.584398 kubelet[3181]: E1108 00:11:18.583732 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764b4db9fd-g5pz9" podUID="967f9b6c-67db-4dea-be69-0b8cc8010676"
Nov 8 00:11:19.577952 kubelet[3181]: E1108 00:11:19.577806 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8"
Nov 8 00:11:21.374306 systemd[1]: Started sshd@30-10.200.20.15:22-10.200.16.10:47736.service - OpenSSH per-connection server daemon (10.200.16.10:47736).
Nov 8 00:11:21.578570 kubelet[3181]: E1108 00:11:21.578176 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74489dd677-kvxft" podUID="41d4b3ef-ef6f-40aa-890a-556514760a53"
Nov 8 00:11:21.787539 sshd[6582]: Accepted publickey for core from 10.200.16.10 port 47736 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:11:21.789002 sshd[6582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:11:21.794033 systemd-logind[1698]: New session 33 of user core.
Nov 8 00:11:21.799127 systemd[1]: Started session-33.scope - Session 33 of User core.
Nov 8 00:11:22.171905 sshd[6582]: pam_unix(sshd:session): session closed for user core
Nov 8 00:11:22.176569 systemd-logind[1698]: Session 33 logged out. Waiting for processes to exit.
Nov 8 00:11:22.176894 systemd[1]: sshd@30-10.200.20.15:22-10.200.16.10:47736.service: Deactivated successfully.
Nov 8 00:11:22.179362 systemd[1]: session-33.scope: Deactivated successfully.
Nov 8 00:11:22.181775 systemd-logind[1698]: Removed session 33.
Nov 8 00:11:25.579310 kubelet[3181]: E1108 00:11:25.579253 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864"
Nov 8 00:11:27.261276 systemd[1]: Started sshd@31-10.200.20.15:22-10.200.16.10:47748.service - OpenSSH per-connection server daemon (10.200.16.10:47748).
Nov 8 00:11:27.580189 kubelet[3181]: E1108 00:11:27.580115 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84"
Nov 8 00:11:27.721455 sshd[6617]: Accepted publickey for core from 10.200.16.10 port 47748 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:11:27.723654 sshd[6617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:11:27.731448 systemd-logind[1698]: New session 34 of user core.
Nov 8 00:11:27.738164 systemd[1]: Started session-34.scope - Session 34 of User core.
Nov 8 00:11:28.145092 sshd[6617]: pam_unix(sshd:session): session closed for user core
Nov 8 00:11:28.151221 systemd-logind[1698]: Session 34 logged out. Waiting for processes to exit.
Nov 8 00:11:28.151736 systemd[1]: sshd@31-10.200.20.15:22-10.200.16.10:47748.service: Deactivated successfully.
Nov 8 00:11:28.155930 systemd[1]: session-34.scope: Deactivated successfully.
Nov 8 00:11:28.159990 systemd-logind[1698]: Removed session 34.
Nov 8 00:11:29.579459 kubelet[3181]: E1108 00:11:29.579400 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9wxxn" podUID="fb0c2d00-8a9d-4218-9dbc-6f07fda31565"
Nov 8 00:11:29.580836 kubelet[3181]: E1108 00:11:29.580792 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764b4db9fd-g5pz9" podUID="967f9b6c-67db-4dea-be69-0b8cc8010676"
Nov 8 00:11:32.580693 kubelet[3181]: E1108 00:11:32.580629 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7" podUID="8dcb36b7-7066-4355-aa27-d1ae27c36df5"
Nov 8 00:11:33.235247 systemd[1]: Started sshd@32-10.200.20.15:22-10.200.16.10:45234.service - OpenSSH per-connection server daemon (10.200.16.10:45234).
Nov 8 00:11:33.578083 kubelet[3181]: E1108 00:11:33.578030 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8"
Nov 8 00:11:33.687332 sshd[6632]: Accepted publickey for core from 10.200.16.10 port 45234 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:11:33.690286 sshd[6632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:11:33.696219 systemd-logind[1698]: New session 35 of user core.
Nov 8 00:11:33.701410 systemd[1]: Started session-35.scope - Session 35 of User core.
Nov 8 00:11:34.115919 sshd[6632]: pam_unix(sshd:session): session closed for user core
Nov 8 00:11:34.121645 systemd[1]: sshd@32-10.200.20.15:22-10.200.16.10:45234.service: Deactivated successfully.
Nov 8 00:11:34.124867 systemd[1]: session-35.scope: Deactivated successfully.
Nov 8 00:11:34.126197 systemd-logind[1698]: Session 35 logged out. Waiting for processes to exit.
Nov 8 00:11:34.127516 systemd-logind[1698]: Removed session 35.
Nov 8 00:11:36.578963 kubelet[3181]: E1108 00:11:36.578907 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864"
Nov 8 00:11:36.579382 kubelet[3181]: E1108 00:11:36.579081 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74489dd677-kvxft" podUID="41d4b3ef-ef6f-40aa-890a-556514760a53"
Nov 8 00:11:38.583438 kubelet[3181]: E1108 00:11:38.583080 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84"
Nov 8 00:11:39.210236 systemd[1]: Started sshd@33-10.200.20.15:22-10.200.16.10:45240.service - OpenSSH per-connection server daemon (10.200.16.10:45240).
Nov 8 00:11:39.661339 sshd[6647]: Accepted publickey for core from 10.200.16.10 port 45240 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:11:39.663013 sshd[6647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:11:39.667057 systemd-logind[1698]: New session 36 of user core.
Nov 8 00:11:39.673175 systemd[1]: Started session-36.scope - Session 36 of User core.
Nov 8 00:11:40.064686 sshd[6647]: pam_unix(sshd:session): session closed for user core
Nov 8 00:11:40.070116 systemd-logind[1698]: Session 36 logged out. Waiting for processes to exit.
Nov 8 00:11:40.070805 systemd[1]: sshd@33-10.200.20.15:22-10.200.16.10:45240.service: Deactivated successfully.
Nov 8 00:11:40.076113 systemd[1]: session-36.scope: Deactivated successfully.
Nov 8 00:11:40.077377 systemd-logind[1698]: Removed session 36.
Nov 8 00:11:40.581152 kubelet[3181]: E1108 00:11:40.581096 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764b4db9fd-g5pz9" podUID="967f9b6c-67db-4dea-be69-0b8cc8010676"
Nov 8 00:11:42.583187 kubelet[3181]: E1108 00:11:42.583142 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9wxxn" podUID="fb0c2d00-8a9d-4218-9dbc-6f07fda31565"
Nov 8 00:11:45.152993 systemd[1]: Started sshd@34-10.200.20.15:22-10.200.16.10:38262.service - OpenSSH per-connection server daemon (10.200.16.10:38262).
Nov 8 00:11:45.615632 sshd[6659]: Accepted publickey for core from 10.200.16.10 port 38262 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:11:45.616633 sshd[6659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:11:45.624673 systemd-logind[1698]: New session 37 of user core.
Nov 8 00:11:45.629140 systemd[1]: Started session-37.scope - Session 37 of User core.
Nov 8 00:11:46.027176 sshd[6659]: pam_unix(sshd:session): session closed for user core
Nov 8 00:11:46.032450 systemd[1]: sshd@34-10.200.20.15:22-10.200.16.10:38262.service: Deactivated successfully.
Nov 8 00:11:46.036929 systemd[1]: session-37.scope: Deactivated successfully.
Nov 8 00:11:46.038052 systemd-logind[1698]: Session 37 logged out. Waiting for processes to exit.
Nov 8 00:11:46.041328 systemd-logind[1698]: Removed session 37.
Nov 8 00:11:46.580622 kubelet[3181]: E1108 00:11:46.580141 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847b7fbf74-mcdn7" podUID="8dcb36b7-7066-4355-aa27-d1ae27c36df5"
Nov 8 00:11:47.577954 kubelet[3181]: E1108 00:11:47.577403 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7phdh" podUID="aad8189b-54ce-422e-a68f-46b67abadfe8"
Nov 8 00:11:49.578565 kubelet[3181]: E1108 00:11:49.578517 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8jr45" podUID="70822f24-312d-4073-b204-5c6b6a26eb84"
Nov 8 00:11:50.580134 kubelet[3181]: E1108 00:11:50.579666 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74489dd677-kvxft" podUID="41d4b3ef-ef6f-40aa-890a-556514760a53"
Nov 8 00:11:51.119585 systemd[1]: Started sshd@35-10.200.20.15:22-10.200.16.10:58252.service - OpenSSH per-connection server daemon (10.200.16.10:58252).
Nov 8 00:11:51.581961 sshd[6672]: Accepted publickey for core from 10.200.16.10 port 58252 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:11:51.582340 kubelet[3181]: E1108 00:11:51.582186 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d7c9d7554-7cc89" podUID="1eab03fd-9695-41da-8445-49749eaa2864"
Nov 8 00:11:51.582340 kubelet[3181]: E1108 00:11:51.582291 3181 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764b4db9fd-g5pz9" podUID="967f9b6c-67db-4dea-be69-0b8cc8010676"
Nov 8 00:11:51.583504 sshd[6672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:11:51.592223 systemd-logind[1698]: New session 38 of user core.
Nov 8 00:11:51.596162 systemd[1]: Started session-38.scope - Session 38 of User core.
Nov 8 00:11:52.011683 sshd[6672]: pam_unix(sshd:session): session closed for user core
Nov 8 00:11:52.016072 systemd-logind[1698]: Session 38 logged out. Waiting for processes to exit.
Nov 8 00:11:52.016626 systemd[1]: sshd@35-10.200.20.15:22-10.200.16.10:58252.service: Deactivated successfully.
Nov 8 00:11:52.018816 systemd[1]: session-38.scope: Deactivated successfully.
Nov 8 00:11:52.020505 systemd-logind[1698]: Removed session 38.