Nov 8 00:06:59.882548 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Nov 8 00:06:59.882572 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Nov 7 22:41:39 -00 2025
Nov 8 00:06:59.882585 kernel: KASLR enabled
Nov 8 00:06:59.882593 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Nov 8 00:06:59.882599 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Nov 8 00:06:59.882647 kernel: random: crng init done
Nov 8 00:06:59.882657 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:06:59.882663 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Nov 8 00:06:59.882670 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Nov 8 00:06:59.882678 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:06:59.882684 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:06:59.882690 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:06:59.882696 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:06:59.882702 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:06:59.882710 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:06:59.882718 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:06:59.882724 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:06:59.882731 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:06:59.882737 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 8 00:06:59.882744 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Nov 8 00:06:59.882750 kernel: NUMA: Failed to initialise from firmware
Nov 8 00:06:59.882757 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Nov 8 00:06:59.882763 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Nov 8 00:06:59.882769 kernel: Zone ranges:
Nov 8 00:06:59.882776 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Nov 8 00:06:59.882784 kernel: DMA32 empty
Nov 8 00:06:59.882790 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Nov 8 00:06:59.882796 kernel: Movable zone start for each node
Nov 8 00:06:59.882802 kernel: Early memory node ranges
Nov 8 00:06:59.882809 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Nov 8 00:06:59.882815 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Nov 8 00:06:59.882821 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Nov 8 00:06:59.882828 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Nov 8 00:06:59.882834 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Nov 8 00:06:59.882840 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Nov 8 00:06:59.882846 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Nov 8 00:06:59.882853 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Nov 8 00:06:59.882861 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Nov 8 00:06:59.882867 kernel: psci: probing for conduit method from ACPI.
Nov 8 00:06:59.882874 kernel: psci: PSCIv1.1 detected in firmware.
Nov 8 00:06:59.882883 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 8 00:06:59.882890 kernel: psci: Trusted OS migration not required
Nov 8 00:06:59.882897 kernel: psci: SMC Calling Convention v1.1
Nov 8 00:06:59.882905 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Nov 8 00:06:59.882912 kernel: percpu: Embedded 31 pages/cpu s86120 r8192 d32664 u126976
Nov 8 00:06:59.882919 kernel: pcpu-alloc: s86120 r8192 d32664 u126976 alloc=31*4096
Nov 8 00:06:59.882926 kernel: pcpu-alloc: [0] 0 [0] 1
Nov 8 00:06:59.882933 kernel: Detected PIPT I-cache on CPU0
Nov 8 00:06:59.882939 kernel: CPU features: detected: GIC system register CPU interface
Nov 8 00:06:59.882946 kernel: CPU features: detected: Hardware dirty bit management
Nov 8 00:06:59.882968 kernel: CPU features: detected: Spectre-v4
Nov 8 00:06:59.882975 kernel: CPU features: detected: Spectre-BHB
Nov 8 00:06:59.882982 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 8 00:06:59.882992 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 8 00:06:59.882999 kernel: CPU features: detected: ARM erratum 1418040
Nov 8 00:06:59.883006 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 8 00:06:59.883013 kernel: alternatives: applying boot alternatives
Nov 8 00:06:59.883021 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=653fdcb8a67e255793a721f32d76976d3ed6223b235b7c618cf75e5edffbdb68
Nov 8 00:06:59.883028 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 8 00:06:59.883035 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 00:06:59.883042 kernel: Fallback order for Node 0: 0
Nov 8 00:06:59.883048 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Nov 8 00:06:59.883055 kernel: Policy zone: Normal
Nov 8 00:06:59.883062 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:06:59.883070 kernel: software IO TLB: area num 2.
Nov 8 00:06:59.883077 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Nov 8 00:06:59.883085 kernel: Memory: 3882808K/4096000K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 213192K reserved, 0K cma-reserved)
Nov 8 00:06:59.883092 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 8 00:06:59.883098 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:06:59.883106 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:06:59.883113 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 8 00:06:59.883120 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:06:59.883127 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:06:59.883134 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:06:59.883141 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 8 00:06:59.883148 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 8 00:06:59.883157 kernel: GICv3: 256 SPIs implemented
Nov 8 00:06:59.883164 kernel: GICv3: 0 Extended SPIs implemented
Nov 8 00:06:59.883171 kernel: Root IRQ handler: gic_handle_irq
Nov 8 00:06:59.883177 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Nov 8 00:06:59.883184 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Nov 8 00:06:59.883191 kernel: ITS [mem 0x08080000-0x0809ffff]
Nov 8 00:06:59.883198 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Nov 8 00:06:59.883205 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Nov 8 00:06:59.883212 kernel: GICv3: using LPI property table @0x00000001000e0000
Nov 8 00:06:59.883219 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Nov 8 00:06:59.883226 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:06:59.883234 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 8 00:06:59.883241 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Nov 8 00:06:59.883248 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Nov 8 00:06:59.883255 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Nov 8 00:06:59.883262 kernel: Console: colour dummy device 80x25
Nov 8 00:06:59.883269 kernel: ACPI: Core revision 20230628
Nov 8 00:06:59.883277 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Nov 8 00:06:59.883284 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:06:59.883291 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:06:59.883298 kernel: landlock: Up and running.
Nov 8 00:06:59.883306 kernel: SELinux: Initializing.
Nov 8 00:06:59.883313 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:06:59.883320 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:06:59.883327 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:06:59.883335 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:06:59.883342 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:06:59.883349 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:06:59.883356 kernel: Platform MSI: ITS@0x8080000 domain created
Nov 8 00:06:59.883363 kernel: PCI/MSI: ITS@0x8080000 domain created
Nov 8 00:06:59.883371 kernel: Remapping and enabling EFI services.
Nov 8 00:06:59.883378 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:06:59.883385 kernel: Detected PIPT I-cache on CPU1
Nov 8 00:06:59.883393 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Nov 8 00:06:59.883400 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Nov 8 00:06:59.883407 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 8 00:06:59.883414 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Nov 8 00:06:59.883421 kernel: smp: Brought up 1 node, 2 CPUs
Nov 8 00:06:59.883428 kernel: SMP: Total of 2 processors activated.
Nov 8 00:06:59.883435 kernel: CPU features: detected: 32-bit EL0 Support
Nov 8 00:06:59.883443 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 8 00:06:59.883451 kernel: CPU features: detected: Common not Private translations
Nov 8 00:06:59.883464 kernel: CPU features: detected: CRC32 instructions
Nov 8 00:06:59.883473 kernel: CPU features: detected: Enhanced Virtualization Traps
Nov 8 00:06:59.883480 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 8 00:06:59.883488 kernel: CPU features: detected: LSE atomic instructions
Nov 8 00:06:59.883495 kernel: CPU features: detected: Privileged Access Never
Nov 8 00:06:59.883503 kernel: CPU features: detected: RAS Extension Support
Nov 8 00:06:59.883512 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Nov 8 00:06:59.883520 kernel: CPU: All CPU(s) started at EL1
Nov 8 00:06:59.883527 kernel: alternatives: applying system-wide alternatives
Nov 8 00:06:59.883534 kernel: devtmpfs: initialized
Nov 8 00:06:59.883542 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:06:59.883550 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 8 00:06:59.883557 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:06:59.883564 kernel: SMBIOS 3.0.0 present.
Nov 8 00:06:59.883573 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Nov 8 00:06:59.883581 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:06:59.883589 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 8 00:06:59.883596 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 8 00:06:59.883612 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 8 00:06:59.883620 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:06:59.883627 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Nov 8 00:06:59.883635 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:06:59.883643 kernel: cpuidle: using governor menu
Nov 8 00:06:59.883653 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 8 00:06:59.883660 kernel: ASID allocator initialised with 32768 entries
Nov 8 00:06:59.883668 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:06:59.883675 kernel: Serial: AMBA PL011 UART driver
Nov 8 00:06:59.883683 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Nov 8 00:06:59.883690 kernel: Modules: 0 pages in range for non-PLT usage
Nov 8 00:06:59.883698 kernel: Modules: 509008 pages in range for PLT usage
Nov 8 00:06:59.883705 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:06:59.883712 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:06:59.883721 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 8 00:06:59.883729 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 8 00:06:59.883736 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:06:59.883744 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:06:59.883751 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 8 00:06:59.883758 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 8 00:06:59.883766 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:06:59.883773 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:06:59.883781 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:06:59.883790 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 00:06:59.883797 kernel: ACPI: Interpreter enabled
Nov 8 00:06:59.883805 kernel: ACPI: Using GIC for interrupt routing
Nov 8 00:06:59.883812 kernel: ACPI: MCFG table detected, 1 entries
Nov 8 00:06:59.883820 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Nov 8 00:06:59.883827 kernel: printk: console [ttyAMA0] enabled
Nov 8 00:06:59.883834 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 8 00:06:59.884089 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 00:06:59.884178 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 8 00:06:59.884243 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 8 00:06:59.884307 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Nov 8 00:06:59.884370 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Nov 8 00:06:59.884380 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Nov 8 00:06:59.884387 kernel: PCI host bridge to bus 0000:00
Nov 8 00:06:59.884460 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Nov 8 00:06:59.884524 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 8 00:06:59.884584 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Nov 8 00:06:59.884660 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 8 00:06:59.884743 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Nov 8 00:06:59.884822 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Nov 8 00:06:59.884891 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Nov 8 00:06:59.885006 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Nov 8 00:06:59.885107 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Nov 8 00:06:59.885174 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Nov 8 00:06:59.885248 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Nov 8 00:06:59.885313 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Nov 8 00:06:59.885385 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Nov 8 00:06:59.885450 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Nov 8 00:06:59.885526 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Nov 8 00:06:59.885592 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Nov 8 00:06:59.885712 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Nov 8 00:06:59.885783 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Nov 8 00:06:59.885856 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Nov 8 00:06:59.885923 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Nov 8 00:06:59.886026 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Nov 8 00:06:59.886107 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Nov 8 00:06:59.886181 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Nov 8 00:06:59.886246 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Nov 8 00:06:59.886318 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Nov 8 00:06:59.886383 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Nov 8 00:06:59.886463 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Nov 8 00:06:59.886539 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Nov 8 00:06:59.886627 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Nov 8 00:06:59.886702 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Nov 8 00:06:59.886772 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 8 00:06:59.886841 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Nov 8 00:06:59.886917 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Nov 8 00:06:59.887012 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Nov 8 00:06:59.887092 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Nov 8 00:06:59.887165 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Nov 8 00:06:59.887240 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Nov 8 00:06:59.887318 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Nov 8 00:06:59.887386 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Nov 8 00:06:59.887465 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Nov 8 00:06:59.887534 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Nov 8 00:06:59.887621 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Nov 8 00:06:59.887699 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Nov 8 00:06:59.887771 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Nov 8 00:06:59.887843 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Nov 8 00:06:59.887923 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Nov 8 00:06:59.888045 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Nov 8 00:06:59.888124 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Nov 8 00:06:59.888191 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Nov 8 00:06:59.888258 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Nov 8 00:06:59.888324 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Nov 8 00:06:59.888390 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Nov 8 00:06:59.888467 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Nov 8 00:06:59.888544 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Nov 8 00:06:59.888649 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Nov 8 00:06:59.888731 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Nov 8 00:06:59.888797 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Nov 8 00:06:59.888873 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Nov 8 00:06:59.888941 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Nov 8 00:06:59.889028 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Nov 8 00:06:59.889107 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Nov 8 00:06:59.889178 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Nov 8 00:06:59.889255 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Nov 8 00:06:59.889321 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Nov 8 00:06:59.889390 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Nov 8 00:06:59.889456 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Nov 8 00:06:59.889522 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Nov 8 00:06:59.889595 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Nov 8 00:06:59.889682 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Nov 8 00:06:59.889757 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Nov 8 00:06:59.889827 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Nov 8 00:06:59.889893 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Nov 8 00:06:59.890032 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Nov 8 00:06:59.890111 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Nov 8 00:06:59.890176 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Nov 8 00:06:59.890269 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Nov 8 00:06:59.890340 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Nov 8 00:06:59.890406 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Nov 8 00:06:59.890471 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Nov 8 00:06:59.890535 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Nov 8 00:06:59.890600 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Nov 8 00:06:59.890717 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Nov 8 00:06:59.890788 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Nov 8 00:06:59.890854 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Nov 8 00:06:59.890919 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Nov 8 00:06:59.891019 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Nov 8 00:06:59.891087 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Nov 8 00:06:59.891151 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Nov 8 00:06:59.891229 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Nov 8 00:06:59.891296 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Nov 8 00:06:59.891371 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Nov 8 00:06:59.891443 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Nov 8 00:06:59.891516 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Nov 8 00:06:59.891592 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Nov 8 00:06:59.891712 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Nov 8 00:06:59.891788 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Nov 8 00:06:59.891854 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Nov 8 00:06:59.891919 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Nov 8 00:06:59.892121 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Nov 8 00:06:59.892201 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Nov 8 00:06:59.892268 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Nov 8 00:06:59.892331 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Nov 8 00:06:59.892403 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Nov 8 00:06:59.892474 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Nov 8 00:06:59.892539 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Nov 8 00:06:59.892612 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Nov 8 00:06:59.892697 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Nov 8 00:06:59.892764 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Nov 8 00:06:59.892829 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Nov 8 00:06:59.892893 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Nov 8 00:06:59.892977 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Nov 8 00:06:59.893052 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Nov 8 00:06:59.893118 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Nov 8 00:06:59.893183 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Nov 8 00:06:59.893253 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Nov 8 00:06:59.893325 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Nov 8 00:06:59.893394 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 8 00:06:59.893461 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Nov 8 00:06:59.893525 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Nov 8 00:06:59.893592 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Nov 8 00:06:59.893699 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Nov 8 00:06:59.893768 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Nov 8 00:06:59.893839 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Nov 8 00:06:59.893908 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Nov 8 00:06:59.896089 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Nov 8 00:06:59.896181 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Nov 8 00:06:59.896250 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Nov 8 00:06:59.896328 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Nov 8 00:06:59.896404 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Nov 8 00:06:59.896469 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Nov 8 00:06:59.896538 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Nov 8 00:06:59.896626 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Nov 8 00:06:59.896699 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Nov 8 00:06:59.896772 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Nov 8 00:06:59.896848 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Nov 8 00:06:59.896913 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Nov 8 00:06:59.896992 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Nov 8 00:06:59.897063 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Nov 8 00:06:59.897136 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Nov 8 00:06:59.897208 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Nov 8 00:06:59.897275 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Nov 8 00:06:59.897346 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Nov 8 00:06:59.897412 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Nov 8 00:06:59.897482 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Nov 8 00:06:59.897563 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Nov 8 00:06:59.897684 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Nov 8 00:06:59.897760 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Nov 8 00:06:59.897834 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Nov 8 00:06:59.897900 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Nov 8 00:06:59.899660 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Nov 8 00:06:59.899761 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Nov 8 00:06:59.899832 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Nov 8 00:06:59.899901 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Nov 8 00:06:59.899986 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Nov 8 00:06:59.900056 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Nov 8 00:06:59.900128 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Nov 8 00:06:59.900198 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Nov 8 00:06:59.900266 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Nov 8 00:06:59.900331 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Nov 8 00:06:59.900401 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Nov 8 00:06:59.900468 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Nov 8 00:06:59.900538 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Nov 8 00:06:59.900645 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Nov 8 00:06:59.900737 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Nov 8 00:06:59.900811 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Nov 8 00:06:59.900884 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Nov 8 00:06:59.900943 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 8 00:06:59.901017 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Nov 8 00:06:59.901108 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Nov 8 00:06:59.901173 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Nov 8 00:06:59.901244 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Nov 8 00:06:59.901320 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Nov 8 00:06:59.901392 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Nov 8 00:06:59.901468 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Nov 8 00:06:59.901538 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Nov 8 00:06:59.901601 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Nov 8 00:06:59.901687 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Nov 8 00:06:59.901763 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Nov 8 00:06:59.901825 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Nov 8 00:06:59.901909 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Nov 8 00:06:59.904162 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Nov 8 00:06:59.904248 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Nov 8 00:06:59.904309 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Nov 8 00:06:59.904389 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Nov 8 00:06:59.904450 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Nov 8 00:06:59.904513 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Nov 8 00:06:59.904581 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Nov 8 00:06:59.904676 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Nov 8 00:06:59.904740 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Nov 8 00:06:59.904810 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Nov 8 00:06:59.904870 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Nov 8 00:06:59.904929 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Nov 8 00:06:59.905042 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Nov 8 00:06:59.905116 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Nov 8 00:06:59.905188 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Nov 8 00:06:59.905198 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 8 00:06:59.905207 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 8 00:06:59.905215 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 8 00:06:59.905224 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 8 00:06:59.905233 kernel: iommu: Default domain type: Translated
Nov 8 00:06:59.905240 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 8 00:06:59.905255 kernel: efivars: Registered efivars operations
Nov 8 00:06:59.905264 kernel: vgaarb: loaded
Nov 8 00:06:59.905274 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 8 00:06:59.905282 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:06:59.905290 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:06:59.905298 kernel: pnp: PnP ACPI init
Nov 8 00:06:59.905372 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Nov 8 00:06:59.905384 kernel: pnp: PnP ACPI: found 1 devices
Nov 8 00:06:59.905392 kernel: NET: Registered PF_INET protocol family
Nov 8 00:06:59.905400 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 00:06:59.905410 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 8 00:06:59.905418 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:06:59.905426 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 8 00:06:59.905434 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 8 00:06:59.905442 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 8 00:06:59.905451 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:06:59.905459 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:06:59.905467 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:06:59.905544 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Nov 8 00:06:59.905557 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:06:59.905565 kernel: kvm [1]: HYP mode not available
Nov 8 00:06:59.905572 kernel: Initialise system trusted keyrings
Nov 8 00:06:59.905580 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 8 00:06:59.905588 kernel: Key type asymmetric registered
Nov 8 00:06:59.905596 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:06:59.905639 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 8 00:06:59.905648 kernel: io scheduler mq-deadline registered
Nov 8 00:06:59.905656 kernel: io scheduler kyber registered
Nov 8 00:06:59.905667 kernel: io scheduler bfq registered
Nov 8 00:06:59.905676 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Nov 8 00:06:59.905761 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Nov 8 00:06:59.905844 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Nov 8 00:06:59.905913 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:06:59.907553 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Nov 8 00:06:59.907665 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Nov 8 00:06:59.907744 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:06:59.907815 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Nov 8 00:06:59.907881 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Nov 8 00:06:59.907948 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:06:59.908888
kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Nov 8 00:06:59.909004 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Nov 8 00:06:59.909081 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 8 00:06:59.909149 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Nov 8 00:06:59.909230 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Nov 8 00:06:59.909312 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 8 00:06:59.909393 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Nov 8 00:06:59.909469 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Nov 8 00:06:59.909559 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 8 00:06:59.909652 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Nov 8 00:06:59.909731 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Nov 8 00:06:59.909809 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 8 00:06:59.909892 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Nov 8 00:06:59.910057 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Nov 8 00:06:59.910146 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 8 00:06:59.910164 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Nov 8 00:06:59.910244 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Nov 8 00:06:59.910321 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Nov 8 00:06:59.910392 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- 
IbPresDis- LLActRep+ Nov 8 00:06:59.910407 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 8 00:06:59.910418 kernel: ACPI: button: Power Button [PWRB] Nov 8 00:06:59.910426 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 8 00:06:59.910503 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Nov 8 00:06:59.910576 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Nov 8 00:06:59.910587 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 00:06:59.910595 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Nov 8 00:06:59.910677 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Nov 8 00:06:59.910688 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Nov 8 00:06:59.910696 kernel: thunder_xcv, ver 1.0 Nov 8 00:06:59.910707 kernel: thunder_bgx, ver 1.0 Nov 8 00:06:59.910715 kernel: nicpf, ver 1.0 Nov 8 00:06:59.910723 kernel: nicvf, ver 1.0 Nov 8 00:06:59.910803 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 8 00:06:59.910867 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-08T00:06:59 UTC (1762560419) Nov 8 00:06:59.910878 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 8 00:06:59.910886 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Nov 8 00:06:59.910894 kernel: watchdog: Delayed init of the lockup detector failed: -19 Nov 8 00:06:59.910904 kernel: watchdog: Hard watchdog permanently disabled Nov 8 00:06:59.910912 kernel: NET: Registered PF_INET6 protocol family Nov 8 00:06:59.910920 kernel: Segment Routing with IPv6 Nov 8 00:06:59.910928 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 00:06:59.910936 kernel: NET: Registered PF_PACKET protocol family Nov 8 00:06:59.910944 kernel: Key type dns_resolver registered Nov 8 00:06:59.910963 kernel: registered taskstats version 1 Nov 8 00:06:59.910983 kernel: Loading compiled-in X.509 certificates Nov 8 00:06:59.910992 kernel: Loaded X.509 cert 
'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: e35af6a719ba4c60f9d6788b11f5e5836ebf73b5' Nov 8 00:06:59.911003 kernel: Key type .fscrypt registered Nov 8 00:06:59.911011 kernel: Key type fscrypt-provisioning registered Nov 8 00:06:59.911018 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 8 00:06:59.911026 kernel: ima: Allocated hash algorithm: sha1 Nov 8 00:06:59.911034 kernel: ima: No architecture policies found Nov 8 00:06:59.911042 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 8 00:06:59.911050 kernel: clk: Disabling unused clocks Nov 8 00:06:59.911058 kernel: Freeing unused kernel memory: 39424K Nov 8 00:06:59.911066 kernel: Run /init as init process Nov 8 00:06:59.911075 kernel: with arguments: Nov 8 00:06:59.911083 kernel: /init Nov 8 00:06:59.911091 kernel: with environment: Nov 8 00:06:59.911098 kernel: HOME=/ Nov 8 00:06:59.911106 kernel: TERM=linux Nov 8 00:06:59.911116 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:06:59.911126 systemd[1]: Detected virtualization kvm. Nov 8 00:06:59.911134 systemd[1]: Detected architecture arm64. Nov 8 00:06:59.911150 systemd[1]: Running in initrd. Nov 8 00:06:59.911158 systemd[1]: No hostname configured, using default hostname. Nov 8 00:06:59.911166 systemd[1]: Hostname set to . Nov 8 00:06:59.911175 systemd[1]: Initializing machine ID from VM UUID. Nov 8 00:06:59.911183 systemd[1]: Queued start job for default target initrd.target. Nov 8 00:06:59.911191 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Nov 8 00:06:59.911200 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:06:59.911212 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:06:59.911222 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:06:59.911230 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:06:59.911239 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:06:59.911249 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:06:59.911257 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:06:59.911268 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:06:59.911278 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:06:59.911289 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:06:59.911298 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:06:59.911307 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:06:59.911315 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:06:59.911325 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:06:59.911333 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:06:59.911342 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:06:59.911350 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:06:59.911360 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:06:59.911369 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:06:59.911380 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:06:59.911389 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:06:59.911397 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:06:59.911405 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:06:59.911414 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:06:59.911422 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:06:59.911430 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:06:59.911440 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:06:59.911449 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:06:59.911457 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:06:59.911490 systemd-journald[236]: Collecting audit messages is disabled.
Nov 8 00:06:59.911512 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:06:59.911521 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:06:59.911530 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:06:59.911540 systemd-journald[236]: Journal started
Nov 8 00:06:59.911561 systemd-journald[236]: Runtime Journal (/run/log/journal/d0c4569b7d6845af9a5a8e62cdb1164a) is 8.0M, max 76.6M, 68.6M free.
Nov 8 00:06:59.901842 systemd-modules-load[237]: Inserted module 'overlay'
Nov 8 00:06:59.913224 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:06:59.915394 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:06:59.919349 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:06:59.922979 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:06:59.925360 systemd-modules-load[237]: Inserted module 'br_netfilter'
Nov 8 00:06:59.926102 kernel: Bridge firewalling registered
Nov 8 00:06:59.931194 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:06:59.933975 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:06:59.938193 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:06:59.940277 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:06:59.955949 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:06:59.957814 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:06:59.969203 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:06:59.970138 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:06:59.974466 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:06:59.977204 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:06:59.987162 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:06:59.989717 dracut-cmdline[268]: dracut-dracut-053
Nov 8 00:06:59.992148 dracut-cmdline[268]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=653fdcb8a67e255793a721f32d76976d3ed6223b235b7c618cf75e5edffbdb68
Nov 8 00:07:00.019047 systemd-resolved[277]: Positive Trust Anchors:
Nov 8 00:07:00.019061 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:07:00.019092 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:07:00.029816 systemd-resolved[277]: Defaulting to hostname 'linux'.
Nov 8 00:07:00.031754 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:07:00.033024 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:07:00.093033 kernel: SCSI subsystem initialized
Nov 8 00:07:00.098007 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:07:00.104998 kernel: iscsi: registered transport (tcp)
Nov 8 00:07:00.119018 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:07:00.119078 kernel: QLogic iSCSI HBA Driver
Nov 8 00:07:00.170119 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:07:00.175251 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:07:00.194018 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:07:00.194110 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:07:00.195069 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:07:00.258014 kernel: raid6: neonx8 gen() 15656 MB/s
Nov 8 00:07:00.275017 kernel: raid6: neonx4 gen() 11926 MB/s
Nov 8 00:07:00.292018 kernel: raid6: neonx2 gen() 13164 MB/s
Nov 8 00:07:00.309038 kernel: raid6: neonx1 gen() 10376 MB/s
Nov 8 00:07:00.326008 kernel: raid6: int64x8 gen() 6896 MB/s
Nov 8 00:07:00.343023 kernel: raid6: int64x4 gen() 7296 MB/s
Nov 8 00:07:00.360010 kernel: raid6: int64x2 gen() 6016 MB/s
Nov 8 00:07:00.377023 kernel: raid6: int64x1 gen() 5014 MB/s
Nov 8 00:07:00.377115 kernel: raid6: using algorithm neonx8 gen() 15656 MB/s
Nov 8 00:07:00.394017 kernel: raid6: .... xor() 11904 MB/s, rmw enabled
Nov 8 00:07:00.394097 kernel: raid6: using neon recovery algorithm
Nov 8 00:07:00.399150 kernel: xor: measuring software checksum speed
Nov 8 00:07:00.399202 kernel: 8regs : 19740 MB/sec
Nov 8 00:07:00.399219 kernel: 32regs : 19655 MB/sec
Nov 8 00:07:00.400000 kernel: arm64_neon : 26963 MB/sec
Nov 8 00:07:00.400040 kernel: xor: using function: arm64_neon (26963 MB/sec)
Nov 8 00:07:00.451016 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:07:00.465135 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:07:00.472208 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:07:00.486697 systemd-udevd[457]: Using default interface naming scheme 'v255'.
Nov 8 00:07:00.490205 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:07:00.501757 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:07:00.519467 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Nov 8 00:07:00.555977 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:07:00.562168 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:07:00.611570 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:07:00.622492 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:07:00.647211 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:07:00.648456 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:07:00.650327 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:07:00.651468 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:07:00.658278 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:07:00.688825 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:07:00.730075 kernel: scsi host0: Virtio SCSI HBA
Nov 8 00:07:00.740790 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 8 00:07:00.740839 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Nov 8 00:07:00.745446 kernel: ACPI: bus type USB registered
Nov 8 00:07:00.745511 kernel: usbcore: registered new interface driver usbfs
Nov 8 00:07:00.745523 kernel: usbcore: registered new interface driver hub
Nov 8 00:07:00.749998 kernel: usbcore: registered new device driver usb
Nov 8 00:07:00.765010 kernel: sr 0:0:0:0: Power-on or device reset occurred
Nov 8 00:07:00.770632 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:07:00.771478 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:07:00.774642 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Nov 8 00:07:00.774821 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Nov 8 00:07:00.776564 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 8 00:07:00.776583 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Nov 8 00:07:00.776770 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Nov 8 00:07:00.777279 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:07:00.784814 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:07:00.785729 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:07:00.789379 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Nov 8 00:07:00.789539 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Nov 8 00:07:00.791050 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Nov 8 00:07:00.791158 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 8 00:07:00.791264 kernel: hub 1-0:1.0: USB hub found
Nov 8 00:07:00.791360 kernel: hub 1-0:1.0: 4 ports detected
Nov 8 00:07:00.790077 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:07:00.792972 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Nov 8 00:07:00.800356 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:07:00.809695 kernel: hub 2-0:1.0: USB hub found
Nov 8 00:07:00.809926 kernel: hub 2-0:1.0: 4 ports detected
Nov 8 00:07:00.828020 kernel: sd 0:0:0:1: Power-on or device reset occurred
Nov 8 00:07:00.828240 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Nov 8 00:07:00.828326 kernel: sd 0:0:0:1: [sda] Write Protect is off
Nov 8 00:07:00.828405 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Nov 8 00:07:00.828483 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 8 00:07:00.834977 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 8 00:07:00.835032 kernel: GPT:17805311 != 80003071
Nov 8 00:07:00.835042 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 8 00:07:00.835052 kernel: GPT:17805311 != 80003071
Nov 8 00:07:00.835070 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 00:07:00.835079 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:07:00.835089 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Nov 8 00:07:00.831517 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:07:00.842129 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:07:00.874022 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (518)
Nov 8 00:07:00.875942 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:07:00.881178 kernel: BTRFS: device fsid 55a292e1-3824-4229-a9ae-952140d2698c devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (515)
Nov 8 00:07:00.897185 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Nov 8 00:07:00.897851 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Nov 8 00:07:00.906280 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Nov 8 00:07:00.913454 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Nov 8 00:07:00.919233 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 8 00:07:00.925188 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:07:00.933006 disk-uuid[575]: Primary Header is updated.
Nov 8 00:07:00.933006 disk-uuid[575]: Secondary Entries is updated.
Nov 8 00:07:00.933006 disk-uuid[575]: Secondary Header is updated.
Nov 8 00:07:00.942543 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:07:01.033104 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Nov 8 00:07:01.168524 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Nov 8 00:07:01.168608 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Nov 8 00:07:01.168877 kernel: usbcore: registered new interface driver usbhid
Nov 8 00:07:01.168897 kernel: usbhid: USB HID core driver
Nov 8 00:07:01.274994 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Nov 8 00:07:01.402988 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Nov 8 00:07:01.456033 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Nov 8 00:07:01.954459 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:07:01.955629 disk-uuid[577]: The operation has completed successfully.
Nov 8 00:07:02.014168 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:07:02.015012 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:07:02.030321 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:07:02.037051 sh[592]: Success
Nov 8 00:07:02.053981 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Nov 8 00:07:02.127341 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:07:02.129154 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:07:02.135163 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:07:02.153389 kernel: BTRFS info (device dm-0): first mount of filesystem 55a292e1-3824-4229-a9ae-952140d2698c
Nov 8 00:07:02.153477 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:07:02.153506 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:07:02.153540 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:07:02.155002 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:07:02.160992 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 8 00:07:02.162997 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:07:02.164871 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:07:02.175203 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:07:02.179274 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:07:02.194053 kernel: BTRFS info (device sda6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:07:02.194115 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:07:02.194127 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:07:02.199791 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 00:07:02.199856 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:07:02.213298 kernel: BTRFS info (device sda6): last unmount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:07:02.213346 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 8 00:07:02.223063 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:07:02.232230 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:07:02.351757 ignition[678]: Ignition 2.19.0
Nov 8 00:07:02.351768 ignition[678]: Stage: fetch-offline
Nov 8 00:07:02.351808 ignition[678]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:07:02.351818 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 8 00:07:02.352698 ignition[678]: parsed url from cmdline: ""
Nov 8 00:07:02.352703 ignition[678]: no config URL provided
Nov 8 00:07:02.352710 ignition[678]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:07:02.352721 ignition[678]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:07:02.356238 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:07:02.352729 ignition[678]: failed to fetch config: resource requires networking
Nov 8 00:07:02.352947 ignition[678]: Ignition finished successfully
Nov 8 00:07:02.364403 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:07:02.371267 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:07:02.391988 systemd-networkd[780]: lo: Link UP
Nov 8 00:07:02.392003 systemd-networkd[780]: lo: Gained carrier
Nov 8 00:07:02.393630 systemd-networkd[780]: Enumeration completed
Nov 8 00:07:02.393753 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:07:02.394189 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:07:02.394192 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:07:02.395618 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:07:02.395622 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:07:02.396265 systemd-networkd[780]: eth0: Link UP
Nov 8 00:07:02.396269 systemd-networkd[780]: eth0: Gained carrier
Nov 8 00:07:02.396277 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:07:02.396456 systemd[1]: Reached target network.target - Network.
Nov 8 00:07:02.396788 systemd-networkd[780]: eth1: Link UP
Nov 8 00:07:02.396791 systemd-networkd[780]: eth1: Gained carrier
Nov 8 00:07:02.396799 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:07:02.406313 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 8 00:07:02.420212 ignition[782]: Ignition 2.19.0
Nov 8 00:07:02.420224 ignition[782]: Stage: fetch
Nov 8 00:07:02.420445 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:07:02.420456 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 8 00:07:02.420562 ignition[782]: parsed url from cmdline: ""
Nov 8 00:07:02.420566 ignition[782]: no config URL provided
Nov 8 00:07:02.420571 ignition[782]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:07:02.420597 ignition[782]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:07:02.420621 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Nov 8 00:07:02.421488 ignition[782]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 8 00:07:02.430206 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Nov 8 00:07:02.461060 systemd-networkd[780]: eth0: DHCPv4 address 138.199.234.199/32, gateway 172.31.1.1 acquired from 172.31.1.1
Nov 8 00:07:02.621810 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Nov 8 00:07:02.630269 ignition[782]: GET result: OK
Nov 8 00:07:02.630533 ignition[782]: parsing config with SHA512: 268183f5694b972538000c2e05f762bcee3e2757ed0ad1f661602f0e002bbb243dcf028b93936bf7905f5830d18191a803a523d3e3a6f00565a58c75567aec73
Nov 8 00:07:02.637066 unknown[782]: fetched base config from "system"
Nov 8 00:07:02.637076 unknown[782]: fetched base config from "system"
Nov 8 00:07:02.637554 ignition[782]: fetch: fetch complete
Nov 8 00:07:02.637081 unknown[782]: fetched user config from "hetzner"
Nov 8 00:07:02.637559 ignition[782]: fetch: fetch passed
Nov 8 00:07:02.637671 ignition[782]: Ignition finished successfully
Nov 8 00:07:02.641333 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 8 00:07:02.646191 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:07:02.660401 ignition[790]: Ignition 2.19.0
Nov 8 00:07:02.660418 ignition[790]: Stage: kargs
Nov 8 00:07:02.660708 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:07:02.660720 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 8 00:07:02.662135 ignition[790]: kargs: kargs passed
Nov 8 00:07:02.664741 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:07:02.662849 ignition[790]: Ignition finished successfully
Nov 8 00:07:02.671312 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:07:02.689056 ignition[796]: Ignition 2.19.0
Nov 8 00:07:02.689690 ignition[796]: Stage: disks
Nov 8 00:07:02.689904 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:07:02.689916 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 8 00:07:02.691484 ignition[796]: disks: disks passed
Nov 8 00:07:02.691541 ignition[796]: Ignition finished successfully
Nov 8 00:07:02.695039 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:07:02.698926 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:07:02.700062 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:07:02.701040 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:07:02.701684 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:07:02.702696 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:07:02.709219 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:07:02.729515 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Nov 8 00:07:02.733941 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:07:02.740456 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:07:02.794004 kernel: EXT4-fs (sda9): mounted filesystem ba97f76e-2e9b-450a-8320-3c4b94a19632 r/w with ordered data mode. Quota mode: none.
Nov 8 00:07:02.795416 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:07:02.798098 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:07:02.808186 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:07:02.813431 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:07:02.823176 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (812)
Nov 8 00:07:02.821749 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 8 00:07:02.823588 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:07:02.830158 kernel: BTRFS info (device sda6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:07:02.830188 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:07:02.830200 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:07:02.823646 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:07:02.832249 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:07:02.844249 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:07:02.847745 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 00:07:02.847777 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:07:02.853677 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:07:02.890306 coreos-metadata[814]: Nov 08 00:07:02.890 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Nov 8 00:07:02.892697 coreos-metadata[814]: Nov 08 00:07:02.892 INFO Fetch successful
Nov 8 00:07:02.898025 coreos-metadata[814]: Nov 08 00:07:02.897 INFO wrote hostname ci-4081-3-6-n-3f5a11d2fe to /sysroot/etc/hostname
Nov 8 00:07:02.901297 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 8 00:07:02.905441 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:07:02.911787 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:07:02.917998 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:07:02.923675 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:07:03.034270 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:07:03.049146 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:07:03.053148 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:07:03.064045 kernel: BTRFS info (device sda6): last unmount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:07:03.084945 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 00:07:03.092467 ignition[928]: INFO : Ignition 2.19.0
Nov 8 00:07:03.092467 ignition[928]: INFO : Stage: mount
Nov 8 00:07:03.094882 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:07:03.094882 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 8 00:07:03.094882 ignition[928]: INFO : mount: mount passed
Nov 8 00:07:03.094882 ignition[928]: INFO : Ignition finished successfully
Nov 8 00:07:03.097042 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 00:07:03.105194 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 00:07:03.152467 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 00:07:03.170667 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:07:03.182018 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (940)
Nov 8 00:07:03.183988 kernel: BTRFS info (device sda6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:07:03.184040 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:07:03.184069 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:07:03.186996 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 00:07:03.187045 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:07:03.190762 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:07:03.217345 ignition[957]: INFO : Ignition 2.19.0
Nov 8 00:07:03.217345 ignition[957]: INFO : Stage: files
Nov 8 00:07:03.218483 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:07:03.218483 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 8 00:07:03.220048 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Nov 8 00:07:03.220048 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 8 00:07:03.220048 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 8 00:07:03.223762 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 8 00:07:03.224837 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 8 00:07:03.225925 unknown[957]: wrote ssh authorized keys file for user: core
Nov 8 00:07:03.227058 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 8 00:07:03.227983 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 8 00:07:03.229151 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 8 00:07:03.229151 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Nov 8 00:07:03.229151 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Nov 8 00:07:03.366624 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 8 00:07:03.512979 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Nov 8 00:07:03.514121 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 8 00:07:03.514121 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 8 00:07:03.514121 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:07:03.514121 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:07:03.514121 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:07:03.514121 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:07:03.514121 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:07:03.514121 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:07:03.521733 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:07:03.521733 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:07:03.521733 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 8 00:07:03.521733 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 8 00:07:03.521733 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 8 00:07:03.521733 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Nov 8 00:07:03.742177 systemd-networkd[780]: eth1: Gained IPv6LL
Nov 8 00:07:03.820969 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 8 00:07:04.318084 systemd-networkd[780]: eth0: Gained IPv6LL
Nov 8 00:07:04.457986 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 8 00:07:04.457986 ignition[957]: INFO : files: op(c): [started] processing unit "containerd.service"
Nov 8 00:07:04.461101 ignition[957]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 8 00:07:04.461101 ignition[957]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 8 00:07:04.461101 ignition[957]: INFO : files: op(c): [finished] processing unit "containerd.service"
Nov 8 00:07:04.461101 ignition[957]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Nov 8 00:07:04.461101 ignition[957]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:07:04.461101 ignition[957]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:07:04.461101 ignition[957]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Nov 8 00:07:04.461101 ignition[957]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Nov 8 00:07:04.461101 ignition[957]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 8 00:07:04.461101 ignition[957]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 8 00:07:04.461101 ignition[957]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Nov 8 00:07:04.461101 ignition[957]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Nov 8 00:07:04.461101 ignition[957]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Nov 8 00:07:04.461101 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:07:04.461101 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:07:04.461101 ignition[957]: INFO : files: files passed
Nov 8 00:07:04.461101 ignition[957]: INFO : Ignition finished successfully
Nov 8 00:07:04.463876 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 8 00:07:04.473340 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 8 00:07:04.478075 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 8 00:07:04.480713 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 8 00:07:04.481844 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 8 00:07:04.505422 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:07:04.506684 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:07:04.506684 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:07:04.509525 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:07:04.510515 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 8 00:07:04.519220 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 8 00:07:04.556273 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 8 00:07:04.556495 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 8 00:07:04.559467 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 8 00:07:04.560853 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 8 00:07:04.562693 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 8 00:07:04.571262 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 8 00:07:04.589516 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:07:04.597188 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 8 00:07:04.612086 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:07:04.613504 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:07:04.614302 systemd[1]: Stopped target timers.target - Timer Units.
Nov 8 00:07:04.614862 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 8 00:07:04.616356 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:07:04.618971 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 8 00:07:04.620869 systemd[1]: Stopped target basic.target - Basic System.
Nov 8 00:07:04.621892 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 8 00:07:04.622865 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:07:04.624134 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 8 00:07:04.625491 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 8 00:07:04.626649 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:07:04.627812 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 8 00:07:04.632087 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 8 00:07:04.634215 systemd[1]: Stopped target swap.target - Swaps.
Nov 8 00:07:04.634921 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 8 00:07:04.635112 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:07:04.637243 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:07:04.637944 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:07:04.639093 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 8 00:07:04.639205 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:07:04.640298 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 8 00:07:04.640459 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:07:04.642199 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 8 00:07:04.642368 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:07:04.643318 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 8 00:07:04.643461 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 8 00:07:04.644349 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 8 00:07:04.644511 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 8 00:07:04.656058 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 8 00:07:04.657764 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 8 00:07:04.658317 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:07:04.663296 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 8 00:07:04.664643 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 8 00:07:04.665148 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:07:04.667396 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 8 00:07:04.667891 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:07:04.677945 ignition[1009]: INFO : Ignition 2.19.0
Nov 8 00:07:04.677945 ignition[1009]: INFO : Stage: umount
Nov 8 00:07:04.682845 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:07:04.682845 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 8 00:07:04.682845 ignition[1009]: INFO : umount: umount passed
Nov 8 00:07:04.682845 ignition[1009]: INFO : Ignition finished successfully
Nov 8 00:07:04.679942 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 8 00:07:04.680086 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 8 00:07:04.682617 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 8 00:07:04.682997 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 8 00:07:04.685567 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 8 00:07:04.685627 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 8 00:07:04.688105 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 8 00:07:04.688167 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 8 00:07:04.689794 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 8 00:07:04.689842 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 8 00:07:04.690520 systemd[1]: Stopped target network.target - Network.
Nov 8 00:07:04.691070 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 8 00:07:04.691118 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:07:04.692692 systemd[1]: Stopped target paths.target - Path Units.
Nov 8 00:07:04.693873 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 8 00:07:04.697053 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:07:04.697830 systemd[1]: Stopped target slices.target - Slice Units.
Nov 8 00:07:04.701018 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 8 00:07:04.703501 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 8 00:07:04.703584 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:07:04.707096 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 8 00:07:04.707155 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:07:04.712234 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 8 00:07:04.712302 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 8 00:07:04.714279 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 8 00:07:04.714329 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 8 00:07:04.716779 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 8 00:07:04.718051 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 8 00:07:04.726346 systemd-networkd[780]: eth0: DHCPv6 lease lost
Nov 8 00:07:04.727512 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 8 00:07:04.733120 systemd-networkd[780]: eth1: DHCPv6 lease lost
Nov 8 00:07:04.734017 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 8 00:07:04.734151 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 8 00:07:04.738510 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 8 00:07:04.739311 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 8 00:07:04.740718 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 8 00:07:04.740816 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 8 00:07:04.742802 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 8 00:07:04.742864 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:07:04.743754 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 8 00:07:04.743806 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 8 00:07:04.751334 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 8 00:07:04.751877 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 8 00:07:04.751936 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:07:04.752714 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 8 00:07:04.752807 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:07:04.753451 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 8 00:07:04.753489 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:07:04.754486 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 8 00:07:04.754523 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:07:04.756075 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:07:04.773443 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 8 00:07:04.773585 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 8 00:07:04.775400 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 8 00:07:04.775539 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:07:04.776916 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 8 00:07:04.777186 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:07:04.778188 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 8 00:07:04.778221 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:07:04.779240 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 8 00:07:04.779290 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:07:04.780698 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 8 00:07:04.780739 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:07:04.782266 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:07:04.782315 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:07:04.793353 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 8 00:07:04.796536 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 8 00:07:04.796655 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:07:04.798604 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:07:04.798670 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:07:04.801372 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 8 00:07:04.801474 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 8 00:07:04.802795 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 8 00:07:04.808217 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 8 00:07:04.820362 systemd[1]: Switching root.
Nov 8 00:07:04.849087 systemd-journald[236]: Journal stopped
Nov 8 00:07:05.848346 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Nov 8 00:07:05.848424 kernel: SELinux: policy capability network_peer_controls=1
Nov 8 00:07:05.848437 kernel: SELinux: policy capability open_perms=1
Nov 8 00:07:05.848451 kernel: SELinux: policy capability extended_socket_class=1
Nov 8 00:07:05.848465 kernel: SELinux: policy capability always_check_network=0
Nov 8 00:07:05.848475 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 8 00:07:05.848489 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 8 00:07:05.848499 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 8 00:07:05.848514 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 8 00:07:05.848524 kernel: audit: type=1403 audit(1762560425.039:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 8 00:07:05.848535 systemd[1]: Successfully loaded SELinux policy in 35.502ms.
Nov 8 00:07:05.848594 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.155ms.
Nov 8 00:07:05.848609 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:07:05.848620 systemd[1]: Detected virtualization kvm.
Nov 8 00:07:05.848633 systemd[1]: Detected architecture arm64.
Nov 8 00:07:05.848644 systemd[1]: Detected first boot.
Nov 8 00:07:05.848654 systemd[1]: Hostname set to .
Nov 8 00:07:05.848665 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:07:05.848675 zram_generator::config[1068]: No configuration found.
Nov 8 00:07:05.850395 systemd[1]: Populated /etc with preset unit settings.
Nov 8 00:07:05.850436 systemd[1]: Queued start job for default target multi-user.target.
Nov 8 00:07:05.850448 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 8 00:07:05.850476 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 8 00:07:05.850487 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 8 00:07:05.850497 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 8 00:07:05.850507 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 8 00:07:05.850518 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 8 00:07:05.850529 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 8 00:07:05.850540 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 8 00:07:05.850567 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 8 00:07:05.850581 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:07:05.850595 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:07:05.850606 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 8 00:07:05.850616 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 8 00:07:05.850626 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 8 00:07:05.850637 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:07:05.850647 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Nov 8 00:07:05.850657 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:07:05.850668 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 8 00:07:05.850680 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:07:05.850695 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:07:05.850706 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:07:05.850716 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:07:05.850727 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 8 00:07:05.850737 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 8 00:07:05.850747 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:07:05.850759 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:07:05.850770 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:07:05.850781 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:07:05.850791 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:07:05.850802 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 8 00:07:05.850813 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 8 00:07:05.850824 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 8 00:07:05.850834 systemd[1]: Mounting media.mount - External Media Directory...
Nov 8 00:07:05.850845 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 8 00:07:05.850857 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 8 00:07:05.850867 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 8 00:07:05.850878 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 8 00:07:05.850888 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:07:05.850899 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:07:05.850912 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 8 00:07:05.850924 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:07:05.850937 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:07:05.850948 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:07:05.851022 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 8 00:07:05.851035 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:07:05.851046 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 8 00:07:05.851056 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Nov 8 00:07:05.851068 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Nov 8 00:07:05.851080 kernel: loop: module loaded
Nov 8 00:07:05.851092 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:07:05.851103 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:07:05.851113 kernel: fuse: init (API version 7.39)
Nov 8 00:07:05.851123 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 8 00:07:05.851134 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 8 00:07:05.851176 systemd-journald[1157]: Collecting audit messages is disabled.
Nov 8 00:07:05.851204 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:07:05.851217 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 8 00:07:05.851229 systemd-journald[1157]: Journal started
Nov 8 00:07:05.851250 systemd-journald[1157]: Runtime Journal (/run/log/journal/d0c4569b7d6845af9a5a8e62cdb1164a) is 8.0M, max 76.6M, 68.6M free.
Nov 8 00:07:05.853092 kernel: ACPI: bus type drm_connector registered
Nov 8 00:07:05.856187 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:07:05.869270 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 8 00:07:05.873040 systemd[1]: Mounted media.mount - External Media Directory.
Nov 8 00:07:05.874900 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 8 00:07:05.876339 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 8 00:07:05.877247 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 8 00:07:05.878301 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 8 00:07:05.879627 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:07:05.880885 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 8 00:07:05.881156 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 8 00:07:05.882350 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:07:05.882658 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:07:05.884053 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:07:05.884303 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:07:05.885403 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:07:05.885600 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:07:05.886716 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 8 00:07:05.886977 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 8 00:07:05.887879 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:07:05.888374 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:07:05.889621 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:07:05.890742 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 8 00:07:05.892220 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 8 00:07:05.905850 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 8 00:07:05.914110 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 8 00:07:05.916754 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 8 00:07:05.917808 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 8 00:07:05.926289 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 8 00:07:05.939398 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 8 00:07:05.941943 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:07:05.945313 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 8 00:07:05.947288 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:07:05.956911 systemd-journald[1157]: Time spent on flushing to /var/log/journal/d0c4569b7d6845af9a5a8e62cdb1164a is 71.878ms for 1106 entries.
Nov 8 00:07:05.956911 systemd-journald[1157]: System Journal (/var/log/journal/d0c4569b7d6845af9a5a8e62cdb1164a) is 8.0M, max 584.8M, 576.8M free.
Nov 8 00:07:06.045089 systemd-journald[1157]: Received client request to flush runtime journal.
Nov 8 00:07:05.957947 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:07:05.962163 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:07:05.967755 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 8 00:07:05.970252 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 8 00:07:05.998526 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 8 00:07:05.999431 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 8 00:07:06.017511 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:07:06.027159 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 8 00:07:06.028333 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:07:06.040015 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Nov 8 00:07:06.040026 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Nov 8 00:07:06.048596 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:07:06.050586 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 8 00:07:06.065340 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 8 00:07:06.067334 udevadm[1214]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 8 00:07:06.097177 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 8 00:07:06.105263 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:07:06.125856 systemd-tmpfiles[1228]: ACLs are not supported, ignoring.
Nov 8 00:07:06.126270 systemd-tmpfiles[1228]: ACLs are not supported, ignoring.
Nov 8 00:07:06.131839 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:07:06.497844 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 8 00:07:06.506200 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:07:06.530634 systemd-udevd[1235]: Using default interface naming scheme 'v255'.
Nov 8 00:07:06.559578 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:07:06.576152 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:07:06.605332 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 8 00:07:06.637711 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Nov 8 00:07:06.679205 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 8 00:07:06.770385 systemd-networkd[1240]: lo: Link UP
Nov 8 00:07:06.775983 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1236)
Nov 8 00:07:06.771145 systemd-networkd[1240]: lo: Gained carrier
Nov 8 00:07:06.772883 systemd-networkd[1240]: Enumeration completed
Nov 8 00:07:06.773055 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:07:06.778256 systemd-networkd[1240]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:07:06.778942 systemd-networkd[1240]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:07:06.779919 systemd-networkd[1240]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:07:06.780027 systemd-networkd[1240]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:07:06.780668 systemd-networkd[1240]: eth0: Link UP
Nov 8 00:07:06.781273 systemd-networkd[1240]: eth0: Gained carrier
Nov 8 00:07:06.781434 systemd-networkd[1240]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:07:06.783172 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 8 00:07:06.784733 systemd-networkd[1240]: eth1: Link UP
Nov 8 00:07:06.784863 systemd-networkd[1240]: eth1: Gained carrier
Nov 8 00:07:06.784969 systemd-networkd[1240]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:07:06.804986 kernel: mousedev: PS/2 mouse device common for all mice
Nov 8 00:07:06.830019 systemd-networkd[1240]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Nov 8 00:07:06.872233 systemd-networkd[1240]: eth0: DHCPv4 address 138.199.234.199/32, gateway 172.31.1.1 acquired from 172.31.1.1
Nov 8 00:07:06.873419 systemd-networkd[1240]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:07:06.890240 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Nov 8 00:07:06.890606 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped.
Nov 8 00:07:06.890775 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:07:06.893446 systemd-networkd[1240]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:07:06.899094 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:07:06.908449 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:07:06.913996 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Nov 8 00:07:06.914065 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 8 00:07:06.914079 kernel: [drm] features: -context_init
Nov 8 00:07:06.914322 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:07:06.917679 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 8 00:07:06.917733 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 8 00:07:06.918156 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:07:06.918319 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:07:06.925169 kernel: [drm] number of scanouts: 1
Nov 8 00:07:06.925231 kernel: [drm] number of cap sets: 0
Nov 8 00:07:06.926281 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:07:06.926471 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:07:06.932948 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Nov 8 00:07:06.940002 kernel: Console: switching to colour frame buffer device 160x50
Nov 8 00:07:06.945668 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 8 00:07:06.948296 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 8 00:07:06.951278 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:07:06.951507 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:07:06.962252 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:07:06.962309 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:07:06.970317 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:07:07.027344 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:07:07.079843 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 8 00:07:07.086257 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 8 00:07:07.101020 lvm[1303]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 00:07:07.129518 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 8 00:07:07.131120 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:07:07.137287 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 8 00:07:07.144158 lvm[1306]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 00:07:07.175830 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 8 00:07:07.178165 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:07:07.178947 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 8 00:07:07.179075 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:07:07.179685 systemd[1]: Reached target machines.target - Containers.
Nov 8 00:07:07.181826 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 8 00:07:07.189208 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 8 00:07:07.197147 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 8 00:07:07.199057 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:07:07.207943 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 8 00:07:07.212459 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 8 00:07:07.223232 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 8 00:07:07.224628 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 8 00:07:07.229578 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 8 00:07:07.247028 kernel: loop0: detected capacity change from 0 to 114328
Nov 8 00:07:07.254021 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 8 00:07:07.255981 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 8 00:07:07.275021 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 8 00:07:07.311014 kernel: loop1: detected capacity change from 0 to 114432
Nov 8 00:07:07.339554 kernel: loop2: detected capacity change from 0 to 207008
Nov 8 00:07:07.377989 kernel: loop3: detected capacity change from 0 to 8
Nov 8 00:07:07.406993 kernel: loop4: detected capacity change from 0 to 114328
Nov 8 00:07:07.420263 kernel: loop5: detected capacity change from 0 to 114432
Nov 8 00:07:07.436095 kernel: loop6: detected capacity change from 0 to 207008
Nov 8 00:07:07.449975 kernel: loop7: detected capacity change from 0 to 8
Nov 8 00:07:07.450219 (sd-merge)[1327]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Nov 8 00:07:07.450703 (sd-merge)[1327]: Merged extensions into '/usr'.
Nov 8 00:07:07.467004 systemd[1]: Reloading requested from client PID 1315 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 8 00:07:07.467021 systemd[1]: Reloading...
Nov 8 00:07:07.540984 zram_generator::config[1355]: No configuration found.
Nov 8 00:07:07.630326 ldconfig[1310]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 8 00:07:07.674522 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:07:07.742737 systemd[1]: Reloading finished in 275 ms.
Nov 8 00:07:07.760756 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 8 00:07:07.763114 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 8 00:07:07.772215 systemd[1]: Starting ensure-sysext.service...
Nov 8 00:07:07.780266 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:07:07.783744 systemd[1]: Reloading requested from client PID 1399 ('systemctl') (unit ensure-sysext.service)...
Nov 8 00:07:07.784327 systemd[1]: Reloading...
Nov 8 00:07:07.816421 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 8 00:07:07.816733 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 8 00:07:07.818929 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 8 00:07:07.819380 systemd-tmpfiles[1400]: ACLs are not supported, ignoring.
Nov 8 00:07:07.819510 systemd-tmpfiles[1400]: ACLs are not supported, ignoring.
Nov 8 00:07:07.823170 systemd-tmpfiles[1400]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:07:07.823282 systemd-tmpfiles[1400]: Skipping /boot
Nov 8 00:07:07.831864 systemd-tmpfiles[1400]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:07:07.831877 systemd-tmpfiles[1400]: Skipping /boot
Nov 8 00:07:07.874018 zram_generator::config[1435]: No configuration found.
Nov 8 00:07:07.979586 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:07:08.047470 systemd[1]: Reloading finished in 262 ms.
Nov 8 00:07:08.064612 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:07:08.078287 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 8 00:07:08.086235 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 8 00:07:08.090235 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 8 00:07:08.095877 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:07:08.108991 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 8 00:07:08.125648 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:07:08.131075 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:07:08.140312 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:07:08.153939 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:07:08.155199 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:07:08.158636 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 8 00:07:08.160162 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:07:08.160453 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:07:08.171005 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:07:08.175985 augenrules[1501]: No rules
Nov 8 00:07:08.177467 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:07:08.178167 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:07:08.182269 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 8 00:07:08.190185 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 8 00:07:08.193472 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:07:08.193910 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:07:08.197780 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:07:08.197974 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:07:08.205668 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 8 00:07:08.209721 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:07:08.212192 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:07:08.226408 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 8 00:07:08.229094 systemd[1]: Finished ensure-sysext.service.
Nov 8 00:07:08.233721 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:07:08.239209 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:07:08.250185 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:07:08.254740 systemd-resolved[1478]: Positive Trust Anchors:
Nov 8 00:07:08.254757 systemd-resolved[1478]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:07:08.254791 systemd-resolved[1478]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:07:08.255351 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:07:08.261190 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:07:08.262207 systemd-resolved[1478]: Using system hostname 'ci-4081-3-6-n-3f5a11d2fe'.
Nov 8 00:07:08.263252 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:07:08.271748 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 8 00:07:08.273061 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:07:08.274179 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 8 00:07:08.277168 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:07:08.277342 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:07:08.279802 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:07:08.280123 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:07:08.282066 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:07:08.282923 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:07:08.284012 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:07:08.284321 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:07:08.289990 systemd[1]: Reached target network.target - Network.
Nov 8 00:07:08.290692 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:07:08.291394 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:07:08.291476 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:07:08.291514 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 8 00:07:08.342290 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 8 00:07:08.344375 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:07:08.345396 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 8 00:07:08.346360 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 8 00:07:08.347299 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 8 00:07:08.348142 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 8 00:07:08.348254 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:07:08.348827 systemd[1]: Reached target time-set.target - System Time Set.
Nov 8 00:07:08.349769 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 8 00:07:08.350540 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 8 00:07:08.351223 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:07:08.352917 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 8 00:07:08.355185 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 8 00:07:08.357577 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 8 00:07:08.360482 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 8 00:07:08.361441 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:07:08.362470 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:07:08.363614 systemd[1]: System is tainted: cgroupsv1
Nov 8 00:07:08.363783 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 8 00:07:08.363869 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 8 00:07:08.365298 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 8 00:07:08.371173 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 8 00:07:08.376197 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 8 00:07:08.386217 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 8 00:07:08.389246 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 8 00:07:08.395056 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 8 00:07:08.396066 coreos-metadata[1543]: Nov 08 00:07:08.396 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Nov 8 00:07:08.398483 coreos-metadata[1543]: Nov 08 00:07:08.397 INFO Fetch successful
Nov 8 00:07:08.398665 coreos-metadata[1543]: Nov 08 00:07:08.398 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Nov 8 00:07:08.399077 coreos-metadata[1543]: Nov 08 00:07:08.399 INFO Fetch successful
Nov 8 00:07:08.401035 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 8 00:07:08.410142 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 8 00:07:08.417977 jq[1547]: false
Nov 8 00:07:08.418636 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Nov 8 00:07:08.431752 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 8 00:07:08.441883 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 8 00:07:08.445824 dbus-daemon[1544]: [system] SELinux support is enabled
Nov 8 00:07:08.458061 extend-filesystems[1549]: Found loop4
Nov 8 00:07:08.458061 extend-filesystems[1549]: Found loop5
Nov 8 00:07:08.458061 extend-filesystems[1549]: Found loop6
Nov 8 00:07:08.458061 extend-filesystems[1549]: Found loop7
Nov 8 00:07:08.458061 extend-filesystems[1549]: Found sda
Nov 8 00:07:08.458061 extend-filesystems[1549]: Found sda1
Nov 8 00:07:08.458061 extend-filesystems[1549]: Found sda2
Nov 8 00:07:08.458061 extend-filesystems[1549]: Found sda3
Nov 8 00:07:08.458061 extend-filesystems[1549]: Found usr
Nov 8 00:07:08.458061 extend-filesystems[1549]: Found sda4
Nov 8 00:07:08.458061 extend-filesystems[1549]: Found sda6
Nov 8 00:07:08.458061 extend-filesystems[1549]: Found sda7
Nov 8 00:07:08.458061 extend-filesystems[1549]: Found sda9
Nov 8 00:07:08.458061 extend-filesystems[1549]: Checking size of /dev/sda9
Nov 8 00:07:08.456217 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 8 00:07:08.510101 extend-filesystems[1549]: Resized partition /dev/sda9
Nov 8 00:07:08.515046 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Nov 8 00:07:08.457675 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 8 00:07:08.515217 extend-filesystems[1577]: resize2fs 1.47.1 (20-May-2024)
Nov 8 00:07:08.469744 systemd[1]: Starting update-engine.service - Update Engine...
Nov 8 00:07:08.480228 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 8 00:07:08.483045 systemd-networkd[1240]: eth0: Gained IPv6LL
Nov 8 00:07:08.521772 jq[1574]: true
Nov 8 00:07:08.494816 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 8 00:07:08.502317 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 8 00:07:08.518937 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 8 00:07:08.519208 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 8 00:07:08.519468 systemd[1]: motdgen.service: Deactivated successfully.
Nov 8 00:07:08.519765 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 8 00:07:08.523841 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 8 00:07:08.524135 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 8 00:07:08.545366 systemd-timesyncd[1529]: Contacted time server 93.177.65.20:123 (0.flatcar.pool.ntp.org).
Nov 8 00:07:08.545507 systemd-timesyncd[1529]: Initial clock synchronization to Sat 2025-11-08 00:07:08.167696 UTC.
Nov 8 00:07:08.546924 systemd-networkd[1240]: eth1: Gained IPv6LL
Nov 8 00:07:08.560494 (ntainerd)[1590]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 8 00:07:08.565992 systemd[1]: Reached target network-online.target - Network is Online.
Nov 8 00:07:08.568928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:07:08.585375 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 8 00:07:08.586073 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 8 00:07:08.586118 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 8 00:07:08.586561 jq[1585]: true
Nov 8 00:07:08.594465 update_engine[1569]: I20251108 00:07:08.592237  1569 main.cc:92] Flatcar Update Engine starting
Nov 8 00:07:08.586823 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 8 00:07:08.586839 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 8 00:07:08.606295 update_engine[1569]: I20251108 00:07:08.606223  1569 update_check_scheduler.cc:74] Next update check in 10m44s
Nov 8 00:07:08.613596 systemd[1]: Started update-engine.service - Update Engine.
Nov 8 00:07:08.624512 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 8 00:07:08.627392 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 8 00:07:08.639460 tar[1584]: linux-arm64/LICENSE
Nov 8 00:07:08.639460 tar[1584]: linux-arm64/helm
Nov 8 00:07:08.655367 systemd-logind[1565]: New seat seat0.
Nov 8 00:07:08.720836 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1241)
Nov 8 00:07:08.667746 systemd-logind[1565]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 8 00:07:08.667764 systemd-logind[1565]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Nov 8 00:07:08.712142 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 8 00:07:08.737947 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 8 00:07:08.776830 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 8 00:07:08.782732 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 8 00:07:08.809370 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Nov 8 00:07:08.822106 extend-filesystems[1577]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Nov 8 00:07:08.822106 extend-filesystems[1577]: old_desc_blocks = 1, new_desc_blocks = 5
Nov 8 00:07:08.822106 extend-filesystems[1577]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Nov 8 00:07:08.829447 extend-filesystems[1549]: Resized filesystem in /dev/sda9
Nov 8 00:07:08.829447 extend-filesystems[1549]: Found sr0
Nov 8 00:07:08.836050 bash[1645]: Updated "/home/core/.ssh/authorized_keys"
Nov 8 00:07:08.841877 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 8 00:07:08.842256 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 8 00:07:08.849803 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 8 00:07:08.860402 containerd[1590]: time="2025-11-08T00:07:08.860311920Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 8 00:07:08.868461 systemd[1]: Starting sshkeys.service...
Nov 8 00:07:08.921936 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 8 00:07:08.937304 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 8 00:07:08.944980 containerd[1590]: time="2025-11-08T00:07:08.944915360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:07:08.946635 containerd[1590]: time="2025-11-08T00:07:08.946590520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:07:08.946743 containerd[1590]: time="2025-11-08T00:07:08.946730320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 8 00:07:08.946815 containerd[1590]: time="2025-11-08T00:07:08.946802640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 8 00:07:08.947062 containerd[1590]: time="2025-11-08T00:07:08.947043800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 8 00:07:08.947143 containerd[1590]: time="2025-11-08T00:07:08.947129560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 8 00:07:08.947259 containerd[1590]: time="2025-11-08T00:07:08.947242200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:07:08.947311 containerd[1590]: time="2025-11-08T00:07:08.947299360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:07:08.947603 containerd[1590]: time="2025-11-08T00:07:08.947580920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:07:08.948062 containerd[1590]: time="2025-11-08T00:07:08.947666560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 8 00:07:08.948062 containerd[1590]: time="2025-11-08T00:07:08.947687920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:07:08.948062 containerd[1590]: time="2025-11-08T00:07:08.947699680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 8 00:07:08.948062 containerd[1590]: time="2025-11-08T00:07:08.947786720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:07:08.948062 containerd[1590]: time="2025-11-08T00:07:08.948028760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:07:08.948395 containerd[1590]: time="2025-11-08T00:07:08.948374360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:07:08.948464 containerd[1590]: time="2025-11-08T00:07:08.948452080Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 8 00:07:08.948664 containerd[1590]: time="2025-11-08T00:07:08.948643440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 8 00:07:08.948766 containerd[1590]: time="2025-11-08T00:07:08.948751640Z" level=info msg="metadata content store policy set" policy=shared
Nov 8 00:07:08.955165 containerd[1590]: time="2025-11-08T00:07:08.955021560Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 8 00:07:08.955165 containerd[1590]: time="2025-11-08T00:07:08.955098440Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 8 00:07:08.955165 containerd[1590]: time="2025-11-08T00:07:08.955116200Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 8 00:07:08.955165 containerd[1590]: time="2025-11-08T00:07:08.955136360Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 8 00:07:08.958976 containerd[1590]: time="2025-11-08T00:07:08.957096920Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 8 00:07:08.958976 containerd[1590]: time="2025-11-08T00:07:08.957285040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 8 00:07:08.958976 containerd[1590]: time="2025-11-08T00:07:08.957661400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 8 00:07:08.958976 containerd[1590]: time="2025-11-08T00:07:08.957775480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 8 00:07:08.958976 containerd[1590]: time="2025-11-08T00:07:08.957791520Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 8 00:07:08.958976 containerd[1590]: time="2025-11-08T00:07:08.957807320Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 8 00:07:08.958976 containerd[1590]: time="2025-11-08T00:07:08.957824040Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 8 00:07:08.958976 containerd[1590]: time="2025-11-08T00:07:08.957836560Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 8 00:07:08.958976 containerd[1590]: time="2025-11-08T00:07:08.957851800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 8 00:07:08.958976 containerd[1590]: time="2025-11-08T00:07:08.957879600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 8 00:07:08.958976 containerd[1590]: time="2025-11-08T00:07:08.957899360Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 8 00:07:08.958976 containerd[1590]: time="2025-11-08T00:07:08.957912560Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 8 00:07:08.958976 containerd[1590]: time="2025-11-08T00:07:08.957939200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 8 00:07:08.958976 containerd[1590]: time="2025-11-08T00:07:08.957967120Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 8 00:07:08.959354 containerd[1590]: time="2025-11-08T00:07:08.958007720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 8 00:07:08.959354 containerd[1590]: time="2025-11-08T00:07:08.958057280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 8 00:07:08.959354 containerd[1590]: time="2025-11-08T00:07:08.958071480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 8 00:07:08.959354 containerd[1590]: time="2025-11-08T00:07:08.958087400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 8 00:07:08.959354 containerd[1590]: time="2025-11-08T00:07:08.958100040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 8 00:07:08.959354 containerd[1590]: time="2025-11-08T00:07:08.958118480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 8 00:07:08.959354 containerd[1590]: time="2025-11-08T00:07:08.958131720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 8 00:07:08.959354 containerd[1590]: time="2025-11-08T00:07:08.958144440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 8 00:07:08.959354 containerd[1590]: time="2025-11-08T00:07:08.958160720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 8 00:07:08.959354 containerd[1590]: time="2025-11-08T00:07:08.958178600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 8 00:07:08.959354 containerd[1590]: time="2025-11-08T00:07:08.958190600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 8 00:07:08.959354 containerd[1590]: time="2025-11-08T00:07:08.958207240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 8 00:07:08.959354 containerd[1590]: time="2025-11-08T00:07:08.958220680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 8 00:07:08.959354 containerd[1590]: time="2025-11-08T00:07:08.958250120Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 8 00:07:08.959354 containerd[1590]: time="2025-11-08T00:07:08.958272080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 8 00:07:08.959769 containerd[1590]: time="2025-11-08T00:07:08.958289840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 8 00:07:08.959769 containerd[1590]: time="2025-11-08T00:07:08.958302840Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 8 00:07:08.959769 containerd[1590]: time="2025-11-08T00:07:08.958501360Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 8 00:07:08.959769 containerd[1590]: time="2025-11-08T00:07:08.958543000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 8 00:07:08.959769 containerd[1590]: time="2025-11-08T00:07:08.958560880Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 8 00:07:08.959769 containerd[1590]: time="2025-11-08T00:07:08.958575080Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 8 00:07:08.959769 containerd[1590]: time="2025-11-08T00:07:08.958585000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 8 00:07:08.959769 containerd[1590]: time="2025-11-08T00:07:08.958597000Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 8 00:07:08.959769 containerd[1590]: time="2025-11-08T00:07:08.958610040Z" level=info msg="NRI interface is disabled by configuration."
Nov 8 00:07:08.959769 containerd[1590]: time="2025-11-08T00:07:08.958621560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 8 00:07:08.962988 containerd[1590]: time="2025-11-08T00:07:08.961141320Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 8 00:07:08.962988 containerd[1590]: time="2025-11-08T00:07:08.961220840Z" level=info msg="Connect containerd service"
Nov 8 00:07:08.962988 containerd[1590]: time="2025-11-08T00:07:08.961329080Z" level=info msg="using legacy CRI server"
Nov 8 00:07:08.962988 containerd[1590]: time="2025-11-08T00:07:08.961336720Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 8 00:07:08.962988 containerd[1590]: time="2025-11-08T00:07:08.961424000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 8 00:07:08.968971 containerd[1590]: time="2025-11-08T00:07:08.965071800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 8 00:07:08.968971 containerd[1590]: time="2025-11-08T00:07:08.966128320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 8 00:07:08.968971 containerd[1590]: time="2025-11-08T00:07:08.966181400Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 8 00:07:08.968971 containerd[1590]: time="2025-11-08T00:07:08.966409120Z" level=info msg="Start subscribing containerd event"
Nov 8 00:07:08.968971 containerd[1590]: time="2025-11-08T00:07:08.966452920Z" level=info msg="Start recovering state"
Nov 8 00:07:08.968971 containerd[1590]: time="2025-11-08T00:07:08.966559240Z" level=info msg="Start event monitor"
Nov 8 00:07:08.968971 containerd[1590]: time="2025-11-08T00:07:08.966576360Z" level=info msg="Start snapshots syncer"
Nov 8 00:07:08.968971 containerd[1590]: time="2025-11-08T00:07:08.966587040Z" level=info msg="Start cni network conf syncer for default"
Nov 8 00:07:08.968971 containerd[1590]: time="2025-11-08T00:07:08.966594080Z" level=info msg="Start streaming server"
Nov 8 00:07:08.968971 containerd[1590]: time="2025-11-08T00:07:08.966774320Z" level=info msg="containerd successfully booted in 0.113076s"
Nov 8 00:07:08.966924 systemd[1]: Started containerd.service - containerd container runtime.
Nov 8 00:07:08.993008 coreos-metadata[1657]: Nov 08 00:07:08.992 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Nov 8 00:07:08.994081 locksmithd[1607]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 8 00:07:08.995614 coreos-metadata[1657]: Nov 08 00:07:08.994 INFO Fetch successful
Nov 8 00:07:08.998315 unknown[1657]: wrote ssh authorized keys file for user: core
Nov 8 00:07:09.022079 update-ssh-keys[1665]: Updated "/home/core/.ssh/authorized_keys"
Nov 8 00:07:09.026181 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 8 00:07:09.036361 systemd[1]: Finished sshkeys.service.
Nov 8 00:07:09.714358 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:07:09.728310 (kubelet)[1679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 00:07:09.737513 tar[1584]: linux-arm64/README.md
Nov 8 00:07:09.757481 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 8 00:07:10.158803 sshd_keygen[1579]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 8 00:07:10.187376 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 8 00:07:10.198315 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 8 00:07:10.207248 systemd[1]: issuegen.service: Deactivated successfully.
Nov 8 00:07:10.207529 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 8 00:07:10.215691 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 8 00:07:10.228475 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 8 00:07:10.233628 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 8 00:07:10.241310 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Nov 8 00:07:10.245274 systemd[1]: Reached target getty.target - Login Prompts.
Nov 8 00:07:10.247167 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 8 00:07:10.248481 systemd[1]: Startup finished in 6.158s (kernel) + 5.243s (userspace) = 11.401s.
Nov 8 00:07:10.251859 kubelet[1679]: E1108 00:07:10.251785    1679 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:07:10.259395 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:07:10.259636 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:07:20.443305 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 8 00:07:20.456404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:07:20.585330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:07:20.586194 (kubelet)[1728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 00:07:20.645892 kubelet[1728]: E1108 00:07:20.645818    1728 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:07:20.650558 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:07:20.650789 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:07:30.693473 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 8 00:07:30.709236 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:07:30.830311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:07:30.843640 (kubelet)[1748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 00:07:30.898616 kubelet[1748]: E1108 00:07:30.898548    1748 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:07:30.902180 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:07:30.902368 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:07:40.943505 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 8 00:07:40.950287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:07:41.095147 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:07:41.099926 (kubelet)[1768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 00:07:41.145180 kubelet[1768]: E1108 00:07:41.145102    1768 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:07:41.150201 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:07:41.150398 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:07:43.286567 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 8 00:07:43.300534 systemd[1]: Started sshd@0-138.199.234.199:22-139.178.68.195:60600.service - OpenSSH per-connection server daemon (139.178.68.195:60600).
Nov 8 00:07:44.237656 sshd[1776]: Accepted publickey for core from 139.178.68.195 port 60600 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM
Nov 8 00:07:44.240407 sshd[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:07:44.250232 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 8 00:07:44.257139 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 8 00:07:44.261235 systemd-logind[1565]: New session 1 of user core.
Nov 8 00:07:44.270602 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 8 00:07:44.281573 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 8 00:07:44.285865 (systemd)[1782]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 8 00:07:44.400111 systemd[1782]: Queued start job for default target default.target.
Nov 8 00:07:44.400860 systemd[1782]: Created slice app.slice - User Application Slice.
Nov 8 00:07:44.400882 systemd[1782]: Reached target paths.target - Paths.
Nov 8 00:07:44.400893 systemd[1782]: Reached target timers.target - Timers.
Nov 8 00:07:44.407154 systemd[1782]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 8 00:07:44.417395 systemd[1782]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 8 00:07:44.417710 systemd[1782]: Reached target sockets.target - Sockets.
Nov 8 00:07:44.417874 systemd[1782]: Reached target basic.target - Basic System.
Nov 8 00:07:44.418011 systemd[1782]: Reached target default.target - Main User Target.
Nov 8 00:07:44.418191 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 8 00:07:44.418463 systemd[1782]: Startup finished in 124ms.
Nov 8 00:07:44.431689 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 8 00:07:45.089442 systemd[1]: Started sshd@1-138.199.234.199:22-139.178.68.195:60608.service - OpenSSH per-connection server daemon (139.178.68.195:60608).
Nov 8 00:07:46.044635 sshd[1794]: Accepted publickey for core from 139.178.68.195 port 60608 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM
Nov 8 00:07:46.047281 sshd[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:07:46.052671 systemd-logind[1565]: New session 2 of user core.
Nov 8 00:07:46.060496 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 8 00:07:46.709395 sshd[1794]: pam_unix(sshd:session): session closed for user core
Nov 8 00:07:46.715385 systemd[1]: sshd@1-138.199.234.199:22-139.178.68.195:60608.service: Deactivated successfully.
Nov 8 00:07:46.719435 systemd[1]: session-2.scope: Deactivated successfully.
Nov 8 00:07:46.720528 systemd-logind[1565]: Session 2 logged out. Waiting for processes to exit.
Nov 8 00:07:46.721548 systemd-logind[1565]: Removed session 2.
Nov 8 00:07:46.865744 systemd[1]: Started sshd@2-138.199.234.199:22-139.178.68.195:60620.service - OpenSSH per-connection server daemon (139.178.68.195:60620).
Nov 8 00:07:47.805810 sshd[1802]: Accepted publickey for core from 139.178.68.195 port 60620 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM
Nov 8 00:07:47.808421 sshd[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:07:47.816342 systemd-logind[1565]: New session 3 of user core.
Nov 8 00:07:47.824450 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 8 00:07:48.450488 sshd[1802]: pam_unix(sshd:session): session closed for user core
Nov 8 00:07:48.454568 systemd[1]: sshd@2-138.199.234.199:22-139.178.68.195:60620.service: Deactivated successfully.
Nov 8 00:07:48.458555 systemd-logind[1565]: Session 3 logged out. Waiting for processes to exit.
Nov 8 00:07:48.459251 systemd[1]: session-3.scope: Deactivated successfully.
Nov 8 00:07:48.460409 systemd-logind[1565]: Removed session 3.
Nov 8 00:07:48.613447 systemd[1]: Started sshd@3-138.199.234.199:22-139.178.68.195:60630.service - OpenSSH per-connection server daemon (139.178.68.195:60630).
Nov 8 00:07:49.558964 sshd[1810]: Accepted publickey for core from 139.178.68.195 port 60630 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM
Nov 8 00:07:49.561436 sshd[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:07:49.567583 systemd-logind[1565]: New session 4 of user core.
Nov 8 00:07:49.576626 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 8 00:07:50.218016 sshd[1810]: pam_unix(sshd:session): session closed for user core
Nov 8 00:07:50.223469 systemd[1]: sshd@3-138.199.234.199:22-139.178.68.195:60630.service: Deactivated successfully.
Nov 8 00:07:50.223534 systemd-logind[1565]: Session 4 logged out. Waiting for processes to exit.
Nov 8 00:07:50.226469 systemd[1]: session-4.scope: Deactivated successfully.
Nov 8 00:07:50.227538 systemd-logind[1565]: Removed session 4.
Nov 8 00:07:50.383492 systemd[1]: Started sshd@4-138.199.234.199:22-139.178.68.195:60642.service - OpenSSH per-connection server daemon (139.178.68.195:60642).
Nov 8 00:07:51.155000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Nov 8 00:07:51.163314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:07:51.296168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:07:51.300080 (kubelet)[1832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 00:07:51.316995 sshd[1818]: Accepted publickey for core from 139.178.68.195 port 60642 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM
Nov 8 00:07:51.319075 sshd[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:07:51.326484 systemd-logind[1565]: New session 5 of user core.
Nov 8 00:07:51.334548 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 8 00:07:51.344107 kubelet[1832]: E1108 00:07:51.344067    1832 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:07:51.347178 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:07:51.347626 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:07:51.829905 sudo[1842]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 8 00:07:51.830306 sudo[1842]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 8 00:07:51.843058 sudo[1842]: pam_unix(sudo:session): session closed for user root
Nov 8 00:07:51.996367 sshd[1818]: pam_unix(sshd:session): session closed for user core
Nov 8 00:07:52.001076 systemd[1]: sshd@4-138.199.234.199:22-139.178.68.195:60642.service: Deactivated successfully.
Nov 8 00:07:52.005325 systemd[1]: session-5.scope: Deactivated successfully.
Nov 8 00:07:52.006751 systemd-logind[1565]: Session 5 logged out. Waiting for processes to exit.
Nov 8 00:07:52.007947 systemd-logind[1565]: Removed session 5.
Nov 8 00:07:52.161489 systemd[1]: Started sshd@5-138.199.234.199:22-139.178.68.195:60658.service - OpenSSH per-connection server daemon (139.178.68.195:60658). Nov 8 00:07:53.095997 sshd[1847]: Accepted publickey for core from 139.178.68.195 port 60658 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:07:53.098560 sshd[1847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:07:53.105235 systemd-logind[1565]: New session 6 of user core. Nov 8 00:07:53.113497 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:07:53.599269 sudo[1852]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:07:53.599564 sudo[1852]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:07:53.603711 sudo[1852]: pam_unix(sudo:session): session closed for user root Nov 8 00:07:53.609207 sudo[1851]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:07:53.609478 sudo[1851]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:07:53.630649 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:07:53.632240 auditctl[1855]: No rules Nov 8 00:07:53.633918 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:07:53.634231 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:07:53.637720 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:07:53.667038 augenrules[1874]: No rules Nov 8 00:07:53.669343 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:07:53.671470 sudo[1851]: pam_unix(sudo:session): session closed for user root Nov 8 00:07:53.824326 sshd[1847]: pam_unix(sshd:session): session closed for user core Nov 8 00:07:53.829188 systemd-logind[1565]: Session 6 logged out. 
Waiting for processes to exit. Nov 8 00:07:53.830479 systemd[1]: sshd@5-138.199.234.199:22-139.178.68.195:60658.service: Deactivated successfully. Nov 8 00:07:53.835650 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:07:53.837452 systemd-logind[1565]: Removed session 6. Nov 8 00:07:53.847396 update_engine[1569]: I20251108 00:07:53.847313 1569 update_attempter.cc:509] Updating boot flags... Nov 8 00:07:53.902024 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1892) Nov 8 00:07:53.988983 systemd[1]: Started sshd@6-138.199.234.199:22-139.178.68.195:43686.service - OpenSSH per-connection server daemon (139.178.68.195:43686). Nov 8 00:07:54.924924 sshd[1898]: Accepted publickey for core from 139.178.68.195 port 43686 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:07:54.927564 sshd[1898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:07:54.932980 systemd-logind[1565]: New session 7 of user core. Nov 8 00:07:54.948585 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:07:55.428945 sudo[1902]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:07:55.429307 sudo[1902]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:07:55.735529 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:07:55.735877 (dockerd)[1917]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:07:55.990320 dockerd[1917]: time="2025-11-08T00:07:55.990143333Z" level=info msg="Starting up" Nov 8 00:07:56.100704 dockerd[1917]: time="2025-11-08T00:07:56.100664042Z" level=info msg="Loading containers: start." 
Nov 8 00:07:56.224175 kernel: Initializing XFRM netlink socket Nov 8 00:07:56.318166 systemd-networkd[1240]: docker0: Link UP Nov 8 00:07:56.338002 dockerd[1917]: time="2025-11-08T00:07:56.337732495Z" level=info msg="Loading containers: done." Nov 8 00:07:56.360852 dockerd[1917]: time="2025-11-08T00:07:56.360758352Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:07:56.361206 dockerd[1917]: time="2025-11-08T00:07:56.361036079Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:07:56.361341 dockerd[1917]: time="2025-11-08T00:07:56.361297604Z" level=info msg="Daemon has completed initialization" Nov 8 00:07:56.401052 dockerd[1917]: time="2025-11-08T00:07:56.400405330Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:07:56.401587 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:07:57.494479 containerd[1590]: time="2025-11-08T00:07:57.494418493Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 8 00:07:58.100748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1099638757.mount: Deactivated successfully. 
Nov 8 00:07:59.798997 containerd[1590]: time="2025-11-08T00:07:59.798279709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:07:59.799702 containerd[1590]: time="2025-11-08T00:07:59.799661954Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363783" Nov 8 00:07:59.800481 containerd[1590]: time="2025-11-08T00:07:59.800437749Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:07:59.803811 containerd[1590]: time="2025-11-08T00:07:59.803738358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:07:59.805237 containerd[1590]: time="2025-11-08T00:07:59.805198775Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 2.310724313s" Nov 8 00:07:59.805398 containerd[1590]: time="2025-11-08T00:07:59.805381842Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Nov 8 00:07:59.806386 containerd[1590]: time="2025-11-08T00:07:59.806354986Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 8 00:08:01.422667 containerd[1590]: time="2025-11-08T00:08:01.422592736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:01.424362 containerd[1590]: time="2025-11-08T00:08:01.424306408Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531220" Nov 8 00:08:01.425258 containerd[1590]: time="2025-11-08T00:08:01.425185607Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:01.428249 containerd[1590]: time="2025-11-08T00:08:01.428206096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:01.431101 containerd[1590]: time="2025-11-08T00:08:01.430791247Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.624392174s" Nov 8 00:08:01.431101 containerd[1590]: time="2025-11-08T00:08:01.430834292Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Nov 8 00:08:01.431647 containerd[1590]: time="2025-11-08T00:08:01.431622719Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 8 00:08:01.443012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 8 00:08:01.453433 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:08:01.579193 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 8 00:08:01.582941 (kubelet)[2126]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:08:01.630454 kubelet[2126]: E1108 00:08:01.630388 2126 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:08:01.635224 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:08:01.635421 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:08:02.918077 containerd[1590]: time="2025-11-08T00:08:02.918016338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:02.919720 containerd[1590]: time="2025-11-08T00:08:02.919156365Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484344" Nov 8 00:08:02.921981 containerd[1590]: time="2025-11-08T00:08:02.921234475Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:02.927320 containerd[1590]: time="2025-11-08T00:08:02.927275098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:02.928835 containerd[1590]: time="2025-11-08T00:08:02.928787014Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.49713005s" Nov 8 00:08:02.928835 containerd[1590]: time="2025-11-08T00:08:02.928834620Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Nov 8 00:08:02.929769 containerd[1590]: time="2025-11-08T00:08:02.929728136Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 8 00:08:04.275110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3946545062.mount: Deactivated successfully. Nov 8 00:08:04.632109 containerd[1590]: time="2025-11-08T00:08:04.631734961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:04.633591 containerd[1590]: time="2025-11-08T00:08:04.633532735Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417843" Nov 8 00:08:04.634620 containerd[1590]: time="2025-11-08T00:08:04.634562778Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:04.637021 containerd[1590]: time="2025-11-08T00:08:04.636908097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:04.638223 containerd[1590]: time="2025-11-08T00:08:04.638185009Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", 
size \"27416836\" in 1.708412948s" Nov 8 00:08:04.638513 containerd[1590]: time="2025-11-08T00:08:04.638323786Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Nov 8 00:08:04.639018 containerd[1590]: time="2025-11-08T00:08:04.638989425Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 8 00:08:05.269936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount450703178.mount: Deactivated successfully. Nov 8 00:08:06.108982 containerd[1590]: time="2025-11-08T00:08:06.106934225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:06.108982 containerd[1590]: time="2025-11-08T00:08:06.108271251Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Nov 8 00:08:06.108982 containerd[1590]: time="2025-11-08T00:08:06.108900600Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:06.112282 containerd[1590]: time="2025-11-08T00:08:06.112243528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:06.113733 containerd[1590]: time="2025-11-08T00:08:06.113684246Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.474657256s" Nov 8 00:08:06.113733 containerd[1590]: 
time="2025-11-08T00:08:06.113729691Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Nov 8 00:08:06.115168 containerd[1590]: time="2025-11-08T00:08:06.115129604Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:08:06.672520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1958152123.mount: Deactivated successfully. Nov 8 00:08:06.680905 containerd[1590]: time="2025-11-08T00:08:06.679977110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:06.681096 containerd[1590]: time="2025-11-08T00:08:06.681075030Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Nov 8 00:08:06.681949 containerd[1590]: time="2025-11-08T00:08:06.681924084Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:06.684886 containerd[1590]: time="2025-11-08T00:08:06.684822162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:06.687296 containerd[1590]: time="2025-11-08T00:08:06.687239747Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 572.056977ms" Nov 8 00:08:06.687658 containerd[1590]: time="2025-11-08T00:08:06.687476293Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference 
\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 8 00:08:06.688864 containerd[1590]: time="2025-11-08T00:08:06.688164249Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 8 00:08:07.304913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1036076432.mount: Deactivated successfully. Nov 8 00:08:09.657502 containerd[1590]: time="2025-11-08T00:08:09.657407037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:09.658974 containerd[1590]: time="2025-11-08T00:08:09.658885462Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943239" Nov 8 00:08:09.660395 containerd[1590]: time="2025-11-08T00:08:09.660346445Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:09.665683 containerd[1590]: time="2025-11-08T00:08:09.665597399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:09.667977 containerd[1590]: time="2025-11-08T00:08:09.667584073Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.9793773s" Nov 8 00:08:09.667977 containerd[1590]: time="2025-11-08T00:08:09.667635358Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Nov 8 00:08:11.692630 systemd[1]: kubelet.service: Scheduled restart job, 
restart counter is at 6. Nov 8 00:08:11.701292 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:08:11.834853 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:08:11.844879 (kubelet)[2289]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:08:11.896477 kubelet[2289]: E1108 00:08:11.896432 2289 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:08:11.901157 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:08:11.901319 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:08:14.864941 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:08:14.874481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:08:14.909275 systemd[1]: Reloading requested from client PID 2306 ('systemctl') (unit session-7.scope)... Nov 8 00:08:14.909297 systemd[1]: Reloading... Nov 8 00:08:15.022989 zram_generator::config[2350]: No configuration found. Nov 8 00:08:15.153178 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:08:15.231859 systemd[1]: Reloading finished in 322 ms. Nov 8 00:08:15.296466 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:08:15.296604 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:08:15.297228 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 8 00:08:15.303862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:08:15.436215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:08:15.452712 (kubelet)[2407]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:08:15.501090 kubelet[2407]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:08:15.501090 kubelet[2407]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:08:15.501090 kubelet[2407]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 8 00:08:15.501597 kubelet[2407]: I1108 00:08:15.501130 2407 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:08:16.453986 kubelet[2407]: I1108 00:08:16.452190 2407 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:08:16.453986 kubelet[2407]: I1108 00:08:16.452226 2407 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:08:16.453986 kubelet[2407]: I1108 00:08:16.452564 2407 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:08:16.483532 kubelet[2407]: E1108 00:08:16.483486 2407 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://138.199.234.199:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 138.199.234.199:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:08:16.490540 kubelet[2407]: I1108 00:08:16.490460 2407 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:08:16.498593 kubelet[2407]: E1108 00:08:16.498526 2407 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:08:16.498593 kubelet[2407]: I1108 00:08:16.498580 2407 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:08:16.502365 kubelet[2407]: I1108 00:08:16.502321 2407 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:08:16.502846 kubelet[2407]: I1108 00:08:16.502772 2407 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:08:16.503071 kubelet[2407]: I1108 00:08:16.502799 2407 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-3f5a11d2fe","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 8 00:08:16.503225 kubelet[2407]: I1108 00:08:16.503125 2407 topology_manager.go:138] "Creating topology manager with 
none policy" Nov 8 00:08:16.503225 kubelet[2407]: I1108 00:08:16.503137 2407 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:08:16.503396 kubelet[2407]: I1108 00:08:16.503343 2407 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:08:16.506771 kubelet[2407]: I1108 00:08:16.506699 2407 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:08:16.506929 kubelet[2407]: I1108 00:08:16.506833 2407 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:08:16.506929 kubelet[2407]: I1108 00:08:16.506859 2407 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:08:16.506929 kubelet[2407]: I1108 00:08:16.506870 2407 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:08:16.508161 kubelet[2407]: W1108 00:08:16.507920 2407 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.234.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-3f5a11d2fe&limit=500&resourceVersion=0": dial tcp 138.199.234.199:6443: connect: connection refused Nov 8 00:08:16.508270 kubelet[2407]: E1108 00:08:16.508179 2407 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://138.199.234.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-3f5a11d2fe&limit=500&resourceVersion=0\": dial tcp 138.199.234.199:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:08:16.512004 kubelet[2407]: W1108 00:08:16.511085 2407 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.234.199:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 138.199.234.199:6443: connect: connection refused Nov 8 00:08:16.512004 kubelet[2407]: E1108 00:08:16.511146 2407 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://138.199.234.199:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 138.199.234.199:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:08:16.512004 kubelet[2407]: I1108 00:08:16.511462 2407 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:08:16.512255 kubelet[2407]: I1108 00:08:16.512225 2407 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:08:16.512392 kubelet[2407]: W1108 00:08:16.512358 2407 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:08:16.514473 kubelet[2407]: I1108 00:08:16.514433 2407 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:08:16.514473 kubelet[2407]: I1108 00:08:16.514480 2407 server.go:1287] "Started kubelet" Nov 8 00:08:16.520589 kubelet[2407]: I1108 00:08:16.520549 2407 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:08:16.524368 kubelet[2407]: E1108 00:08:16.524103 2407 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.234.199:6443/api/v1/namespaces/default/events\": dial tcp 138.199.234.199:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-3f5a11d2fe.1875df6ee1e15c66 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-3f5a11d2fe,UID:ci-4081-3-6-n-3f5a11d2fe,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-3f5a11d2fe,},FirstTimestamp:2025-11-08 00:08:16.514456678 +0000 UTC m=+1.058036803,LastTimestamp:2025-11-08 00:08:16.514456678 +0000 UTC 
m=+1.058036803,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-3f5a11d2fe,}" Nov 8 00:08:16.527629 kubelet[2407]: I1108 00:08:16.527597 2407 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:08:16.527814 kubelet[2407]: I1108 00:08:16.527783 2407 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:08:16.527937 kubelet[2407]: E1108 00:08:16.527912 2407 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-3f5a11d2fe\" not found" Nov 8 00:08:16.528836 kubelet[2407]: I1108 00:08:16.528810 2407 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:08:16.529509 kubelet[2407]: I1108 00:08:16.529478 2407 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:08:16.529576 kubelet[2407]: I1108 00:08:16.529553 2407 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:08:16.530861 kubelet[2407]: E1108 00:08:16.530813 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.234.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-3f5a11d2fe?timeout=10s\": dial tcp 138.199.234.199:6443: connect: connection refused" interval="200ms" Nov 8 00:08:16.531610 kubelet[2407]: E1108 00:08:16.531575 2407 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:08:16.532501 kubelet[2407]: I1108 00:08:16.532440 2407 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:08:16.532826 kubelet[2407]: I1108 00:08:16.532810 2407 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:08:16.533481 kubelet[2407]: I1108 00:08:16.533450 2407 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:08:16.536979 kubelet[2407]: W1108 00:08:16.536911 2407 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.234.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.234.199:6443: connect: connection refused Nov 8 00:08:16.537151 kubelet[2407]: E1108 00:08:16.537128 2407 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://138.199.234.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 138.199.234.199:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:08:16.541001 kubelet[2407]: I1108 00:08:16.540650 2407 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:08:16.541813 kubelet[2407]: I1108 00:08:16.541790 2407 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:08:16.542107 kubelet[2407]: I1108 00:08:16.542081 2407 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:08:16.565279 kubelet[2407]: I1108 00:08:16.564929 2407 kubelet_network_linux.go:50] 
"Initialized iptables rules." protocol="IPv4" Nov 8 00:08:16.566452 kubelet[2407]: I1108 00:08:16.566128 2407 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:08:16.566452 kubelet[2407]: I1108 00:08:16.566155 2407 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:08:16.566452 kubelet[2407]: I1108 00:08:16.566175 2407 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:08:16.567332 kubelet[2407]: I1108 00:08:16.567293 2407 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:08:16.567723 kubelet[2407]: I1108 00:08:16.567417 2407 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:08:16.567723 kubelet[2407]: I1108 00:08:16.567444 2407 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:08:16.567723 kubelet[2407]: I1108 00:08:16.567451 2407 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:08:16.567723 kubelet[2407]: E1108 00:08:16.567590 2407 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:08:16.570284 kubelet[2407]: I1108 00:08:16.569911 2407 policy_none.go:49] "None policy: Start" Nov 8 00:08:16.570284 kubelet[2407]: I1108 00:08:16.569937 2407 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:08:16.570284 kubelet[2407]: I1108 00:08:16.569964 2407 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:08:16.570284 kubelet[2407]: W1108 00:08:16.570135 2407 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.234.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.234.199:6443: connect: connection refused Nov 8 00:08:16.570284 kubelet[2407]: E1108 00:08:16.570189 2407 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://138.199.234.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 138.199.234.199:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:08:16.580433 kubelet[2407]: I1108 00:08:16.580124 2407 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:08:16.581044 kubelet[2407]: I1108 00:08:16.581026 2407 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:08:16.581161 kubelet[2407]: I1108 00:08:16.581121 2407 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:08:16.581568 kubelet[2407]: I1108 00:08:16.581550 2407 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:08:16.583263 kubelet[2407]: E1108 00:08:16.583138 2407 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:08:16.583263 kubelet[2407]: E1108 00:08:16.583250 2407 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-3f5a11d2fe\" not found" Nov 8 00:08:16.675049 kubelet[2407]: E1108 00:08:16.674629 2407 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-3f5a11d2fe\" not found" node="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:16.679237 kubelet[2407]: E1108 00:08:16.679195 2407 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-3f5a11d2fe\" not found" node="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:16.682277 kubelet[2407]: E1108 00:08:16.682013 2407 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-3f5a11d2fe\" not found" node="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:16.684055 kubelet[2407]: I1108 00:08:16.684031 2407 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:16.684734 kubelet[2407]: E1108 00:08:16.684708 2407 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://138.199.234.199:6443/api/v1/nodes\": dial tcp 138.199.234.199:6443: connect: connection refused" node="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:16.731155 kubelet[2407]: I1108 00:08:16.730424 2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b48b531cc340e4946358e39aa9a16365-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-3f5a11d2fe\" (UID: \"b48b531cc340e4946358e39aa9a16365\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:16.731155 kubelet[2407]: I1108 00:08:16.730524 2407 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d8e2914269ad590a0ec87be731362e28-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe\" (UID: \"d8e2914269ad590a0ec87be731362e28\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:16.731155 kubelet[2407]: I1108 00:08:16.730547 2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d8e2914269ad590a0ec87be731362e28-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe\" (UID: \"d8e2914269ad590a0ec87be731362e28\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:16.731155 kubelet[2407]: I1108 00:08:16.730580 2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a722946f6d98eca7ba448b846cf941d-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-3f5a11d2fe\" (UID: \"5a722946f6d98eca7ba448b846cf941d\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:16.731155 kubelet[2407]: I1108 00:08:16.730595 2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d8e2914269ad590a0ec87be731362e28-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe\" (UID: \"d8e2914269ad590a0ec87be731362e28\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:16.731814 kubelet[2407]: I1108 00:08:16.730611 2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b48b531cc340e4946358e39aa9a16365-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-3f5a11d2fe\" (UID: \"b48b531cc340e4946358e39aa9a16365\") " 
pod="kube-system/kube-apiserver-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:16.731814 kubelet[2407]: I1108 00:08:16.730627 2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b48b531cc340e4946358e39aa9a16365-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-3f5a11d2fe\" (UID: \"b48b531cc340e4946358e39aa9a16365\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:16.731814 kubelet[2407]: I1108 00:08:16.730641 2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d8e2914269ad590a0ec87be731362e28-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe\" (UID: \"d8e2914269ad590a0ec87be731362e28\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:16.731814 kubelet[2407]: I1108 00:08:16.730655 2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d8e2914269ad590a0ec87be731362e28-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe\" (UID: \"d8e2914269ad590a0ec87be731362e28\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:16.732406 kubelet[2407]: E1108 00:08:16.732314 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.234.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-3f5a11d2fe?timeout=10s\": dial tcp 138.199.234.199:6443: connect: connection refused" interval="400ms" Nov 8 00:08:16.889009 kubelet[2407]: I1108 00:08:16.888394 2407 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:16.889471 kubelet[2407]: E1108 00:08:16.889418 2407 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://138.199.234.199:6443/api/v1/nodes\": dial tcp 138.199.234.199:6443: connect: connection refused" node="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:16.977848 containerd[1590]: time="2025-11-08T00:08:16.977749169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-3f5a11d2fe,Uid:b48b531cc340e4946358e39aa9a16365,Namespace:kube-system,Attempt:0,}" Nov 8 00:08:16.980982 containerd[1590]: time="2025-11-08T00:08:16.980701558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe,Uid:d8e2914269ad590a0ec87be731362e28,Namespace:kube-system,Attempt:0,}" Nov 8 00:08:16.983088 containerd[1590]: time="2025-11-08T00:08:16.982811282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-3f5a11d2fe,Uid:5a722946f6d98eca7ba448b846cf941d,Namespace:kube-system,Attempt:0,}" Nov 8 00:08:17.133980 kubelet[2407]: E1108 00:08:17.133886 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.234.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-3f5a11d2fe?timeout=10s\": dial tcp 138.199.234.199:6443: connect: connection refused" interval="800ms" Nov 8 00:08:17.292794 kubelet[2407]: I1108 00:08:17.292689 2407 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:17.294325 kubelet[2407]: E1108 00:08:17.294282 2407 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://138.199.234.199:6443/api/v1/nodes\": dial tcp 138.199.234.199:6443: connect: connection refused" node="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:17.434994 kubelet[2407]: W1108 00:08:17.434887 2407 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.234.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-3f5a11d2fe&limit=500&resourceVersion=0": dial tcp 138.199.234.199:6443: connect: 
connection refused Nov 8 00:08:17.434994 kubelet[2407]: E1108 00:08:17.434974 2407 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://138.199.234.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-3f5a11d2fe&limit=500&resourceVersion=0\": dial tcp 138.199.234.199:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:08:17.518166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1961854704.mount: Deactivated successfully. Nov 8 00:08:17.522541 containerd[1590]: time="2025-11-08T00:08:17.522463517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:08:17.524216 containerd[1590]: time="2025-11-08T00:08:17.524108121Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Nov 8 00:08:17.527283 containerd[1590]: time="2025-11-08T00:08:17.527220236Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:08:17.528492 containerd[1590]: time="2025-11-08T00:08:17.528376443Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:08:17.529835 containerd[1590]: time="2025-11-08T00:08:17.529359597Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:08:17.530750 containerd[1590]: time="2025-11-08T00:08:17.530716220Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:08:17.532167 containerd[1590]: time="2025-11-08T00:08:17.532041360Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:08:17.533168 containerd[1590]: time="2025-11-08T00:08:17.533070558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:08:17.534767 containerd[1590]: time="2025-11-08T00:08:17.534730483Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 551.839474ms" Nov 8 00:08:17.537737 containerd[1590]: time="2025-11-08T00:08:17.536134109Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 558.262531ms" Nov 8 00:08:17.537737 containerd[1590]: time="2025-11-08T00:08:17.537505933Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 556.731569ms" Nov 8 00:08:17.573421 kubelet[2407]: W1108 00:08:17.572638 2407 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://138.199.234.199:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 138.199.234.199:6443: connect: connection refused Nov 8 00:08:17.573421 kubelet[2407]: E1108 00:08:17.572702 2407 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://138.199.234.199:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 138.199.234.199:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:08:17.661741 containerd[1590]: time="2025-11-08T00:08:17.661427251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:08:17.661741 containerd[1590]: time="2025-11-08T00:08:17.661500656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:08:17.661741 containerd[1590]: time="2025-11-08T00:08:17.661526738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:08:17.661741 containerd[1590]: time="2025-11-08T00:08:17.661629266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:08:17.672820 containerd[1590]: time="2025-11-08T00:08:17.672159221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:08:17.673093 containerd[1590]: time="2025-11-08T00:08:17.672684061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:08:17.673093 containerd[1590]: time="2025-11-08T00:08:17.672868074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:08:17.673753 containerd[1590]: time="2025-11-08T00:08:17.673561807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:08:17.675309 containerd[1590]: time="2025-11-08T00:08:17.675131045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:08:17.675309 containerd[1590]: time="2025-11-08T00:08:17.675194810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:08:17.675309 containerd[1590]: time="2025-11-08T00:08:17.675206571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:08:17.675540 containerd[1590]: time="2025-11-08T00:08:17.675292218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:08:17.712803 kubelet[2407]: W1108 00:08:17.712269 2407 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.234.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.234.199:6443: connect: connection refused Nov 8 00:08:17.712803 kubelet[2407]: E1108 00:08:17.712643 2407 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://138.199.234.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 138.199.234.199:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:08:17.758557 containerd[1590]: time="2025-11-08T00:08:17.758384492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-3f5a11d2fe,Uid:b48b531cc340e4946358e39aa9a16365,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ed542f41d6a727f625e70c1932f8eeb61a30e945b539f81d8f7ea01e924f33f\"" Nov 8 00:08:17.764687 containerd[1590]: time="2025-11-08T00:08:17.764557598Z" level=info msg="CreateContainer within sandbox \"2ed542f41d6a727f625e70c1932f8eeb61a30e945b539f81d8f7ea01e924f33f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:08:17.772082 containerd[1590]: time="2025-11-08T00:08:17.772036563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-3f5a11d2fe,Uid:5a722946f6d98eca7ba448b846cf941d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a930222650e9866d7fe9ba427151a20a5f8eb13bec9878a4c12bbfb732701df\"" Nov 8 00:08:17.776906 containerd[1590]: time="2025-11-08T00:08:17.776837646Z" level=info msg="CreateContainer within sandbox \"4a930222650e9866d7fe9ba427151a20a5f8eb13bec9878a4c12bbfb732701df\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:08:17.782738 containerd[1590]: 
time="2025-11-08T00:08:17.782701569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe,Uid:d8e2914269ad590a0ec87be731362e28,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ad405189f461cb48538076de94ec7d30c1c6ce10464ec8950d94880f15c939b\"" Nov 8 00:08:17.784161 containerd[1590]: time="2025-11-08T00:08:17.784133597Z" level=info msg="CreateContainer within sandbox \"2ed542f41d6a727f625e70c1932f8eeb61a30e945b539f81d8f7ea01e924f33f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"515482f93aa2adbc035b8cd203516e4b9bb2cfe5e33a91f02894decaea29915b\"" Nov 8 00:08:17.785268 containerd[1590]: time="2025-11-08T00:08:17.785046506Z" level=info msg="StartContainer for \"515482f93aa2adbc035b8cd203516e4b9bb2cfe5e33a91f02894decaea29915b\"" Nov 8 00:08:17.786076 containerd[1590]: time="2025-11-08T00:08:17.786031300Z" level=info msg="CreateContainer within sandbox \"9ad405189f461cb48538076de94ec7d30c1c6ce10464ec8950d94880f15c939b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:08:17.806682 containerd[1590]: time="2025-11-08T00:08:17.806628855Z" level=info msg="CreateContainer within sandbox \"4a930222650e9866d7fe9ba427151a20a5f8eb13bec9878a4c12bbfb732701df\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cba45c54733e1cf323416c4deb02dd8b04efe37afaf6face9ff22479aee5b06b\"" Nov 8 00:08:17.810613 containerd[1590]: time="2025-11-08T00:08:17.809436988Z" level=info msg="StartContainer for \"cba45c54733e1cf323416c4deb02dd8b04efe37afaf6face9ff22479aee5b06b\"" Nov 8 00:08:17.812331 containerd[1590]: time="2025-11-08T00:08:17.812292603Z" level=info msg="CreateContainer within sandbox \"9ad405189f461cb48538076de94ec7d30c1c6ce10464ec8950d94880f15c939b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"22d499deb6b320e6e4cbc7fe11e59d60b381331629a2db14ab5b77c4d8b8ae2b\"" Nov 8 00:08:17.813558 
containerd[1590]: time="2025-11-08T00:08:17.813513335Z" level=info msg="StartContainer for \"22d499deb6b320e6e4cbc7fe11e59d60b381331629a2db14ab5b77c4d8b8ae2b\"" Nov 8 00:08:17.855861 kubelet[2407]: W1108 00:08:17.854020 2407 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.234.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.234.199:6443: connect: connection refused Nov 8 00:08:17.855861 kubelet[2407]: E1108 00:08:17.854156 2407 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://138.199.234.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 138.199.234.199:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:08:17.880980 containerd[1590]: time="2025-11-08T00:08:17.879649410Z" level=info msg="StartContainer for \"515482f93aa2adbc035b8cd203516e4b9bb2cfe5e33a91f02894decaea29915b\" returns successfully" Nov 8 00:08:17.938207 kubelet[2407]: E1108 00:08:17.935012 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.234.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-3f5a11d2fe?timeout=10s\": dial tcp 138.199.234.199:6443: connect: connection refused" interval="1.6s" Nov 8 00:08:17.952584 containerd[1590]: time="2025-11-08T00:08:17.952485950Z" level=info msg="StartContainer for \"22d499deb6b320e6e4cbc7fe11e59d60b381331629a2db14ab5b77c4d8b8ae2b\" returns successfully" Nov 8 00:08:17.966903 containerd[1590]: time="2025-11-08T00:08:17.966853915Z" level=info msg="StartContainer for \"cba45c54733e1cf323416c4deb02dd8b04efe37afaf6face9ff22479aee5b06b\" returns successfully" Nov 8 00:08:18.097796 kubelet[2407]: I1108 00:08:18.097757 2407 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-3f5a11d2fe" Nov 8 
00:08:18.594712 kubelet[2407]: E1108 00:08:18.594669 2407 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-3f5a11d2fe\" not found" node="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:18.598390 kubelet[2407]: E1108 00:08:18.598345 2407 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-3f5a11d2fe\" not found" node="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:18.604030 kubelet[2407]: E1108 00:08:18.603438 2407 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-3f5a11d2fe\" not found" node="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:19.605152 kubelet[2407]: E1108 00:08:19.605116 2407 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-3f5a11d2fe\" not found" node="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:19.606034 kubelet[2407]: E1108 00:08:19.605975 2407 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-3f5a11d2fe\" not found" node="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:20.568815 kubelet[2407]: E1108 00:08:20.568766 2407 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-3f5a11d2fe\" not found" node="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:20.732818 kubelet[2407]: I1108 00:08:20.732719 2407 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:20.828988 kubelet[2407]: I1108 00:08:20.828296 2407 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:20.847932 kubelet[2407]: E1108 00:08:20.847890 2407 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe\" is forbidden: no PriorityClass with 
name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:20.847932 kubelet[2407]: I1108 00:08:20.847929 2407 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:20.852065 kubelet[2407]: E1108 00:08:20.851936 2407 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-3f5a11d2fe\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:20.852135 kubelet[2407]: I1108 00:08:20.852069 2407 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:20.856933 kubelet[2407]: E1108 00:08:20.856894 2407 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-3f5a11d2fe\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:21.512093 kubelet[2407]: I1108 00:08:21.512046 2407 apiserver.go:52] "Watching apiserver" Nov 8 00:08:21.530638 kubelet[2407]: I1108 00:08:21.530421 2407 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:08:21.961625 kubelet[2407]: I1108 00:08:21.961355 2407 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:22.905371 systemd[1]: Reloading requested from client PID 2679 ('systemctl') (unit session-7.scope)... Nov 8 00:08:22.905390 systemd[1]: Reloading... Nov 8 00:08:22.997013 zram_generator::config[2722]: No configuration found. Nov 8 00:08:23.113514 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Nov 8 00:08:23.206151 systemd[1]: Reloading finished in 300 ms. Nov 8 00:08:23.243155 kubelet[2407]: I1108 00:08:23.242981 2407 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:08:23.243539 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:08:23.254300 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:08:23.254842 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:08:23.273592 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:08:23.427934 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:08:23.435387 (kubelet)[2774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:08:23.485327 kubelet[2774]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:08:23.485327 kubelet[2774]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:08:23.485327 kubelet[2774]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 8 00:08:23.485791 kubelet[2774]: I1108 00:08:23.485536 2774 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:08:23.496517 kubelet[2774]: I1108 00:08:23.496465 2774 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:08:23.496517 kubelet[2774]: I1108 00:08:23.496497 2774 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:08:23.502015 kubelet[2774]: I1108 00:08:23.501301 2774 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:08:23.504548 kubelet[2774]: I1108 00:08:23.504519 2774 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 8 00:08:23.508051 kubelet[2774]: I1108 00:08:23.508008 2774 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:08:23.513449 kubelet[2774]: E1108 00:08:23.513393 2774 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:08:23.513647 kubelet[2774]: I1108 00:08:23.513626 2774 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:08:23.517017 kubelet[2774]: I1108 00:08:23.516992 2774 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:08:23.518030 kubelet[2774]: I1108 00:08:23.517996 2774 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:08:23.518292 kubelet[2774]: I1108 00:08:23.518118 2774 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-3f5a11d2fe","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 8 00:08:23.519813 kubelet[2774]: I1108 00:08:23.518482 2774 topology_manager.go:138] "Creating topology manager with 
none policy" Nov 8 00:08:23.519813 kubelet[2774]: I1108 00:08:23.518501 2774 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:08:23.519813 kubelet[2774]: I1108 00:08:23.518549 2774 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:08:23.519813 kubelet[2774]: I1108 00:08:23.518769 2774 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:08:23.519813 kubelet[2774]: I1108 00:08:23.518790 2774 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:08:23.519813 kubelet[2774]: I1108 00:08:23.518831 2774 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:08:23.519813 kubelet[2774]: I1108 00:08:23.518843 2774 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:08:23.521596 kubelet[2774]: I1108 00:08:23.521570 2774 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:08:23.522130 kubelet[2774]: I1108 00:08:23.522107 2774 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:08:23.522580 kubelet[2774]: I1108 00:08:23.522559 2774 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:08:23.522626 kubelet[2774]: I1108 00:08:23.522599 2774 server.go:1287] "Started kubelet" Nov 8 00:08:23.534861 kubelet[2774]: I1108 00:08:23.534790 2774 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:08:23.540220 kubelet[2774]: I1108 00:08:23.540156 2774 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:08:23.543263 kubelet[2774]: I1108 00:08:23.543233 2774 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:08:23.547718 kubelet[2774]: I1108 00:08:23.547691 2774 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:08:23.548302 kubelet[2774]: E1108 00:08:23.548280 2774 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-3f5a11d2fe\" not found" Nov 8 00:08:23.549523 kubelet[2774]: I1108 00:08:23.549500 2774 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:08:23.549651 kubelet[2774]: I1108 00:08:23.549638 2774 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:08:23.550473 kubelet[2774]: I1108 00:08:23.550416 2774 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:08:23.550666 kubelet[2774]: I1108 00:08:23.550647 2774 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:08:23.559037 kubelet[2774]: I1108 00:08:23.559008 2774 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:08:23.567035 kubelet[2774]: I1108 00:08:23.566857 2774 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:08:23.568163 kubelet[2774]: I1108 00:08:23.568141 2774 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:08:23.568696 kubelet[2774]: I1108 00:08:23.568266 2774 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:08:23.568696 kubelet[2774]: I1108 00:08:23.568291 2774 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 8 00:08:23.568696 kubelet[2774]: I1108 00:08:23.568302 2774 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:08:23.568696 kubelet[2774]: E1108 00:08:23.568523 2774 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:08:23.574528 kubelet[2774]: I1108 00:08:23.574498 2774 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:08:23.575164 kubelet[2774]: I1108 00:08:23.575139 2774 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:08:23.580089 kubelet[2774]: E1108 00:08:23.579103 2774 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:08:23.581794 kubelet[2774]: I1108 00:08:23.581776 2774 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:08:23.644433 kubelet[2774]: I1108 00:08:23.644400 2774 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:08:23.644433 kubelet[2774]: I1108 00:08:23.644425 2774 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:08:23.644433 kubelet[2774]: I1108 00:08:23.644446 2774 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:08:23.644646 kubelet[2774]: I1108 00:08:23.644606 2774 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:08:23.644646 kubelet[2774]: I1108 00:08:23.644617 2774 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:08:23.644646 kubelet[2774]: I1108 00:08:23.644644 2774 policy_none.go:49] "None policy: Start" Nov 8 00:08:23.644750 kubelet[2774]: I1108 00:08:23.644652 2774 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:08:23.644750 kubelet[2774]: I1108 00:08:23.644662 2774 
state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:08:23.644849 kubelet[2774]: I1108 00:08:23.644751 2774 state_mem.go:75] "Updated machine memory state" Nov 8 00:08:23.646977 kubelet[2774]: I1108 00:08:23.646042 2774 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:08:23.646977 kubelet[2774]: I1108 00:08:23.646205 2774 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:08:23.646977 kubelet[2774]: I1108 00:08:23.646216 2774 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:08:23.647769 kubelet[2774]: I1108 00:08:23.647663 2774 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:08:23.649008 kubelet[2774]: E1108 00:08:23.648988 2774 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:08:23.669747 kubelet[2774]: I1108 00:08:23.669701 2774 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:23.670161 kubelet[2774]: I1108 00:08:23.670125 2774 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:23.670369 kubelet[2774]: I1108 00:08:23.669915 2774 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:23.686851 kubelet[2774]: E1108 00:08:23.686747 2774 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-3f5a11d2fe\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:23.751410 kubelet[2774]: I1108 00:08:23.751266 2774 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:23.768157 kubelet[2774]: I1108 00:08:23.767481 
2774 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:23.768157 kubelet[2774]: I1108 00:08:23.767567 2774 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:23.851044 kubelet[2774]: I1108 00:08:23.850940 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d8e2914269ad590a0ec87be731362e28-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe\" (UID: \"d8e2914269ad590a0ec87be731362e28\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:23.851601 kubelet[2774]: I1108 00:08:23.851358 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d8e2914269ad590a0ec87be731362e28-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe\" (UID: \"d8e2914269ad590a0ec87be731362e28\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:23.851601 kubelet[2774]: I1108 00:08:23.851410 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d8e2914269ad590a0ec87be731362e28-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe\" (UID: \"d8e2914269ad590a0ec87be731362e28\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:23.851601 kubelet[2774]: I1108 00:08:23.851491 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d8e2914269ad590a0ec87be731362e28-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe\" (UID: \"d8e2914269ad590a0ec87be731362e28\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:23.851601 kubelet[2774]: I1108 00:08:23.851551 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a722946f6d98eca7ba448b846cf941d-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-3f5a11d2fe\" (UID: \"5a722946f6d98eca7ba448b846cf941d\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:23.852228 kubelet[2774]: I1108 00:08:23.851928 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b48b531cc340e4946358e39aa9a16365-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-3f5a11d2fe\" (UID: \"b48b531cc340e4946358e39aa9a16365\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:23.852228 kubelet[2774]: I1108 00:08:23.852030 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b48b531cc340e4946358e39aa9a16365-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-3f5a11d2fe\" (UID: \"b48b531cc340e4946358e39aa9a16365\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:23.852228 kubelet[2774]: I1108 00:08:23.852099 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b48b531cc340e4946358e39aa9a16365-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-3f5a11d2fe\" (UID: \"b48b531cc340e4946358e39aa9a16365\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:23.852228 kubelet[2774]: I1108 00:08:23.852158 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d8e2914269ad590a0ec87be731362e28-ca-certs\") pod 
\"kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe\" (UID: \"d8e2914269ad590a0ec87be731362e28\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:24.525133 kubelet[2774]: I1108 00:08:24.525086 2774 apiserver.go:52] "Watching apiserver" Nov 8 00:08:24.550775 kubelet[2774]: I1108 00:08:24.550720 2774 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:08:24.616975 kubelet[2774]: I1108 00:08:24.614709 2774 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:24.617288 kubelet[2774]: I1108 00:08:24.617258 2774 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:24.633317 kubelet[2774]: E1108 00:08:24.633287 2774 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-3f5a11d2fe\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:24.633734 kubelet[2774]: E1108 00:08:24.633440 2774 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-3f5a11d2fe\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:08:24.658150 kubelet[2774]: I1108 00:08:24.658085 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3f5a11d2fe" podStartSLOduration=3.658050482 podStartE2EDuration="3.658050482s" podCreationTimestamp="2025-11-08 00:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:08:24.656232407 +0000 UTC m=+1.217174591" watchObservedRunningTime="2025-11-08 00:08:24.658050482 +0000 UTC m=+1.218992666" Nov 8 00:08:24.674261 kubelet[2774]: I1108 00:08:24.672768 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe" podStartSLOduration=1.6727521749999998 podStartE2EDuration="1.672752175s" podCreationTimestamp="2025-11-08 00:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:08:24.672502599 +0000 UTC m=+1.233444743" watchObservedRunningTime="2025-11-08 00:08:24.672752175 +0000 UTC m=+1.233694359" Nov 8 00:08:28.916171 kubelet[2774]: I1108 00:08:28.915837 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-3f5a11d2fe" podStartSLOduration=5.915813718 podStartE2EDuration="5.915813718s" podCreationTimestamp="2025-11-08 00:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:08:24.687293938 +0000 UTC m=+1.248236162" watchObservedRunningTime="2025-11-08 00:08:28.915813718 +0000 UTC m=+5.476755902" Nov 8 00:08:29.224834 kubelet[2774]: I1108 00:08:29.224782 2774 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:08:29.225860 containerd[1590]: time="2025-11-08T00:08:29.225818352Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 8 00:08:29.226228 kubelet[2774]: I1108 00:08:29.226083 2774 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:08:29.994809 kubelet[2774]: I1108 00:08:29.994629 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ecc7e026-a691-4eff-a230-e6cbcf67ef78-kube-proxy\") pod \"kube-proxy-vn557\" (UID: \"ecc7e026-a691-4eff-a230-e6cbcf67ef78\") " pod="kube-system/kube-proxy-vn557" Nov 8 00:08:29.994809 kubelet[2774]: I1108 00:08:29.994680 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecc7e026-a691-4eff-a230-e6cbcf67ef78-lib-modules\") pod \"kube-proxy-vn557\" (UID: \"ecc7e026-a691-4eff-a230-e6cbcf67ef78\") " pod="kube-system/kube-proxy-vn557" Nov 8 00:08:29.994809 kubelet[2774]: I1108 00:08:29.994699 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrt96\" (UniqueName: \"kubernetes.io/projected/ecc7e026-a691-4eff-a230-e6cbcf67ef78-kube-api-access-zrt96\") pod \"kube-proxy-vn557\" (UID: \"ecc7e026-a691-4eff-a230-e6cbcf67ef78\") " pod="kube-system/kube-proxy-vn557" Nov 8 00:08:29.994809 kubelet[2774]: I1108 00:08:29.994722 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecc7e026-a691-4eff-a230-e6cbcf67ef78-xtables-lock\") pod \"kube-proxy-vn557\" (UID: \"ecc7e026-a691-4eff-a230-e6cbcf67ef78\") " pod="kube-system/kube-proxy-vn557" Nov 8 00:08:30.108939 kubelet[2774]: E1108 00:08:30.108870 2774 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 8 00:08:30.108939 kubelet[2774]: E1108 00:08:30.108908 2774 projected.go:194] Error preparing data for projected volume kube-api-access-zrt96 
for pod kube-system/kube-proxy-vn557: configmap "kube-root-ca.crt" not found Nov 8 00:08:30.109118 kubelet[2774]: E1108 00:08:30.108991 2774 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ecc7e026-a691-4eff-a230-e6cbcf67ef78-kube-api-access-zrt96 podName:ecc7e026-a691-4eff-a230-e6cbcf67ef78 nodeName:}" failed. No retries permitted until 2025-11-08 00:08:30.608968907 +0000 UTC m=+7.169911091 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zrt96" (UniqueName: "kubernetes.io/projected/ecc7e026-a691-4eff-a230-e6cbcf67ef78-kube-api-access-zrt96") pod "kube-proxy-vn557" (UID: "ecc7e026-a691-4eff-a230-e6cbcf67ef78") : configmap "kube-root-ca.crt" not found Nov 8 00:08:30.362127 kubelet[2774]: I1108 00:08:30.361223 2774 status_manager.go:890] "Failed to get status for pod" podUID="c89d2e07-09a1-406e-a4e3-f5380e189f8f" pod="tigera-operator/tigera-operator-7dcd859c48-skkj7" err="pods \"tigera-operator-7dcd859c48-skkj7\" is forbidden: User \"system:node:ci-4081-3-6-n-3f5a11d2fe\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081-3-6-n-3f5a11d2fe' and this object" Nov 8 00:08:30.362127 kubelet[2774]: W1108 00:08:30.361262 2774 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4081-3-6-n-3f5a11d2fe" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081-3-6-n-3f5a11d2fe' and this object Nov 8 00:08:30.362127 kubelet[2774]: W1108 00:08:30.361307 2774 reflector.go:569] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-6-n-3f5a11d2fe" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no 
relationship found between node 'ci-4081-3-6-n-3f5a11d2fe' and this object Nov 8 00:08:30.362127 kubelet[2774]: E1108 00:08:30.361334 2774 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081-3-6-n-3f5a11d2fe\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081-3-6-n-3f5a11d2fe' and this object" logger="UnhandledError" Nov 8 00:08:30.362400 kubelet[2774]: E1108 00:08:30.361302 2774 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ci-4081-3-6-n-3f5a11d2fe\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081-3-6-n-3f5a11d2fe' and this object" logger="UnhandledError" Nov 8 00:08:30.398337 kubelet[2774]: I1108 00:08:30.398214 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7thtr\" (UniqueName: \"kubernetes.io/projected/c89d2e07-09a1-406e-a4e3-f5380e189f8f-kube-api-access-7thtr\") pod \"tigera-operator-7dcd859c48-skkj7\" (UID: \"c89d2e07-09a1-406e-a4e3-f5380e189f8f\") " pod="tigera-operator/tigera-operator-7dcd859c48-skkj7" Nov 8 00:08:30.398337 kubelet[2774]: I1108 00:08:30.398295 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c89d2e07-09a1-406e-a4e3-f5380e189f8f-var-lib-calico\") pod \"tigera-operator-7dcd859c48-skkj7\" (UID: \"c89d2e07-09a1-406e-a4e3-f5380e189f8f\") " pod="tigera-operator/tigera-operator-7dcd859c48-skkj7" Nov 8 00:08:30.830417 containerd[1590]: 
time="2025-11-08T00:08:30.830359714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vn557,Uid:ecc7e026-a691-4eff-a230-e6cbcf67ef78,Namespace:kube-system,Attempt:0,}" Nov 8 00:08:30.857588 containerd[1590]: time="2025-11-08T00:08:30.857020625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:08:30.857588 containerd[1590]: time="2025-11-08T00:08:30.857535534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:08:30.857588 containerd[1590]: time="2025-11-08T00:08:30.857549455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:08:30.861027 containerd[1590]: time="2025-11-08T00:08:30.857710064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:08:30.920116 containerd[1590]: time="2025-11-08T00:08:30.919999715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vn557,Uid:ecc7e026-a691-4eff-a230-e6cbcf67ef78,Namespace:kube-system,Attempt:0,} returns sandbox id \"34654e1e13c813d0c62d810887827ee95dc9aaf36c50c22a16916907fbf21dd9\"" Nov 8 00:08:30.923478 containerd[1590]: time="2025-11-08T00:08:30.923292501Z" level=info msg="CreateContainer within sandbox \"34654e1e13c813d0c62d810887827ee95dc9aaf36c50c22a16916907fbf21dd9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:08:30.943392 containerd[1590]: time="2025-11-08T00:08:30.942078366Z" level=info msg="CreateContainer within sandbox \"34654e1e13c813d0c62d810887827ee95dc9aaf36c50c22a16916907fbf21dd9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"035eacb3bb548dc9c015015d260e3f0393021facd2cf1b452822d1019441e2dc\"" Nov 8 00:08:30.943392 containerd[1590]: 
time="2025-11-08T00:08:30.943080023Z" level=info msg="StartContainer for \"035eacb3bb548dc9c015015d260e3f0393021facd2cf1b452822d1019441e2dc\"" Nov 8 00:08:31.005291 containerd[1590]: time="2025-11-08T00:08:31.005150937Z" level=info msg="StartContainer for \"035eacb3bb548dc9c015015d260e3f0393021facd2cf1b452822d1019441e2dc\" returns successfully" Nov 8 00:08:31.259860 containerd[1590]: time="2025-11-08T00:08:31.259324995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-skkj7,Uid:c89d2e07-09a1-406e-a4e3-f5380e189f8f,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:08:31.290791 containerd[1590]: time="2025-11-08T00:08:31.290525176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:08:31.290791 containerd[1590]: time="2025-11-08T00:08:31.290583019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:08:31.290791 containerd[1590]: time="2025-11-08T00:08:31.290594740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:08:31.290791 containerd[1590]: time="2025-11-08T00:08:31.290681785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:08:31.357604 containerd[1590]: time="2025-11-08T00:08:31.357331062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-skkj7,Uid:c89d2e07-09a1-406e-a4e3-f5380e189f8f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"69c8518138f8cbb681bfc963efe0cce0cb54711789fb185de7e04cd0e83922c5\"" Nov 8 00:08:31.361899 containerd[1590]: time="2025-11-08T00:08:31.361339046Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:08:31.650041 kubelet[2774]: I1108 00:08:31.649351 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vn557" podStartSLOduration=2.64932163 podStartE2EDuration="2.64932163s" podCreationTimestamp="2025-11-08 00:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:08:31.648649033 +0000 UTC m=+8.209591217" watchObservedRunningTime="2025-11-08 00:08:31.64932163 +0000 UTC m=+8.210263814" Nov 8 00:08:33.324931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2778037729.mount: Deactivated successfully. 
Nov 8 00:08:34.468564 containerd[1590]: time="2025-11-08T00:08:34.468511196Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:34.471169 containerd[1590]: time="2025-11-08T00:08:34.471136016Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 8 00:08:34.473017 containerd[1590]: time="2025-11-08T00:08:34.472981875Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:34.476760 containerd[1590]: time="2025-11-08T00:08:34.476663991Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:34.477824 containerd[1590]: time="2025-11-08T00:08:34.477785091Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 3.116394163s" Nov 8 00:08:34.477944 containerd[1590]: time="2025-11-08T00:08:34.477925899Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 8 00:08:34.482773 containerd[1590]: time="2025-11-08T00:08:34.482716115Z" level=info msg="CreateContainer within sandbox \"69c8518138f8cbb681bfc963efe0cce0cb54711789fb185de7e04cd0e83922c5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:08:34.504543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3943180045.mount: Deactivated successfully. 
Nov 8 00:08:34.511254 containerd[1590]: time="2025-11-08T00:08:34.509945249Z" level=info msg="CreateContainer within sandbox \"69c8518138f8cbb681bfc963efe0cce0cb54711789fb185de7e04cd0e83922c5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"65603dedac08728cfc8b6a0b132fb226db5977d4d463fc7df247816940ed1c0f\"" Nov 8 00:08:34.511766 containerd[1590]: time="2025-11-08T00:08:34.511696022Z" level=info msg="StartContainer for \"65603dedac08728cfc8b6a0b132fb226db5977d4d463fc7df247816940ed1c0f\"" Nov 8 00:08:34.573581 containerd[1590]: time="2025-11-08T00:08:34.573533646Z" level=info msg="StartContainer for \"65603dedac08728cfc8b6a0b132fb226db5977d4d463fc7df247816940ed1c0f\" returns successfully" Nov 8 00:08:34.663385 kubelet[2774]: I1108 00:08:34.663224 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-skkj7" podStartSLOduration=1.5434749700000001 podStartE2EDuration="4.663194875s" podCreationTimestamp="2025-11-08 00:08:30 +0000 UTC" firstStartedPulling="2025-11-08 00:08:31.359299772 +0000 UTC m=+7.920241956" lastFinishedPulling="2025-11-08 00:08:34.479019677 +0000 UTC m=+11.039961861" observedRunningTime="2025-11-08 00:08:34.661403659 +0000 UTC m=+11.222345883" watchObservedRunningTime="2025-11-08 00:08:34.663194875 +0000 UTC m=+11.224137099" Nov 8 00:08:41.011036 sudo[1902]: pam_unix(sudo:session): session closed for user root Nov 8 00:08:41.167795 sshd[1898]: pam_unix(sshd:session): session closed for user core Nov 8 00:08:41.175873 systemd[1]: sshd@6-138.199.234.199:22-139.178.68.195:43686.service: Deactivated successfully. Nov 8 00:08:41.184858 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:08:41.197382 systemd-logind[1565]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:08:41.200852 systemd-logind[1565]: Removed session 7. 
Nov 8 00:08:53.048827 kubelet[2774]: I1108 00:08:53.048596 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbtlj\" (UniqueName: \"kubernetes.io/projected/c914a40b-5c97-49a4-8aa4-6072ac34d767-kube-api-access-qbtlj\") pod \"calico-typha-594b898c65-6ff5k\" (UID: \"c914a40b-5c97-49a4-8aa4-6072ac34d767\") " pod="calico-system/calico-typha-594b898c65-6ff5k" Nov 8 00:08:53.048827 kubelet[2774]: I1108 00:08:53.048685 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c914a40b-5c97-49a4-8aa4-6072ac34d767-typha-certs\") pod \"calico-typha-594b898c65-6ff5k\" (UID: \"c914a40b-5c97-49a4-8aa4-6072ac34d767\") " pod="calico-system/calico-typha-594b898c65-6ff5k" Nov 8 00:08:53.050290 kubelet[2774]: I1108 00:08:53.049194 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c914a40b-5c97-49a4-8aa4-6072ac34d767-tigera-ca-bundle\") pod \"calico-typha-594b898c65-6ff5k\" (UID: \"c914a40b-5c97-49a4-8aa4-6072ac34d767\") " pod="calico-system/calico-typha-594b898c65-6ff5k" Nov 8 00:08:53.151989 kubelet[2774]: I1108 00:08:53.149566 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/061881e4-239e-4f0b-b702-af9373c99c72-flexvol-driver-host\") pod \"calico-node-vw56v\" (UID: \"061881e4-239e-4f0b-b702-af9373c99c72\") " pod="calico-system/calico-node-vw56v" Nov 8 00:08:53.151989 kubelet[2774]: I1108 00:08:53.149610 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/061881e4-239e-4f0b-b702-af9373c99c72-var-run-calico\") pod \"calico-node-vw56v\" (UID: \"061881e4-239e-4f0b-b702-af9373c99c72\") " 
pod="calico-system/calico-node-vw56v" Nov 8 00:08:53.151989 kubelet[2774]: I1108 00:08:53.149630 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/061881e4-239e-4f0b-b702-af9373c99c72-xtables-lock\") pod \"calico-node-vw56v\" (UID: \"061881e4-239e-4f0b-b702-af9373c99c72\") " pod="calico-system/calico-node-vw56v" Nov 8 00:08:53.151989 kubelet[2774]: I1108 00:08:53.149648 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/061881e4-239e-4f0b-b702-af9373c99c72-cni-bin-dir\") pod \"calico-node-vw56v\" (UID: \"061881e4-239e-4f0b-b702-af9373c99c72\") " pod="calico-system/calico-node-vw56v" Nov 8 00:08:53.151989 kubelet[2774]: I1108 00:08:53.149664 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/061881e4-239e-4f0b-b702-af9373c99c72-node-certs\") pod \"calico-node-vw56v\" (UID: \"061881e4-239e-4f0b-b702-af9373c99c72\") " pod="calico-system/calico-node-vw56v" Nov 8 00:08:53.152231 kubelet[2774]: I1108 00:08:53.149680 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/061881e4-239e-4f0b-b702-af9373c99c72-tigera-ca-bundle\") pod \"calico-node-vw56v\" (UID: \"061881e4-239e-4f0b-b702-af9373c99c72\") " pod="calico-system/calico-node-vw56v" Nov 8 00:08:53.152231 kubelet[2774]: I1108 00:08:53.149697 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/061881e4-239e-4f0b-b702-af9373c99c72-cni-net-dir\") pod \"calico-node-vw56v\" (UID: \"061881e4-239e-4f0b-b702-af9373c99c72\") " pod="calico-system/calico-node-vw56v" Nov 8 00:08:53.152231 kubelet[2774]: I1108 
00:08:53.149733 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/061881e4-239e-4f0b-b702-af9373c99c72-policysync\") pod \"calico-node-vw56v\" (UID: \"061881e4-239e-4f0b-b702-af9373c99c72\") " pod="calico-system/calico-node-vw56v" Nov 8 00:08:53.152231 kubelet[2774]: I1108 00:08:53.149748 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/061881e4-239e-4f0b-b702-af9373c99c72-lib-modules\") pod \"calico-node-vw56v\" (UID: \"061881e4-239e-4f0b-b702-af9373c99c72\") " pod="calico-system/calico-node-vw56v" Nov 8 00:08:53.152231 kubelet[2774]: I1108 00:08:53.149765 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/061881e4-239e-4f0b-b702-af9373c99c72-cni-log-dir\") pod \"calico-node-vw56v\" (UID: \"061881e4-239e-4f0b-b702-af9373c99c72\") " pod="calico-system/calico-node-vw56v" Nov 8 00:08:53.152337 kubelet[2774]: I1108 00:08:53.149780 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/061881e4-239e-4f0b-b702-af9373c99c72-var-lib-calico\") pod \"calico-node-vw56v\" (UID: \"061881e4-239e-4f0b-b702-af9373c99c72\") " pod="calico-system/calico-node-vw56v" Nov 8 00:08:53.152337 kubelet[2774]: I1108 00:08:53.149795 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcr89\" (UniqueName: \"kubernetes.io/projected/061881e4-239e-4f0b-b702-af9373c99c72-kube-api-access-dcr89\") pod \"calico-node-vw56v\" (UID: \"061881e4-239e-4f0b-b702-af9373c99c72\") " pod="calico-system/calico-node-vw56v" Nov 8 00:08:53.254262 kubelet[2774]: E1108 00:08:53.254211 2774 driver-call.go:262] Failed to unmarshal output for 
command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.254262 kubelet[2774]: W1108 00:08:53.254250 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.254430 kubelet[2774]: E1108 00:08:53.254306 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.254884 kubelet[2774]: E1108 00:08:53.254842 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.254884 kubelet[2774]: W1108 00:08:53.254865 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.254999 kubelet[2774]: E1108 00:08:53.254978 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.255151 kubelet[2774]: E1108 00:08:53.255136 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.255151 kubelet[2774]: W1108 00:08:53.255149 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.255283 kubelet[2774]: E1108 00:08:53.255227 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.255414 kubelet[2774]: E1108 00:08:53.255401 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.255414 kubelet[2774]: W1108 00:08:53.255413 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.256057 kubelet[2774]: E1108 00:08:53.256033 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.256142 kubelet[2774]: E1108 00:08:53.256127 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.256173 kubelet[2774]: W1108 00:08:53.256142 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.256324 kubelet[2774]: E1108 00:08:53.256249 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.256402 kubelet[2774]: E1108 00:08:53.256391 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.256402 kubelet[2774]: W1108 00:08:53.256401 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.256489 kubelet[2774]: E1108 00:08:53.256476 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.256777 kubelet[2774]: E1108 00:08:53.256757 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.256777 kubelet[2774]: W1108 00:08:53.256775 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.257683 kubelet[2774]: E1108 00:08:53.257646 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.257930 kubelet[2774]: E1108 00:08:53.257908 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.257930 kubelet[2774]: W1108 00:08:53.257927 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.258027 kubelet[2774]: E1108 00:08:53.257946 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.258587 kubelet[2774]: E1108 00:08:53.258482 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.258587 kubelet[2774]: W1108 00:08:53.258499 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.258853 kubelet[2774]: E1108 00:08:53.258834 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.259546 kubelet[2774]: E1108 00:08:53.259527 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.259546 kubelet[2774]: W1108 00:08:53.259546 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.260209 kubelet[2774]: E1108 00:08:53.260191 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.260523 kubelet[2774]: E1108 00:08:53.260507 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.260564 kubelet[2774]: W1108 00:08:53.260523 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.260771 kubelet[2774]: E1108 00:08:53.260756 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.263253 kubelet[2774]: E1108 00:08:53.263152 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.264019 kubelet[2774]: W1108 00:08:53.263510 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.264019 kubelet[2774]: E1108 00:08:53.263576 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.264305 kubelet[2774]: E1108 00:08:53.264078 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.264305 kubelet[2774]: W1108 00:08:53.264089 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.264305 kubelet[2774]: E1108 00:08:53.264116 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.265073 kubelet[2774]: E1108 00:08:53.265044 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.265073 kubelet[2774]: W1108 00:08:53.265063 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.265211 kubelet[2774]: E1108 00:08:53.265131 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.267937 kubelet[2774]: E1108 00:08:53.265989 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.267937 kubelet[2774]: W1108 00:08:53.266012 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.267937 kubelet[2774]: E1108 00:08:53.266255 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.267937 kubelet[2774]: W1108 00:08:53.266266 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.267937 kubelet[2774]: E1108 00:08:53.266451 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.267937 kubelet[2774]: W1108 00:08:53.266461 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Nov 8 00:08:53.267937 kubelet[2774]: E1108 00:08:53.266473 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.271941 kubelet[2774]: E1108 00:08:53.271906 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.272132 kubelet[2774]: E1108 00:08:53.272101 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.272132 kubelet[2774]: W1108 00:08:53.272128 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.272193 kubelet[2774]: E1108 00:08:53.272148 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.274002 kubelet[2774]: E1108 00:08:53.272116 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.274749 containerd[1590]: time="2025-11-08T00:08:53.274698140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-594b898c65-6ff5k,Uid:c914a40b-5c97-49a4-8aa4-6072ac34d767,Namespace:calico-system,Attempt:0,}" Nov 8 00:08:53.277987 kubelet[2774]: E1108 00:08:53.277292 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.277987 kubelet[2774]: W1108 00:08:53.277322 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.277987 kubelet[2774]: E1108 00:08:53.277345 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.335148 kubelet[2774]: E1108 00:08:53.333762 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6n7x9" podUID="57f11a43-3690-45d9-8837-b8df56bb1a07" Nov 8 00:08:53.339428 kubelet[2774]: E1108 00:08:53.338082 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.339855 kubelet[2774]: W1108 00:08:53.339380 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.339855 kubelet[2774]: E1108 00:08:53.339462 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.341055 kubelet[2774]: E1108 00:08:53.340927 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.341676 kubelet[2774]: W1108 00:08:53.341465 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.341927 kubelet[2774]: E1108 00:08:53.341898 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.343780 kubelet[2774]: E1108 00:08:53.343751 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.344089 kubelet[2774]: W1108 00:08:53.344061 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.344237 kubelet[2774]: E1108 00:08:53.344097 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.344877 kubelet[2774]: E1108 00:08:53.344794 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.344877 kubelet[2774]: W1108 00:08:53.344811 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.344877 kubelet[2774]: E1108 00:08:53.344827 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.345790 kubelet[2774]: E1108 00:08:53.345649 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.345790 kubelet[2774]: W1108 00:08:53.345670 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.345790 kubelet[2774]: E1108 00:08:53.345792 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.347195 kubelet[2774]: E1108 00:08:53.346247 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.347195 kubelet[2774]: W1108 00:08:53.346783 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.347195 kubelet[2774]: E1108 00:08:53.346803 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.347195 kubelet[2774]: E1108 00:08:53.347049 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.347195 kubelet[2774]: W1108 00:08:53.347059 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.347195 kubelet[2774]: E1108 00:08:53.347068 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.348324 kubelet[2774]: E1108 00:08:53.347575 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.348324 kubelet[2774]: W1108 00:08:53.347593 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.348324 kubelet[2774]: E1108 00:08:53.347605 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.348324 kubelet[2774]: E1108 00:08:53.348010 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.348324 kubelet[2774]: W1108 00:08:53.348021 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.348324 kubelet[2774]: E1108 00:08:53.348032 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.348324 kubelet[2774]: E1108 00:08:53.348200 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.348324 kubelet[2774]: W1108 00:08:53.348208 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.348324 kubelet[2774]: E1108 00:08:53.348217 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.350874 kubelet[2774]: E1108 00:08:53.348533 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.350874 kubelet[2774]: W1108 00:08:53.348544 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.350874 kubelet[2774]: E1108 00:08:53.348593 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.350874 kubelet[2774]: E1108 00:08:53.349255 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.350874 kubelet[2774]: W1108 00:08:53.349268 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.350874 kubelet[2774]: E1108 00:08:53.349280 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.351175 containerd[1590]: time="2025-11-08T00:08:53.349715422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:08:53.351175 containerd[1590]: time="2025-11-08T00:08:53.349935941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:08:53.351175 containerd[1590]: time="2025-11-08T00:08:53.350148739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:08:53.352486 kubelet[2774]: E1108 00:08:53.351949 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.352486 kubelet[2774]: W1108 00:08:53.352024 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.352486 kubelet[2774]: E1108 00:08:53.352052 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.353261 kubelet[2774]: E1108 00:08:53.353241 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.353661 kubelet[2774]: W1108 00:08:53.353269 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.353661 kubelet[2774]: E1108 00:08:53.353284 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.353949 kubelet[2774]: E1108 00:08:53.353782 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.353949 kubelet[2774]: W1108 00:08:53.353799 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.353949 kubelet[2774]: E1108 00:08:53.353809 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.354380 containerd[1590]: time="2025-11-08T00:08:53.354205549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:08:53.354491 kubelet[2774]: E1108 00:08:53.354474 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.354491 kubelet[2774]: W1108 00:08:53.354489 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.354616 kubelet[2774]: E1108 00:08:53.354501 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.355820 kubelet[2774]: E1108 00:08:53.354832 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.355820 kubelet[2774]: W1108 00:08:53.354847 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.355820 kubelet[2774]: E1108 00:08:53.354893 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.355820 kubelet[2774]: E1108 00:08:53.355119 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.355820 kubelet[2774]: W1108 00:08:53.355128 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.355820 kubelet[2774]: E1108 00:08:53.355137 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.355820 kubelet[2774]: E1108 00:08:53.355295 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.355820 kubelet[2774]: W1108 00:08:53.355304 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.355820 kubelet[2774]: E1108 00:08:53.355312 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.355820 kubelet[2774]: E1108 00:08:53.355462 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.356166 kubelet[2774]: W1108 00:08:53.355470 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.356166 kubelet[2774]: E1108 00:08:53.355478 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.357131 kubelet[2774]: E1108 00:08:53.357102 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.357131 kubelet[2774]: W1108 00:08:53.357122 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.357248 kubelet[2774]: E1108 00:08:53.357138 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.357248 kubelet[2774]: I1108 00:08:53.357170 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rttwg\" (UniqueName: \"kubernetes.io/projected/57f11a43-3690-45d9-8837-b8df56bb1a07-kube-api-access-rttwg\") pod \"csi-node-driver-6n7x9\" (UID: \"57f11a43-3690-45d9-8837-b8df56bb1a07\") " pod="calico-system/csi-node-driver-6n7x9" Nov 8 00:08:53.357649 kubelet[2774]: E1108 00:08:53.357629 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.357649 kubelet[2774]: W1108 00:08:53.357647 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.357724 kubelet[2774]: E1108 00:08:53.357666 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.357724 kubelet[2774]: I1108 00:08:53.357686 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/57f11a43-3690-45d9-8837-b8df56bb1a07-registration-dir\") pod \"csi-node-driver-6n7x9\" (UID: \"57f11a43-3690-45d9-8837-b8df56bb1a07\") " pod="calico-system/csi-node-driver-6n7x9" Nov 8 00:08:53.358084 kubelet[2774]: E1108 00:08:53.357993 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.358084 kubelet[2774]: W1108 00:08:53.358014 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.358696 kubelet[2774]: E1108 00:08:53.358485 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.359109 kubelet[2774]: W1108 00:08:53.358501 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.359109 kubelet[2774]: E1108 00:08:53.358795 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.359314 kubelet[2774]: E1108 00:08:53.359285 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.359478 kubelet[2774]: I1108 00:08:53.359323 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/57f11a43-3690-45d9-8837-b8df56bb1a07-varrun\") pod \"csi-node-driver-6n7x9\" (UID: \"57f11a43-3690-45d9-8837-b8df56bb1a07\") " pod="calico-system/csi-node-driver-6n7x9" Nov 8 00:08:53.359632 kubelet[2774]: E1108 00:08:53.359589 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.359632 kubelet[2774]: W1108 00:08:53.359617 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.359809 kubelet[2774]: E1108 00:08:53.359795 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.367727 kubelet[2774]: E1108 00:08:53.366762 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.367727 kubelet[2774]: W1108 00:08:53.367338 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.367727 kubelet[2774]: E1108 00:08:53.367379 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.368794 kubelet[2774]: E1108 00:08:53.368632 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.368794 kubelet[2774]: W1108 00:08:53.368666 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.368794 kubelet[2774]: E1108 00:08:53.368717 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.369325 kubelet[2774]: E1108 00:08:53.369311 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.369444 kubelet[2774]: W1108 00:08:53.369384 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.369444 kubelet[2774]: E1108 00:08:53.369432 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.370465 kubelet[2774]: E1108 00:08:53.370338 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.370465 kubelet[2774]: W1108 00:08:53.370355 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.370465 kubelet[2774]: E1108 00:08:53.370400 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.370465 kubelet[2774]: I1108 00:08:53.370431 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/57f11a43-3690-45d9-8837-b8df56bb1a07-kubelet-dir\") pod \"csi-node-driver-6n7x9\" (UID: \"57f11a43-3690-45d9-8837-b8df56bb1a07\") " pod="calico-system/csi-node-driver-6n7x9" Nov 8 00:08:53.371044 kubelet[2774]: E1108 00:08:53.370924 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.371044 kubelet[2774]: W1108 00:08:53.370937 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.371044 kubelet[2774]: E1108 00:08:53.370949 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.371745 kubelet[2774]: E1108 00:08:53.371727 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.371892 kubelet[2774]: W1108 00:08:53.371807 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.371892 kubelet[2774]: E1108 00:08:53.371833 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.372236 kubelet[2774]: E1108 00:08:53.372210 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.372236 kubelet[2774]: W1108 00:08:53.372223 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.372465 kubelet[2774]: E1108 00:08:53.372352 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.373081 kubelet[2774]: E1108 00:08:53.373062 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.373794 kubelet[2774]: W1108 00:08:53.373599 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.373794 kubelet[2774]: E1108 00:08:53.373630 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.373794 kubelet[2774]: I1108 00:08:53.373658 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/57f11a43-3690-45d9-8837-b8df56bb1a07-socket-dir\") pod \"csi-node-driver-6n7x9\" (UID: \"57f11a43-3690-45d9-8837-b8df56bb1a07\") " pod="calico-system/csi-node-driver-6n7x9" Nov 8 00:08:53.374039 kubelet[2774]: E1108 00:08:53.374024 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.374212 kubelet[2774]: W1108 00:08:53.374113 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.374212 kubelet[2774]: E1108 00:08:53.374130 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.374467 kubelet[2774]: E1108 00:08:53.374396 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.374467 kubelet[2774]: W1108 00:08:53.374406 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.374467 kubelet[2774]: E1108 00:08:53.374419 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.443515 containerd[1590]: time="2025-11-08T00:08:53.443414805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vw56v,Uid:061881e4-239e-4f0b-b702-af9373c99c72,Namespace:calico-system,Attempt:0,}" Nov 8 00:08:53.445867 containerd[1590]: time="2025-11-08T00:08:53.445818147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-594b898c65-6ff5k,Uid:c914a40b-5c97-49a4-8aa4-6072ac34d767,Namespace:calico-system,Attempt:0,} returns sandbox id \"814ea2ad0931e58d6d33f1e5e781f06c18e2ea76659098ac6a0c3c15a10a236b\"" Nov 8 00:08:53.454913 containerd[1590]: time="2025-11-08T00:08:53.454613122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 8 00:08:53.475093 kubelet[2774]: E1108 00:08:53.474938 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.475093 kubelet[2774]: W1108 00:08:53.475087 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.475269 kubelet[2774]: E1108 00:08:53.475112 2774 plugins.go:695] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.475623 kubelet[2774]: E1108 00:08:53.475584 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.475623 kubelet[2774]: W1108 00:08:53.475611 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.475623 kubelet[2774]: E1108 00:08:53.475629 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.476277 kubelet[2774]: E1108 00:08:53.476006 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.476277 kubelet[2774]: W1108 00:08:53.476024 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.476277 kubelet[2774]: E1108 00:08:53.476055 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.476400 kubelet[2774]: E1108 00:08:53.476297 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.476400 kubelet[2774]: W1108 00:08:53.476306 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.476400 kubelet[2774]: E1108 00:08:53.476344 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.476643 kubelet[2774]: E1108 00:08:53.476624 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.476643 kubelet[2774]: W1108 00:08:53.476638 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.476718 kubelet[2774]: E1108 00:08:53.476658 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.476927 kubelet[2774]: E1108 00:08:53.476888 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.476927 kubelet[2774]: W1108 00:08:53.476921 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.477026 kubelet[2774]: E1108 00:08:53.476938 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.477237 kubelet[2774]: E1108 00:08:53.477190 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.477237 kubelet[2774]: W1108 00:08:53.477228 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.477313 kubelet[2774]: E1108 00:08:53.477245 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.478023 kubelet[2774]: E1108 00:08:53.477484 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.478023 kubelet[2774]: W1108 00:08:53.477500 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.478023 kubelet[2774]: E1108 00:08:53.477539 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.478023 kubelet[2774]: E1108 00:08:53.477794 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.478023 kubelet[2774]: W1108 00:08:53.477804 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.478023 kubelet[2774]: E1108 00:08:53.477901 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.478225 kubelet[2774]: E1108 00:08:53.478144 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.478225 kubelet[2774]: W1108 00:08:53.478198 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.478282 kubelet[2774]: E1108 00:08:53.478264 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.478478 kubelet[2774]: E1108 00:08:53.478450 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.478522 kubelet[2774]: W1108 00:08:53.478479 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.478746 kubelet[2774]: E1108 00:08:53.478573 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.478746 kubelet[2774]: E1108 00:08:53.478738 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.478806 kubelet[2774]: W1108 00:08:53.478747 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.478827 kubelet[2774]: E1108 00:08:53.478820 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.480031 kubelet[2774]: E1108 00:08:53.480003 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.480031 kubelet[2774]: W1108 00:08:53.480023 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.480785 kubelet[2774]: E1108 00:08:53.480753 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.481039 kubelet[2774]: E1108 00:08:53.481017 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.481039 kubelet[2774]: W1108 00:08:53.481032 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.481214 kubelet[2774]: E1108 00:08:53.481189 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.481530 kubelet[2774]: E1108 00:08:53.481504 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.481530 kubelet[2774]: W1108 00:08:53.481520 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.481735 kubelet[2774]: E1108 00:08:53.481712 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.486457 kubelet[2774]: E1108 00:08:53.483590 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.486457 kubelet[2774]: W1108 00:08:53.483611 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.486457 kubelet[2774]: E1108 00:08:53.483835 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.486457 kubelet[2774]: W1108 00:08:53.483844 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.486457 kubelet[2774]: E1108 00:08:53.483870 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.486457 kubelet[2774]: E1108 00:08:53.484024 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.486457 kubelet[2774]: E1108 00:08:53.484193 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.486457 kubelet[2774]: W1108 00:08:53.484203 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.486457 kubelet[2774]: E1108 00:08:53.484220 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.486457 kubelet[2774]: E1108 00:08:53.484461 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.486792 kubelet[2774]: W1108 00:08:53.484471 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.486792 kubelet[2774]: E1108 00:08:53.484487 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.486792 kubelet[2774]: E1108 00:08:53.484841 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.486792 kubelet[2774]: W1108 00:08:53.484852 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.486792 kubelet[2774]: E1108 00:08:53.485215 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.486792 kubelet[2774]: W1108 00:08:53.485224 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.486792 kubelet[2774]: E1108 00:08:53.485234 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.486792 kubelet[2774]: E1108 00:08:53.484949 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.486792 kubelet[2774]: E1108 00:08:53.485844 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.486792 kubelet[2774]: W1108 00:08:53.485853 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.490781 kubelet[2774]: E1108 00:08:53.485968 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.490781 kubelet[2774]: E1108 00:08:53.486355 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.490781 kubelet[2774]: W1108 00:08:53.486365 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.490781 kubelet[2774]: E1108 00:08:53.486375 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:08:53.490781 kubelet[2774]: E1108 00:08:53.486786 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.490781 kubelet[2774]: W1108 00:08:53.486797 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.490781 kubelet[2774]: E1108 00:08:53.486975 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.490781 kubelet[2774]: E1108 00:08:53.487670 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.490781 kubelet[2774]: W1108 00:08:53.487683 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.490781 kubelet[2774]: E1108 00:08:53.487694 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.497890 containerd[1590]: time="2025-11-08T00:08:53.497779281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:08:53.498050 containerd[1590]: time="2025-11-08T00:08:53.497862200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:08:53.498050 containerd[1590]: time="2025-11-08T00:08:53.497878520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:08:53.499775 kubelet[2774]: E1108 00:08:53.499745 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:53.499775 kubelet[2774]: W1108 00:08:53.499771 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:08:53.500032 kubelet[2774]: E1108 00:08:53.499795 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:08:53.500116 containerd[1590]: time="2025-11-08T00:08:53.499917745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:08:53.555127 containerd[1590]: time="2025-11-08T00:08:53.555089255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vw56v,Uid:061881e4-239e-4f0b-b702-af9373c99c72,Namespace:calico-system,Attempt:0,} returns sandbox id \"639976c15f0cd3c36a4e084c78eef143b04e32b130a50441b9e11e48a37e3985\"" Nov 8 00:08:54.569653 kubelet[2774]: E1108 00:08:54.569563 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6n7x9" podUID="57f11a43-3690-45d9-8837-b8df56bb1a07" Nov 8 00:08:55.011362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3906998585.mount: Deactivated successfully. 
Nov 8 00:08:55.881247 containerd[1590]: time="2025-11-08T00:08:55.880479101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:55.881898 containerd[1590]: time="2025-11-08T00:08:55.881852174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Nov 8 00:08:55.882611 containerd[1590]: time="2025-11-08T00:08:55.882456452Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:55.885524 containerd[1590]: time="2025-11-08T00:08:55.885214638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:55.886072 containerd[1590]: time="2025-11-08T00:08:55.886036674Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.431371872s" Nov 8 00:08:55.886127 containerd[1590]: time="2025-11-08T00:08:55.886072314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 8 00:08:55.888792 containerd[1590]: time="2025-11-08T00:08:55.888325223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:08:55.903356 containerd[1590]: time="2025-11-08T00:08:55.903183872Z" level=info msg="CreateContainer within sandbox \"814ea2ad0931e58d6d33f1e5e781f06c18e2ea76659098ac6a0c3c15a10a236b\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 00:08:55.923489 containerd[1590]: time="2025-11-08T00:08:55.923260536Z" level=info msg="CreateContainer within sandbox \"814ea2ad0931e58d6d33f1e5e781f06c18e2ea76659098ac6a0c3c15a10a236b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c7f6d963b5dacde1e543bf7be7f87cf73d93579bb23d7ebec6bc94bb2e51bc23\"" Nov 8 00:08:55.927376 containerd[1590]: time="2025-11-08T00:08:55.927075997Z" level=info msg="StartContainer for \"c7f6d963b5dacde1e543bf7be7f87cf73d93579bb23d7ebec6bc94bb2e51bc23\"" Nov 8 00:08:55.996794 containerd[1590]: time="2025-11-08T00:08:55.996690463Z" level=info msg="StartContainer for \"c7f6d963b5dacde1e543bf7be7f87cf73d93579bb23d7ebec6bc94bb2e51bc23\" returns successfully" Nov 8 00:08:56.570015 kubelet[2774]: E1108 00:08:56.568858 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6n7x9" podUID="57f11a43-3690-45d9-8837-b8df56bb1a07" Nov 8 00:08:56.730866 kubelet[2774]: I1108 00:08:56.729079 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-594b898c65-6ff5k" podStartSLOduration=2.2939751360000002 podStartE2EDuration="4.729055744s" podCreationTimestamp="2025-11-08 00:08:52 +0000 UTC" firstStartedPulling="2025-11-08 00:08:53.452399419 +0000 UTC m=+30.013341603" lastFinishedPulling="2025-11-08 00:08:55.887480027 +0000 UTC m=+32.448422211" observedRunningTime="2025-11-08 00:08:56.727658429 +0000 UTC m=+33.288600733" watchObservedRunningTime="2025-11-08 00:08:56.729055744 +0000 UTC m=+33.289997928" Nov 8 00:08:56.780575 kubelet[2774]: E1108 00:08:56.780185 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:08:56.780575 kubelet[2774]: 
W1108 00:08:56.780235 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:08:56.780575 kubelet[2774]: E1108 00:08:56.780266 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:08:57.323081 containerd[1590]: time="2025-11-08T00:08:57.322988473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:08:57.327846 containerd[1590]: time="2025-11-08T00:08:57.324551909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741"
Nov 8 00:08:57.327846 containerd[1590]: time="2025-11-08T00:08:57.325680786Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:08:57.328593 containerd[1590]: time="2025-11-08T00:08:57.328526100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:08:57.330462 containerd[1590]: time="2025-11-08T00:08:57.330316136Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.4404084s"
Nov 8 00:08:57.330462 containerd[1590]: time="2025-11-08T00:08:57.330357936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\""
Nov 8 00:08:57.333630 containerd[1590]: time="2025-11-08T00:08:57.333588088Z" level=info msg="CreateContainer within sandbox \"639976c15f0cd3c36a4e084c78eef143b04e32b130a50441b9e11e48a37e3985\" for container
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 8 00:08:57.351572 containerd[1590]: time="2025-11-08T00:08:57.351503566Z" level=info msg="CreateContainer within sandbox \"639976c15f0cd3c36a4e084c78eef143b04e32b130a50441b9e11e48a37e3985\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1f42fbaac0954627f86b40e20492066d6cf0d64c611d65f18092069fd25e2b0f\""
Nov 8 00:08:57.353176 containerd[1590]: time="2025-11-08T00:08:57.353021963Z" level=info msg="StartContainer for \"1f42fbaac0954627f86b40e20492066d6cf0d64c611d65f18092069fd25e2b0f\""
Nov 8 00:08:57.420339 containerd[1590]: time="2025-11-08T00:08:57.420215167Z" level=info msg="StartContainer for \"1f42fbaac0954627f86b40e20492066d6cf0d64c611d65f18092069fd25e2b0f\" returns successfully"
Nov 8 00:08:57.469136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f42fbaac0954627f86b40e20492066d6cf0d64c611d65f18092069fd25e2b0f-rootfs.mount: Deactivated successfully.
Nov 8 00:08:57.577228 containerd[1590]: time="2025-11-08T00:08:57.577079322Z" level=info msg="shim disconnected" id=1f42fbaac0954627f86b40e20492066d6cf0d64c611d65f18092069fd25e2b0f namespace=k8s.io
Nov 8 00:08:57.577228 containerd[1590]: time="2025-11-08T00:08:57.577139242Z" level=warning msg="cleaning up after shim disconnected" id=1f42fbaac0954627f86b40e20492066d6cf0d64c611d65f18092069fd25e2b0f namespace=k8s.io
Nov 8 00:08:57.577228 containerd[1590]: time="2025-11-08T00:08:57.577150322Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:08:57.718089 kubelet[2774]: I1108 00:08:57.717135 2774 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 8 00:08:57.721307 containerd[1590]: time="2025-11-08T00:08:57.721260227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Nov 8 00:08:58.569241 kubelet[2774]: E1108 00:08:58.569161 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6n7x9" podUID="57f11a43-3690-45d9-8837-b8df56bb1a07"
Nov 8 00:09:00.511581 containerd[1590]: time="2025-11-08T00:09:00.511508673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:09:00.513042 containerd[1590]: time="2025-11-08T00:09:00.512997794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816"
Nov 8 00:09:00.516225 containerd[1590]: time="2025-11-08T00:09:00.514343476Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:09:00.518321 containerd[1590]: time="2025-11-08T00:09:00.518286720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:09:00.519604 containerd[1590]: time="2025-11-08T00:09:00.519529321Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.798119294s"
Nov 8 00:09:00.519604 containerd[1590]: time="2025-11-08T00:09:00.519584242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\""
Nov 8 00:09:00.524702 containerd[1590]: time="2025-11-08T00:09:00.524658247Z" level=info msg="CreateContainer within sandbox \"639976c15f0cd3c36a4e084c78eef143b04e32b130a50441b9e11e48a37e3985\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 8 00:09:00.542886 containerd[1590]: time="2025-11-08T00:09:00.542625547Z" level=info msg="CreateContainer within sandbox \"639976c15f0cd3c36a4e084c78eef143b04e32b130a50441b9e11e48a37e3985\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"aec43041e72a23b9867873e456a33828e3d31fcfd42006a1bd9c717f36741a39\""
Nov 8 00:09:00.543537 containerd[1590]: time="2025-11-08T00:09:00.543504188Z" level=info msg="StartContainer for \"aec43041e72a23b9867873e456a33828e3d31fcfd42006a1bd9c717f36741a39\""
Nov 8 00:09:00.569881 kubelet[2774]: E1108 00:09:00.569506 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6n7x9" podUID="57f11a43-3690-45d9-8837-b8df56bb1a07"
Nov 8 00:09:00.614643 containerd[1590]: time="2025-11-08T00:09:00.614526907Z" level=info msg="StartContainer for \"aec43041e72a23b9867873e456a33828e3d31fcfd42006a1bd9c717f36741a39\" returns successfully"
Nov 8 00:09:01.174173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aec43041e72a23b9867873e456a33828e3d31fcfd42006a1bd9c717f36741a39-rootfs.mount: Deactivated successfully.
Nov 8 00:09:01.197644 kubelet[2774]: I1108 00:09:01.197560 2774 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 8 00:09:01.256920 containerd[1590]: time="2025-11-08T00:09:01.256688532Z" level=info msg="shim disconnected" id=aec43041e72a23b9867873e456a33828e3d31fcfd42006a1bd9c717f36741a39 namespace=k8s.io
Nov 8 00:09:01.256920 containerd[1590]: time="2025-11-08T00:09:01.256744372Z" level=warning msg="cleaning up after shim disconnected" id=aec43041e72a23b9867873e456a33828e3d31fcfd42006a1bd9c717f36741a39 namespace=k8s.io
Nov 8 00:09:01.256920 containerd[1590]: time="2025-11-08T00:09:01.256752732Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:09:01.336126 kubelet[2774]: I1108 00:09:01.335694 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/02c3f902-7bf4-4824-923c-48ba4e1e389c-calico-apiserver-certs\") pod \"calico-apiserver-77bf6dfcdd-hptwz\" (UID: \"02c3f902-7bf4-4824-923c-48ba4e1e389c\") " pod="calico-apiserver/calico-apiserver-77bf6dfcdd-hptwz"
Nov 8 00:09:01.336126 kubelet[2774]: I1108 00:09:01.335740 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72f7d776-6bd7-4d33-8b73-a5febd833bf0-goldmane-ca-bundle\") pod \"goldmane-666569f655-l7kch\" (UID: \"72f7d776-6bd7-4d33-8b73-a5febd833bf0\") " pod="calico-system/goldmane-666569f655-l7kch"
Nov 8 00:09:01.336126 kubelet[2774]: I1108 00:09:01.335762 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7ec2eb36-2470-4386-96a3-fe6dd8fc602f-calico-apiserver-certs\") pod \"calico-apiserver-78c5874598-gtq72\" (UID: \"7ec2eb36-2470-4386-96a3-fe6dd8fc602f\") " pod="calico-apiserver/calico-apiserver-78c5874598-gtq72"
Nov 8 00:09:01.336126 kubelet[2774]: I1108 00:09:01.335784 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29lz6\" (UniqueName: \"kubernetes.io/projected/7ec2eb36-2470-4386-96a3-fe6dd8fc602f-kube-api-access-29lz6\") pod \"calico-apiserver-78c5874598-gtq72\" (UID: \"7ec2eb36-2470-4386-96a3-fe6dd8fc602f\") " pod="calico-apiserver/calico-apiserver-78c5874598-gtq72"
Nov 8 00:09:01.336126 kubelet[2774]: I1108 00:09:01.335906 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ffefce46-3638-4c95-bed3-200605f5f8d9-calico-apiserver-certs\") pod \"calico-apiserver-78c5874598-mj8jx\" (UID: \"ffefce46-3638-4c95-bed3-200605f5f8d9\") " pod="calico-apiserver/calico-apiserver-78c5874598-mj8jx"
Nov 8 00:09:01.336398 kubelet[2774]: I1108 00:09:01.335930 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhdmc\" (UniqueName: \"kubernetes.io/projected/ffefce46-3638-4c95-bed3-200605f5f8d9-kube-api-access-lhdmc\") pod \"calico-apiserver-78c5874598-mj8jx\" (UID: \"ffefce46-3638-4c95-bed3-200605f5f8d9\") " pod="calico-apiserver/calico-apiserver-78c5874598-mj8jx"
Nov 8 00:09:01.337475 kubelet[2774]: I1108 00:09:01.335948 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/72f7d776-6bd7-4d33-8b73-a5febd833bf0-goldmane-key-pair\") pod \"goldmane-666569f655-l7kch\" (UID: \"72f7d776-6bd7-4d33-8b73-a5febd833bf0\") " pod="calico-system/goldmane-666569f655-l7kch"
Nov 8 00:09:01.337475 kubelet[2774]: I1108 00:09:01.337101 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c7626e06-780f-4455-a129-cde6e36c9bf2-whisker-backend-key-pair\") pod \"whisker-68f9d864b5-mqg6n\" (UID: \"c7626e06-780f-4455-a129-cde6e36c9bf2\") " pod="calico-system/whisker-68f9d864b5-mqg6n"
Nov 8 00:09:01.338253 kubelet[2774]: I1108 00:09:01.337742 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9506755c-a0f4-47f2-b269-7090f44df783-config-volume\") pod \"coredns-668d6bf9bc-tcqxl\" (UID: \"9506755c-a0f4-47f2-b269-7090f44df783\") " pod="kube-system/coredns-668d6bf9bc-tcqxl"
Nov 8 00:09:01.350980 kubelet[2774]: I1108 00:09:01.337777 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72f7d776-6bd7-4d33-8b73-a5febd833bf0-config\") pod \"goldmane-666569f655-l7kch\" (UID: \"72f7d776-6bd7-4d33-8b73-a5febd833bf0\") " pod="calico-system/goldmane-666569f655-l7kch"
Nov 8 00:09:01.350980 kubelet[2774]: I1108 00:09:01.350345 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwxxs\" (UniqueName: \"kubernetes.io/projected/72f7d776-6bd7-4d33-8b73-a5febd833bf0-kube-api-access-fwxxs\") pod \"goldmane-666569f655-l7kch\" (UID: \"72f7d776-6bd7-4d33-8b73-a5febd833bf0\") " pod="calico-system/goldmane-666569f655-l7kch"
Nov 8 00:09:01.350980 kubelet[2774]: I1108 00:09:01.350370 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7626e06-780f-4455-a129-cde6e36c9bf2-whisker-ca-bundle\") pod \"whisker-68f9d864b5-mqg6n\" (UID: \"c7626e06-780f-4455-a129-cde6e36c9bf2\") " pod="calico-system/whisker-68f9d864b5-mqg6n"
Nov 8 00:09:01.350980 kubelet[2774]: I1108 00:09:01.350391 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94xpb\" (UniqueName: \"kubernetes.io/projected/02c3f902-7bf4-4824-923c-48ba4e1e389c-kube-api-access-94xpb\") pod \"calico-apiserver-77bf6dfcdd-hptwz\" (UID: \"02c3f902-7bf4-4824-923c-48ba4e1e389c\") " pod="calico-apiserver/calico-apiserver-77bf6dfcdd-hptwz"
Nov 8 00:09:01.350980 kubelet[2774]: I1108 00:09:01.350421 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87gqc\" (UniqueName: \"kubernetes.io/projected/c7626e06-780f-4455-a129-cde6e36c9bf2-kube-api-access-87gqc\") pod \"whisker-68f9d864b5-mqg6n\" (UID: \"c7626e06-780f-4455-a129-cde6e36c9bf2\") " pod="calico-system/whisker-68f9d864b5-mqg6n"
Nov 8 00:09:01.351318 kubelet[2774]: I1108 00:09:01.350438 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkt76\" (UniqueName: \"kubernetes.io/projected/9506755c-a0f4-47f2-b269-7090f44df783-kube-api-access-jkt76\") pod \"coredns-668d6bf9bc-tcqxl\" (UID: \"9506755c-a0f4-47f2-b269-7090f44df783\") " pod="kube-system/coredns-668d6bf9bc-tcqxl"
Nov 8 00:09:01.450886 kubelet[2774]: I1108 00:09:01.450818 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c488e78-d3ba-4197-ab37-75734ccb9129-tigera-ca-bundle\") pod \"calico-kube-controllers-54b5cccc46-bcg68\" (UID: \"5c488e78-d3ba-4197-ab37-75734ccb9129\") " pod="calico-system/calico-kube-controllers-54b5cccc46-bcg68"
Nov 8 00:09:01.450886 kubelet[2774]: I1108 00:09:01.450895 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pghwl\" (UniqueName: \"kubernetes.io/projected/5c488e78-d3ba-4197-ab37-75734ccb9129-kube-api-access-pghwl\") pod \"calico-kube-controllers-54b5cccc46-bcg68\" (UID: \"5c488e78-d3ba-4197-ab37-75734ccb9129\") " pod="calico-system/calico-kube-controllers-54b5cccc46-bcg68"
Nov 8 00:09:01.451106 kubelet[2774]: I1108
00:09:01.450986 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1e70c0c-dd7e-4d32-9c6b-cbad678c0ea8-config-volume\") pod \"coredns-668d6bf9bc-fsp2l\" (UID: \"e1e70c0c-dd7e-4d32-9c6b-cbad678c0ea8\") " pod="kube-system/coredns-668d6bf9bc-fsp2l" Nov 8 00:09:01.451106 kubelet[2774]: I1108 00:09:01.451046 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftz8n\" (UniqueName: \"kubernetes.io/projected/e1e70c0c-dd7e-4d32-9c6b-cbad678c0ea8-kube-api-access-ftz8n\") pod \"coredns-668d6bf9bc-fsp2l\" (UID: \"e1e70c0c-dd7e-4d32-9c6b-cbad678c0ea8\") " pod="kube-system/coredns-668d6bf9bc-fsp2l" Nov 8 00:09:01.618193 containerd[1590]: time="2025-11-08T00:09:01.617941959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c5874598-mj8jx,Uid:ffefce46-3638-4c95-bed3-200605f5f8d9,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:09:01.631867 containerd[1590]: time="2025-11-08T00:09:01.631513508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c5874598-gtq72,Uid:7ec2eb36-2470-4386-96a3-fe6dd8fc602f,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:09:01.667578 containerd[1590]: time="2025-11-08T00:09:01.666738745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bf6dfcdd-hptwz,Uid:02c3f902-7bf4-4824-923c-48ba4e1e389c,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:09:01.668379 containerd[1590]: time="2025-11-08T00:09:01.668342109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tcqxl,Uid:9506755c-a0f4-47f2-b269-7090f44df783,Namespace:kube-system,Attempt:0,}" Nov 8 00:09:01.668941 containerd[1590]: time="2025-11-08T00:09:01.668393869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fsp2l,Uid:e1e70c0c-dd7e-4d32-9c6b-cbad678c0ea8,Namespace:kube-system,Attempt:0,}" 
Nov 8 00:09:01.678435 containerd[1590]: time="2025-11-08T00:09:01.678389371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68f9d864b5-mqg6n,Uid:c7626e06-780f-4455-a129-cde6e36c9bf2,Namespace:calico-system,Attempt:0,}" Nov 8 00:09:01.686178 containerd[1590]: time="2025-11-08T00:09:01.686135827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54b5cccc46-bcg68,Uid:5c488e78-d3ba-4197-ab37-75734ccb9129,Namespace:calico-system,Attempt:0,}" Nov 8 00:09:01.692264 containerd[1590]: time="2025-11-08T00:09:01.692136881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-l7kch,Uid:72f7d776-6bd7-4d33-8b73-a5febd833bf0,Namespace:calico-system,Attempt:0,}" Nov 8 00:09:01.739248 containerd[1590]: time="2025-11-08T00:09:01.738620542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:09:01.858583 containerd[1590]: time="2025-11-08T00:09:01.858523603Z" level=error msg="Failed to destroy network for sandbox \"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:01.859407 containerd[1590]: time="2025-11-08T00:09:01.859362445Z" level=error msg="encountered an error cleaning up failed sandbox \"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:01.859506 containerd[1590]: time="2025-11-08T00:09:01.859429485Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c5874598-gtq72,Uid:7ec2eb36-2470-4386-96a3-fe6dd8fc602f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to 
setup network for sandbox \"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:01.860180 kubelet[2774]: E1108 00:09:01.859721 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:01.860180 kubelet[2774]: E1108 00:09:01.859798 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78c5874598-gtq72" Nov 8 00:09:01.860180 kubelet[2774]: E1108 00:09:01.859828 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78c5874598-gtq72" Nov 8 00:09:01.860722 kubelet[2774]: E1108 00:09:01.859873 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78c5874598-gtq72_calico-apiserver(7ec2eb36-2470-4386-96a3-fe6dd8fc602f)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-apiserver-78c5874598-gtq72_calico-apiserver(7ec2eb36-2470-4386-96a3-fe6dd8fc602f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78c5874598-gtq72" podUID="7ec2eb36-2470-4386-96a3-fe6dd8fc602f" Nov 8 00:09:01.894626 containerd[1590]: time="2025-11-08T00:09:01.894558882Z" level=error msg="Failed to destroy network for sandbox \"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:01.897728 containerd[1590]: time="2025-11-08T00:09:01.897649208Z" level=error msg="encountered an error cleaning up failed sandbox \"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:01.898236 containerd[1590]: time="2025-11-08T00:09:01.898198170Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c5874598-mj8jx,Uid:ffefce46-3638-4c95-bed3-200605f5f8d9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:01.899690 kubelet[2774]: E1108 
00:09:01.899237 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:01.899690 kubelet[2774]: E1108 00:09:01.899326 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78c5874598-mj8jx" Nov 8 00:09:01.899690 kubelet[2774]: E1108 00:09:01.899360 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78c5874598-mj8jx" Nov 8 00:09:01.899993 kubelet[2774]: E1108 00:09:01.899406 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78c5874598-mj8jx_calico-apiserver(ffefce46-3638-4c95-bed3-200605f5f8d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78c5874598-mj8jx_calico-apiserver(ffefce46-3638-4c95-bed3-200605f5f8d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78c5874598-mj8jx" podUID="ffefce46-3638-4c95-bed3-200605f5f8d9" Nov 8 00:09:01.917130 containerd[1590]: time="2025-11-08T00:09:01.917081051Z" level=error msg="Failed to destroy network for sandbox \"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:01.919256 containerd[1590]: time="2025-11-08T00:09:01.919207255Z" level=error msg="encountered an error cleaning up failed sandbox \"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:01.919457 containerd[1590]: time="2025-11-08T00:09:01.919396216Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tcqxl,Uid:9506755c-a0f4-47f2-b269-7090f44df783,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:01.920769 kubelet[2774]: E1108 00:09:01.920177 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:01.920769 kubelet[2774]: E1108 00:09:01.920238 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tcqxl" Nov 8 00:09:01.920769 kubelet[2774]: E1108 00:09:01.920258 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tcqxl" Nov 8 00:09:01.921103 kubelet[2774]: E1108 00:09:01.920294 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tcqxl_kube-system(9506755c-a0f4-47f2-b269-7090f44df783)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tcqxl_kube-system(9506755c-a0f4-47f2-b269-7090f44df783)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tcqxl" podUID="9506755c-a0f4-47f2-b269-7090f44df783" Nov 8 00:09:01.947352 containerd[1590]: time="2025-11-08T00:09:01.947301877Z" level=error msg="Failed to destroy network for sandbox 
\"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:01.948710 containerd[1590]: time="2025-11-08T00:09:01.948368239Z" level=error msg="encountered an error cleaning up failed sandbox \"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:01.948710 containerd[1590]: time="2025-11-08T00:09:01.948431159Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fsp2l,Uid:e1e70c0c-dd7e-4d32-9c6b-cbad678c0ea8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:01.948937 kubelet[2774]: E1108 00:09:01.948690 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:01.949308 kubelet[2774]: E1108 00:09:01.948916 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fsp2l" Nov 8 00:09:01.949308 kubelet[2774]: E1108 00:09:01.949056 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fsp2l" Nov 8 00:09:01.949308 kubelet[2774]: E1108 00:09:01.949208 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fsp2l_kube-system(e1e70c0c-dd7e-4d32-9c6b-cbad678c0ea8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-fsp2l_kube-system(e1e70c0c-dd7e-4d32-9c6b-cbad678c0ea8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fsp2l" podUID="e1e70c0c-dd7e-4d32-9c6b-cbad678c0ea8" Nov 8 00:09:01.992937 containerd[1590]: time="2025-11-08T00:09:01.992794496Z" level=error msg="Failed to destroy network for sandbox \"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:01.993403 containerd[1590]: time="2025-11-08T00:09:01.993227737Z" level=error msg="encountered an error cleaning up failed sandbox 
\"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:01.993403 containerd[1590]: time="2025-11-08T00:09:01.993275377Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bf6dfcdd-hptwz,Uid:02c3f902-7bf4-4824-923c-48ba4e1e389c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:01.994905 kubelet[2774]: E1108 00:09:01.994508 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:01.994905 kubelet[2774]: E1108 00:09:01.994575 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77bf6dfcdd-hptwz" Nov 8 00:09:01.994905 kubelet[2774]: E1108 00:09:01.994603 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77bf6dfcdd-hptwz" Nov 8 00:09:01.995192 kubelet[2774]: E1108 00:09:01.994653 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77bf6dfcdd-hptwz_calico-apiserver(02c3f902-7bf4-4824-923c-48ba4e1e389c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77bf6dfcdd-hptwz_calico-apiserver(02c3f902-7bf4-4824-923c-48ba4e1e389c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77bf6dfcdd-hptwz" podUID="02c3f902-7bf4-4824-923c-48ba4e1e389c" Nov 8 00:09:02.027130 containerd[1590]: time="2025-11-08T00:09:02.026993077Z" level=error msg="Failed to destroy network for sandbox \"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:02.028554 containerd[1590]: time="2025-11-08T00:09:02.027565879Z" level=error msg="encountered an error cleaning up failed sandbox \"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 
00:09:02.028906 containerd[1590]: time="2025-11-08T00:09:02.028873083Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68f9d864b5-mqg6n,Uid:c7626e06-780f-4455-a129-cde6e36c9bf2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:02.029172 containerd[1590]: time="2025-11-08T00:09:02.028803843Z" level=error msg="Failed to destroy network for sandbox \"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:02.029555 kubelet[2774]: E1108 00:09:02.029399 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:02.029990 kubelet[2774]: E1108 00:09:02.029528 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68f9d864b5-mqg6n" Nov 8 00:09:02.029990 kubelet[2774]: E1108 00:09:02.029662 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68f9d864b5-mqg6n" Nov 8 00:09:02.029990 kubelet[2774]: E1108 00:09:02.029714 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-68f9d864b5-mqg6n_calico-system(c7626e06-780f-4455-a129-cde6e36c9bf2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-68f9d864b5-mqg6n_calico-system(c7626e06-780f-4455-a129-cde6e36c9bf2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-68f9d864b5-mqg6n" podUID="c7626e06-780f-4455-a129-cde6e36c9bf2" Nov 8 00:09:02.030781 containerd[1590]: time="2025-11-08T00:09:02.030746089Z" level=error msg="encountered an error cleaning up failed sandbox \"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:02.030945 containerd[1590]: time="2025-11-08T00:09:02.030904250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54b5cccc46-bcg68,Uid:5c488e78-d3ba-4197-ab37-75734ccb9129,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:02.031558 kubelet[2774]: E1108 00:09:02.031518 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:02.031624 kubelet[2774]: E1108 00:09:02.031580 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54b5cccc46-bcg68" Nov 8 00:09:02.031624 kubelet[2774]: E1108 00:09:02.031600 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54b5cccc46-bcg68" Nov 8 00:09:02.031672 kubelet[2774]: E1108 00:09:02.031639 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54b5cccc46-bcg68_calico-system(5c488e78-d3ba-4197-ab37-75734ccb9129)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-54b5cccc46-bcg68_calico-system(5c488e78-d3ba-4197-ab37-75734ccb9129)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54b5cccc46-bcg68" podUID="5c488e78-d3ba-4197-ab37-75734ccb9129" Nov 8 00:09:02.037172 containerd[1590]: time="2025-11-08T00:09:02.037124870Z" level=error msg="Failed to destroy network for sandbox \"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:02.037495 containerd[1590]: time="2025-11-08T00:09:02.037465631Z" level=error msg="encountered an error cleaning up failed sandbox \"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:02.037547 containerd[1590]: time="2025-11-08T00:09:02.037515151Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-l7kch,Uid:72f7d776-6bd7-4d33-8b73-a5febd833bf0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:02.037869 kubelet[2774]: E1108 00:09:02.037831 2774 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:02.037993 kubelet[2774]: E1108 00:09:02.037893 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-l7kch" Nov 8 00:09:02.037993 kubelet[2774]: E1108 00:09:02.037913 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-l7kch" Nov 8 00:09:02.038077 kubelet[2774]: E1108 00:09:02.037990 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-l7kch_calico-system(72f7d776-6bd7-4d33-8b73-a5febd833bf0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-l7kch_calico-system(72f7d776-6bd7-4d33-8b73-a5febd833bf0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-l7kch" podUID="72f7d776-6bd7-4d33-8b73-a5febd833bf0" Nov 8 00:09:02.541299 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6-shm.mount: Deactivated successfully. Nov 8 00:09:02.574741 containerd[1590]: time="2025-11-08T00:09:02.574674560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6n7x9,Uid:57f11a43-3690-45d9-8837-b8df56bb1a07,Namespace:calico-system,Attempt:0,}" Nov 8 00:09:02.643604 containerd[1590]: time="2025-11-08T00:09:02.643538062Z" level=error msg="Failed to destroy network for sandbox \"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:02.646324 containerd[1590]: time="2025-11-08T00:09:02.646263991Z" level=error msg="encountered an error cleaning up failed sandbox \"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:02.646463 containerd[1590]: time="2025-11-08T00:09:02.646347111Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6n7x9,Uid:57f11a43-3690-45d9-8837-b8df56bb1a07,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:02.646711 kubelet[2774]: 
E1108 00:09:02.646642 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:02.647029 kubelet[2774]: E1108 00:09:02.646737 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6n7x9" Nov 8 00:09:02.647029 kubelet[2774]: E1108 00:09:02.646764 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6n7x9" Nov 8 00:09:02.647573 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e-shm.mount: Deactivated successfully. 
Nov 8 00:09:02.648101 kubelet[2774]: E1108 00:09:02.648031 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6n7x9_calico-system(57f11a43-3690-45d9-8837-b8df56bb1a07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6n7x9_calico-system(57f11a43-3690-45d9-8837-b8df56bb1a07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6n7x9" podUID="57f11a43-3690-45d9-8837-b8df56bb1a07" Nov 8 00:09:02.739613 kubelet[2774]: I1108 00:09:02.739577 2774 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" Nov 8 00:09:02.742325 containerd[1590]: time="2025-11-08T00:09:02.742026179Z" level=info msg="StopPodSandbox for \"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04\"" Nov 8 00:09:02.742325 containerd[1590]: time="2025-11-08T00:09:02.742214259Z" level=info msg="Ensure that sandbox b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04 in task-service has been cleanup successfully" Nov 8 00:09:02.742615 kubelet[2774]: I1108 00:09:02.742244 2774 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" Nov 8 00:09:02.744311 containerd[1590]: time="2025-11-08T00:09:02.743881425Z" level=info msg="StopPodSandbox for \"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c\"" Nov 8 00:09:02.744311 containerd[1590]: time="2025-11-08T00:09:02.744068345Z" level=info msg="Ensure that sandbox fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c in 
task-service has been cleanup successfully" Nov 8 00:09:02.750311 kubelet[2774]: I1108 00:09:02.750164 2774 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" Nov 8 00:09:02.751741 containerd[1590]: time="2025-11-08T00:09:02.751524049Z" level=info msg="StopPodSandbox for \"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed\"" Nov 8 00:09:02.754978 containerd[1590]: time="2025-11-08T00:09:02.754894340Z" level=info msg="Ensure that sandbox 4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed in task-service has been cleanup successfully" Nov 8 00:09:02.756890 kubelet[2774]: I1108 00:09:02.756353 2774 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" Nov 8 00:09:02.758112 containerd[1590]: time="2025-11-08T00:09:02.757773509Z" level=info msg="StopPodSandbox for \"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035\"" Nov 8 00:09:02.759353 containerd[1590]: time="2025-11-08T00:09:02.759096274Z" level=info msg="Ensure that sandbox 68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035 in task-service has been cleanup successfully" Nov 8 00:09:02.765275 kubelet[2774]: I1108 00:09:02.764815 2774 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" Nov 8 00:09:02.766074 containerd[1590]: time="2025-11-08T00:09:02.766014136Z" level=info msg="StopPodSandbox for \"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7\"" Nov 8 00:09:02.771529 containerd[1590]: time="2025-11-08T00:09:02.771453714Z" level=info msg="Ensure that sandbox 949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7 in task-service has been cleanup successfully" Nov 8 00:09:02.780043 kubelet[2774]: I1108 00:09:02.779305 2774 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" Nov 8 00:09:02.785056 containerd[1590]: time="2025-11-08T00:09:02.784939197Z" level=info msg="StopPodSandbox for \"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6\"" Nov 8 00:09:02.789748 kubelet[2774]: I1108 00:09:02.789701 2774 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" Nov 8 00:09:02.793051 containerd[1590]: time="2025-11-08T00:09:02.792875502Z" level=info msg="StopPodSandbox for \"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e\"" Nov 8 00:09:02.793180 containerd[1590]: time="2025-11-08T00:09:02.793090663Z" level=info msg="Ensure that sandbox 9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e in task-service has been cleanup successfully" Nov 8 00:09:02.794871 containerd[1590]: time="2025-11-08T00:09:02.794466308Z" level=info msg="Ensure that sandbox 36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6 in task-service has been cleanup successfully" Nov 8 00:09:02.801568 kubelet[2774]: I1108 00:09:02.801521 2774 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" Nov 8 00:09:02.805238 containerd[1590]: time="2025-11-08T00:09:02.804940301Z" level=info msg="StopPodSandbox for \"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b\"" Nov 8 00:09:02.807126 containerd[1590]: time="2025-11-08T00:09:02.806879788Z" level=info msg="Ensure that sandbox e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b in task-service has been cleanup successfully" Nov 8 00:09:02.808652 kubelet[2774]: I1108 00:09:02.808533 2774 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" Nov 8 00:09:02.809199 containerd[1590]: time="2025-11-08T00:09:02.809164955Z" level=info msg="StopPodSandbox for \"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b\"" Nov 8 00:09:02.811088 containerd[1590]: time="2025-11-08T00:09:02.810537919Z" level=info msg="Ensure that sandbox 3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b in task-service has been cleanup successfully" Nov 8 00:09:02.858671 containerd[1590]: time="2025-11-08T00:09:02.858601194Z" level=error msg="StopPodSandbox for \"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04\" failed" error="failed to destroy network for sandbox \"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:02.860884 kubelet[2774]: E1108 00:09:02.860185 2774 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" Nov 8 00:09:02.860884 kubelet[2774]: E1108 00:09:02.860253 2774 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04"} Nov 8 00:09:02.860884 kubelet[2774]: E1108 00:09:02.860307 2774 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5c488e78-d3ba-4197-ab37-75734ccb9129\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:09:02.860884 kubelet[2774]: E1108 00:09:02.860331 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5c488e78-d3ba-4197-ab37-75734ccb9129\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54b5cccc46-bcg68" podUID="5c488e78-d3ba-4197-ab37-75734ccb9129" Nov 8 00:09:02.891425 containerd[1590]: time="2025-11-08T00:09:02.891079339Z" level=error msg="StopPodSandbox for \"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e\" failed" error="failed to destroy network for sandbox \"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:02.891632 kubelet[2774]: E1108 00:09:02.891579 2774 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" Nov 8 00:09:02.891680 kubelet[2774]: 
E1108 00:09:02.891655 2774 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e"} Nov 8 00:09:02.891719 kubelet[2774]: E1108 00:09:02.891688 2774 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"57f11a43-3690-45d9-8837-b8df56bb1a07\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:09:02.891774 kubelet[2774]: E1108 00:09:02.891743 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"57f11a43-3690-45d9-8837-b8df56bb1a07\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6n7x9" podUID="57f11a43-3690-45d9-8837-b8df56bb1a07" Nov 8 00:09:02.895128 containerd[1590]: time="2025-11-08T00:09:02.894919311Z" level=error msg="StopPodSandbox for \"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c\" failed" error="failed to destroy network for sandbox \"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:02.895703 kubelet[2774]: E1108 00:09:02.895480 2774 log.go:32] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" Nov 8 00:09:02.895703 kubelet[2774]: E1108 00:09:02.895533 2774 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c"} Nov 8 00:09:02.895703 kubelet[2774]: E1108 00:09:02.895571 2774 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9506755c-a0f4-47f2-b269-7090f44df783\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:09:02.895703 kubelet[2774]: E1108 00:09:02.895593 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9506755c-a0f4-47f2-b269-7090f44df783\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tcqxl" podUID="9506755c-a0f4-47f2-b269-7090f44df783" Nov 8 00:09:02.907330 containerd[1590]: time="2025-11-08T00:09:02.907255471Z" level=error msg="StopPodSandbox for 
\"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed\" failed" error="failed to destroy network for sandbox \"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:02.907671 kubelet[2774]: E1108 00:09:02.907526 2774 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" Nov 8 00:09:02.907671 kubelet[2774]: E1108 00:09:02.907597 2774 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed"} Nov 8 00:09:02.907671 kubelet[2774]: E1108 00:09:02.907636 2774 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ec2eb36-2470-4386-96a3-fe6dd8fc602f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:09:02.907671 kubelet[2774]: E1108 00:09:02.907662 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ec2eb36-2470-4386-96a3-fe6dd8fc602f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78c5874598-gtq72" podUID="7ec2eb36-2470-4386-96a3-fe6dd8fc602f" Nov 8 00:09:02.913936 containerd[1590]: time="2025-11-08T00:09:02.913468611Z" level=error msg="StopPodSandbox for \"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b\" failed" error="failed to destroy network for sandbox \"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:02.914068 kubelet[2774]: E1108 00:09:02.913731 2774 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" Nov 8 00:09:02.914068 kubelet[2774]: E1108 00:09:02.913780 2774 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b"} Nov 8 00:09:02.914068 kubelet[2774]: E1108 00:09:02.913815 2774 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e1e70c0c-dd7e-4d32-9c6b-cbad678c0ea8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:09:02.914068 kubelet[2774]: E1108 00:09:02.913836 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e1e70c0c-dd7e-4d32-9c6b-cbad678c0ea8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fsp2l" podUID="e1e70c0c-dd7e-4d32-9c6b-cbad678c0ea8" Nov 8 00:09:02.928167 containerd[1590]: time="2025-11-08T00:09:02.927883417Z" level=error msg="StopPodSandbox for \"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b\" failed" error="failed to destroy network for sandbox \"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:02.928312 kubelet[2774]: E1108 00:09:02.928173 2774 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" Nov 8 00:09:02.928312 kubelet[2774]: E1108 00:09:02.928229 2774 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b"} Nov 8 00:09:02.928312 kubelet[2774]: E1108 00:09:02.928265 2774 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c7626e06-780f-4455-a129-cde6e36c9bf2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:09:02.928312 kubelet[2774]: E1108 00:09:02.928288 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c7626e06-780f-4455-a129-cde6e36c9bf2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-68f9d864b5-mqg6n" podUID="c7626e06-780f-4455-a129-cde6e36c9bf2" Nov 8 00:09:02.930588 containerd[1590]: time="2025-11-08T00:09:02.930142384Z" level=error msg="StopPodSandbox for \"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6\" failed" error="failed to destroy network for sandbox \"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:02.930732 kubelet[2774]: E1108 00:09:02.930470 2774 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" Nov 8 00:09:02.930732 kubelet[2774]: E1108 00:09:02.930532 2774 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6"} Nov 8 00:09:02.930732 kubelet[2774]: E1108 00:09:02.930568 2774 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ffefce46-3638-4c95-bed3-200605f5f8d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:09:02.930732 kubelet[2774]: E1108 00:09:02.930592 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ffefce46-3638-4c95-bed3-200605f5f8d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78c5874598-mj8jx" podUID="ffefce46-3638-4c95-bed3-200605f5f8d9" Nov 8 00:09:02.932052 kubelet[2774]: E1108 00:09:02.931385 2774 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" Nov 8 00:09:02.932052 kubelet[2774]: E1108 00:09:02.931426 2774 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035"} Nov 8 00:09:02.932052 kubelet[2774]: E1108 00:09:02.931513 2774 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"72f7d776-6bd7-4d33-8b73-a5febd833bf0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:09:02.932282 containerd[1590]: time="2025-11-08T00:09:02.930901587Z" level=error msg="StopPodSandbox for \"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035\" failed" error="failed to destroy network for sandbox \"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:02.932421 kubelet[2774]: E1108 00:09:02.932365 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"72f7d776-6bd7-4d33-8b73-a5febd833bf0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-l7kch" podUID="72f7d776-6bd7-4d33-8b73-a5febd833bf0" Nov 8 00:09:02.937726 containerd[1590]: time="2025-11-08T00:09:02.937552808Z" level=error msg="StopPodSandbox for \"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7\" failed" error="failed to destroy network for sandbox \"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:02.938211 kubelet[2774]: E1108 00:09:02.938062 2774 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" Nov 8 00:09:02.938211 kubelet[2774]: E1108 00:09:02.938116 2774 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7"} Nov 8 00:09:02.938211 kubelet[2774]: E1108 00:09:02.938150 2774 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"02c3f902-7bf4-4824-923c-48ba4e1e389c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" Nov 8 00:09:02.938211 kubelet[2774]: E1108 00:09:02.938181 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"02c3f902-7bf4-4824-923c-48ba4e1e389c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77bf6dfcdd-hptwz" podUID="02c3f902-7bf4-4824-923c-48ba4e1e389c" Nov 8 00:09:06.184044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1341048950.mount: Deactivated successfully. Nov 8 00:09:06.214674 containerd[1590]: time="2025-11-08T00:09:06.214609412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:09:06.216682 containerd[1590]: time="2025-11-08T00:09:06.216638147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 8 00:09:06.219128 containerd[1590]: time="2025-11-08T00:09:06.217868155Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:09:06.227745 containerd[1590]: time="2025-11-08T00:09:06.227676145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:09:06.228706 containerd[1590]: time="2025-11-08T00:09:06.228572071Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id 
\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.489903769s" Nov 8 00:09:06.228706 containerd[1590]: time="2025-11-08T00:09:06.228610631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 8 00:09:06.248724 containerd[1590]: time="2025-11-08T00:09:06.248540292Z" level=info msg="CreateContainer within sandbox \"639976c15f0cd3c36a4e084c78eef143b04e32b130a50441b9e11e48a37e3985\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:09:06.266744 containerd[1590]: time="2025-11-08T00:09:06.266559139Z" level=info msg="CreateContainer within sandbox \"639976c15f0cd3c36a4e084c78eef143b04e32b130a50441b9e11e48a37e3985\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"119d6aba0c4371161ca238eab9f22a1be66cdea06f46ccbeaf150fe520ba27e5\"" Nov 8 00:09:06.269932 containerd[1590]: time="2025-11-08T00:09:06.268130870Z" level=info msg="StartContainer for \"119d6aba0c4371161ca238eab9f22a1be66cdea06f46ccbeaf150fe520ba27e5\"" Nov 8 00:09:06.339998 containerd[1590]: time="2025-11-08T00:09:06.339928417Z" level=info msg="StartContainer for \"119d6aba0c4371161ca238eab9f22a1be66cdea06f46ccbeaf150fe520ba27e5\" returns successfully" Nov 8 00:09:06.503475 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:09:06.503669 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 8 00:09:06.682979 containerd[1590]: time="2025-11-08T00:09:06.680929425Z" level=info msg="StopPodSandbox for \"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b\"" Nov 8 00:09:06.912946 containerd[1590]: 2025-11-08 00:09:06.819 [INFO][4021] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" Nov 8 00:09:06.912946 containerd[1590]: 2025-11-08 00:09:06.819 [INFO][4021] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" iface="eth0" netns="/var/run/netns/cni-b5615e88-8147-7897-5cea-0882c9bcc20c" Nov 8 00:09:06.912946 containerd[1590]: 2025-11-08 00:09:06.820 [INFO][4021] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" iface="eth0" netns="/var/run/netns/cni-b5615e88-8147-7897-5cea-0882c9bcc20c" Nov 8 00:09:06.912946 containerd[1590]: 2025-11-08 00:09:06.821 [INFO][4021] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" iface="eth0" netns="/var/run/netns/cni-b5615e88-8147-7897-5cea-0882c9bcc20c" Nov 8 00:09:06.912946 containerd[1590]: 2025-11-08 00:09:06.821 [INFO][4021] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" Nov 8 00:09:06.912946 containerd[1590]: 2025-11-08 00:09:06.821 [INFO][4021] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" Nov 8 00:09:06.912946 containerd[1590]: 2025-11-08 00:09:06.893 [INFO][4029] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" HandleID="k8s-pod-network.e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--68f9d864b5--mqg6n-eth0" Nov 8 00:09:06.912946 containerd[1590]: 2025-11-08 00:09:06.893 [INFO][4029] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:06.912946 containerd[1590]: 2025-11-08 00:09:06.894 [INFO][4029] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:06.912946 containerd[1590]: 2025-11-08 00:09:06.904 [WARNING][4029] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" HandleID="k8s-pod-network.e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--68f9d864b5--mqg6n-eth0" Nov 8 00:09:06.912946 containerd[1590]: 2025-11-08 00:09:06.905 [INFO][4029] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" HandleID="k8s-pod-network.e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--68f9d864b5--mqg6n-eth0" Nov 8 00:09:06.912946 containerd[1590]: 2025-11-08 00:09:06.907 [INFO][4029] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:06.912946 containerd[1590]: 2025-11-08 00:09:06.909 [INFO][4021] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" Nov 8 00:09:06.914306 containerd[1590]: time="2025-11-08T00:09:06.914053311Z" level=info msg="TearDown network for sandbox \"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b\" successfully" Nov 8 00:09:06.914306 containerd[1590]: time="2025-11-08T00:09:06.914104191Z" level=info msg="StopPodSandbox for \"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b\" returns successfully" Nov 8 00:09:06.994692 kubelet[2774]: I1108 00:09:06.994647 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7626e06-780f-4455-a129-cde6e36c9bf2-whisker-ca-bundle\") pod \"c7626e06-780f-4455-a129-cde6e36c9bf2\" (UID: \"c7626e06-780f-4455-a129-cde6e36c9bf2\") " Nov 8 00:09:06.995556 kubelet[2774]: I1108 00:09:06.994708 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87gqc\" (UniqueName: 
\"kubernetes.io/projected/c7626e06-780f-4455-a129-cde6e36c9bf2-kube-api-access-87gqc\") pod \"c7626e06-780f-4455-a129-cde6e36c9bf2\" (UID: \"c7626e06-780f-4455-a129-cde6e36c9bf2\") " Nov 8 00:09:06.995556 kubelet[2774]: I1108 00:09:06.994738 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c7626e06-780f-4455-a129-cde6e36c9bf2-whisker-backend-key-pair\") pod \"c7626e06-780f-4455-a129-cde6e36c9bf2\" (UID: \"c7626e06-780f-4455-a129-cde6e36c9bf2\") " Nov 8 00:09:07.005527 kubelet[2774]: I1108 00:09:07.003871 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7626e06-780f-4455-a129-cde6e36c9bf2-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c7626e06-780f-4455-a129-cde6e36c9bf2" (UID: "c7626e06-780f-4455-a129-cde6e36c9bf2"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:09:07.007915 kubelet[2774]: I1108 00:09:07.007838 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7626e06-780f-4455-a129-cde6e36c9bf2-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c7626e06-780f-4455-a129-cde6e36c9bf2" (UID: "c7626e06-780f-4455-a129-cde6e36c9bf2"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:09:07.009385 kubelet[2774]: I1108 00:09:07.009350 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7626e06-780f-4455-a129-cde6e36c9bf2-kube-api-access-87gqc" (OuterVolumeSpecName: "kube-api-access-87gqc") pod "c7626e06-780f-4455-a129-cde6e36c9bf2" (UID: "c7626e06-780f-4455-a129-cde6e36c9bf2"). InnerVolumeSpecName "kube-api-access-87gqc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:09:07.095732 kubelet[2774]: I1108 00:09:07.095672 2774 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7626e06-780f-4455-a129-cde6e36c9bf2-whisker-ca-bundle\") on node \"ci-4081-3-6-n-3f5a11d2fe\" DevicePath \"\"" Nov 8 00:09:07.095732 kubelet[2774]: I1108 00:09:07.095738 2774 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-87gqc\" (UniqueName: \"kubernetes.io/projected/c7626e06-780f-4455-a129-cde6e36c9bf2-kube-api-access-87gqc\") on node \"ci-4081-3-6-n-3f5a11d2fe\" DevicePath \"\"" Nov 8 00:09:07.095922 kubelet[2774]: I1108 00:09:07.095761 2774 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c7626e06-780f-4455-a129-cde6e36c9bf2-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-3f5a11d2fe\" DevicePath \"\"" Nov 8 00:09:07.185942 systemd[1]: run-netns-cni\x2db5615e88\x2d8147\x2d7897\x2d5cea\x2d0882c9bcc20c.mount: Deactivated successfully. Nov 8 00:09:07.186736 systemd[1]: var-lib-kubelet-pods-c7626e06\x2d780f\x2d4455\x2da129\x2dcde6e36c9bf2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d87gqc.mount: Deactivated successfully. Nov 8 00:09:07.186878 systemd[1]: var-lib-kubelet-pods-c7626e06\x2d780f\x2d4455\x2da129\x2dcde6e36c9bf2-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 8 00:09:07.837033 kubelet[2774]: I1108 00:09:07.836449 2774 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:09:07.859518 kubelet[2774]: I1108 00:09:07.859406 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vw56v" podStartSLOduration=2.186501871 podStartE2EDuration="14.859376067s" podCreationTimestamp="2025-11-08 00:08:53 +0000 UTC" firstStartedPulling="2025-11-08 00:08:53.556753002 +0000 UTC m=+30.117695146" lastFinishedPulling="2025-11-08 00:09:06.229627118 +0000 UTC m=+42.790569342" observedRunningTime="2025-11-08 00:09:06.863739876 +0000 UTC m=+43.424682060" watchObservedRunningTime="2025-11-08 00:09:07.859376067 +0000 UTC m=+44.420318371" Nov 8 00:09:08.002820 kubelet[2774]: I1108 00:09:08.002463 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3005ab6c-6899-4c7c-9cae-4c79f44757c6-whisker-backend-key-pair\") pod \"whisker-579fd6c6df-bcwhq\" (UID: \"3005ab6c-6899-4c7c-9cae-4c79f44757c6\") " pod="calico-system/whisker-579fd6c6df-bcwhq" Nov 8 00:09:08.002820 kubelet[2774]: I1108 00:09:08.002777 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5shx\" (UniqueName: \"kubernetes.io/projected/3005ab6c-6899-4c7c-9cae-4c79f44757c6-kube-api-access-d5shx\") pod \"whisker-579fd6c6df-bcwhq\" (UID: \"3005ab6c-6899-4c7c-9cae-4c79f44757c6\") " pod="calico-system/whisker-579fd6c6df-bcwhq" Nov 8 00:09:08.002820 kubelet[2774]: I1108 00:09:08.003075 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3005ab6c-6899-4c7c-9cae-4c79f44757c6-whisker-ca-bundle\") pod \"whisker-579fd6c6df-bcwhq\" (UID: \"3005ab6c-6899-4c7c-9cae-4c79f44757c6\") " pod="calico-system/whisker-579fd6c6df-bcwhq" Nov 8 
00:09:08.227570 containerd[1590]: time="2025-11-08T00:09:08.225917453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-579fd6c6df-bcwhq,Uid:3005ab6c-6899-4c7c-9cae-4c79f44757c6,Namespace:calico-system,Attempt:0,}" Nov 8 00:09:08.578897 systemd-networkd[1240]: calia327b9e8af8: Link UP Nov 8 00:09:08.579660 systemd-networkd[1240]: calia327b9e8af8: Gained carrier Nov 8 00:09:08.604994 containerd[1590]: 2025-11-08 00:09:08.410 [INFO][4142] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:09:08.604994 containerd[1590]: 2025-11-08 00:09:08.443 [INFO][4142] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--579fd6c6df--bcwhq-eth0 whisker-579fd6c6df- calico-system 3005ab6c-6899-4c7c-9cae-4c79f44757c6 901 0 2025-11-08 00:09:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:579fd6c6df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-3f5a11d2fe whisker-579fd6c6df-bcwhq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia327b9e8af8 [] [] }} ContainerID="0ff44e08f22da58da4384976806b1ccff751c7c41c0cf9ca2fc57be4b71aa664" Namespace="calico-system" Pod="whisker-579fd6c6df-bcwhq" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--579fd6c6df--bcwhq-" Nov 8 00:09:08.604994 containerd[1590]: 2025-11-08 00:09:08.445 [INFO][4142] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0ff44e08f22da58da4384976806b1ccff751c7c41c0cf9ca2fc57be4b71aa664" Namespace="calico-system" Pod="whisker-579fd6c6df-bcwhq" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--579fd6c6df--bcwhq-eth0" Nov 8 00:09:08.604994 containerd[1590]: 2025-11-08 00:09:08.499 [INFO][4154] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0ff44e08f22da58da4384976806b1ccff751c7c41c0cf9ca2fc57be4b71aa664" HandleID="k8s-pod-network.0ff44e08f22da58da4384976806b1ccff751c7c41c0cf9ca2fc57be4b71aa664" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--579fd6c6df--bcwhq-eth0" Nov 8 00:09:08.604994 containerd[1590]: 2025-11-08 00:09:08.499 [INFO][4154] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0ff44e08f22da58da4384976806b1ccff751c7c41c0cf9ca2fc57be4b71aa664" HandleID="k8s-pod-network.0ff44e08f22da58da4384976806b1ccff751c7c41c0cf9ca2fc57be4b71aa664" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--579fd6c6df--bcwhq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b7f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-3f5a11d2fe", "pod":"whisker-579fd6c6df-bcwhq", "timestamp":"2025-11-08 00:09:08.499353701 +0000 UTC"}, Hostname:"ci-4081-3-6-n-3f5a11d2fe", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:09:08.604994 containerd[1590]: 2025-11-08 00:09:08.499 [INFO][4154] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:08.604994 containerd[1590]: 2025-11-08 00:09:08.499 [INFO][4154] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:09:08.604994 containerd[1590]: 2025-11-08 00:09:08.499 [INFO][4154] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-3f5a11d2fe' Nov 8 00:09:08.604994 containerd[1590]: 2025-11-08 00:09:08.513 [INFO][4154] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0ff44e08f22da58da4384976806b1ccff751c7c41c0cf9ca2fc57be4b71aa664" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:08.604994 containerd[1590]: 2025-11-08 00:09:08.520 [INFO][4154] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:08.604994 containerd[1590]: 2025-11-08 00:09:08.530 [INFO][4154] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:08.604994 containerd[1590]: 2025-11-08 00:09:08.534 [INFO][4154] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:08.604994 containerd[1590]: 2025-11-08 00:09:08.539 [INFO][4154] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:08.604994 containerd[1590]: 2025-11-08 00:09:08.539 [INFO][4154] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.0ff44e08f22da58da4384976806b1ccff751c7c41c0cf9ca2fc57be4b71aa664" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:08.604994 containerd[1590]: 2025-11-08 00:09:08.542 [INFO][4154] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0ff44e08f22da58da4384976806b1ccff751c7c41c0cf9ca2fc57be4b71aa664 Nov 8 00:09:08.604994 containerd[1590]: 2025-11-08 00:09:08.551 [INFO][4154] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.0ff44e08f22da58da4384976806b1ccff751c7c41c0cf9ca2fc57be4b71aa664" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:08.604994 containerd[1590]: 2025-11-08 00:09:08.561 [INFO][4154] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.105.129/26] block=192.168.105.128/26 handle="k8s-pod-network.0ff44e08f22da58da4384976806b1ccff751c7c41c0cf9ca2fc57be4b71aa664" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:08.604994 containerd[1590]: 2025-11-08 00:09:08.561 [INFO][4154] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.129/26] handle="k8s-pod-network.0ff44e08f22da58da4384976806b1ccff751c7c41c0cf9ca2fc57be4b71aa664" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:08.604994 containerd[1590]: 2025-11-08 00:09:08.561 [INFO][4154] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:08.604994 containerd[1590]: 2025-11-08 00:09:08.562 [INFO][4154] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.129/26] IPv6=[] ContainerID="0ff44e08f22da58da4384976806b1ccff751c7c41c0cf9ca2fc57be4b71aa664" HandleID="k8s-pod-network.0ff44e08f22da58da4384976806b1ccff751c7c41c0cf9ca2fc57be4b71aa664" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--579fd6c6df--bcwhq-eth0" Nov 8 00:09:08.605636 containerd[1590]: 2025-11-08 00:09:08.566 [INFO][4142] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0ff44e08f22da58da4384976806b1ccff751c7c41c0cf9ca2fc57be4b71aa664" Namespace="calico-system" Pod="whisker-579fd6c6df-bcwhq" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--579fd6c6df--bcwhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--579fd6c6df--bcwhq-eth0", GenerateName:"whisker-579fd6c6df-", Namespace:"calico-system", SelfLink:"", UID:"3005ab6c-6899-4c7c-9cae-4c79f44757c6", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 9, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"579fd6c6df", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"", Pod:"whisker-579fd6c6df-bcwhq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.105.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia327b9e8af8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:08.605636 containerd[1590]: 2025-11-08 00:09:08.566 [INFO][4142] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.129/32] ContainerID="0ff44e08f22da58da4384976806b1ccff751c7c41c0cf9ca2fc57be4b71aa664" Namespace="calico-system" Pod="whisker-579fd6c6df-bcwhq" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--579fd6c6df--bcwhq-eth0" Nov 8 00:09:08.605636 containerd[1590]: 2025-11-08 00:09:08.566 [INFO][4142] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia327b9e8af8 ContainerID="0ff44e08f22da58da4384976806b1ccff751c7c41c0cf9ca2fc57be4b71aa664" Namespace="calico-system" Pod="whisker-579fd6c6df-bcwhq" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--579fd6c6df--bcwhq-eth0" Nov 8 00:09:08.605636 containerd[1590]: 2025-11-08 00:09:08.581 [INFO][4142] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ff44e08f22da58da4384976806b1ccff751c7c41c0cf9ca2fc57be4b71aa664" Namespace="calico-system" Pod="whisker-579fd6c6df-bcwhq" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--579fd6c6df--bcwhq-eth0" Nov 8 00:09:08.605636 containerd[1590]: 2025-11-08 00:09:08.582 [INFO][4142] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="0ff44e08f22da58da4384976806b1ccff751c7c41c0cf9ca2fc57be4b71aa664" Namespace="calico-system" Pod="whisker-579fd6c6df-bcwhq" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--579fd6c6df--bcwhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--579fd6c6df--bcwhq-eth0", GenerateName:"whisker-579fd6c6df-", Namespace:"calico-system", SelfLink:"", UID:"3005ab6c-6899-4c7c-9cae-4c79f44757c6", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 9, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"579fd6c6df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"0ff44e08f22da58da4384976806b1ccff751c7c41c0cf9ca2fc57be4b71aa664", Pod:"whisker-579fd6c6df-bcwhq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.105.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia327b9e8af8", MAC:"6a:ab:78:18:b1:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:08.605636 containerd[1590]: 2025-11-08 00:09:08.599 [INFO][4142] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="0ff44e08f22da58da4384976806b1ccff751c7c41c0cf9ca2fc57be4b71aa664" Namespace="calico-system" Pod="whisker-579fd6c6df-bcwhq" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--579fd6c6df--bcwhq-eth0" Nov 8 00:09:08.628850 containerd[1590]: time="2025-11-08T00:09:08.628541919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:09:08.628850 containerd[1590]: time="2025-11-08T00:09:08.628599119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:09:08.628850 containerd[1590]: time="2025-11-08T00:09:08.628610679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:08.628850 containerd[1590]: time="2025-11-08T00:09:08.628740080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:08.684721 containerd[1590]: time="2025-11-08T00:09:08.684682133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-579fd6c6df-bcwhq,Uid:3005ab6c-6899-4c7c-9cae-4c79f44757c6,Namespace:calico-system,Attempt:0,} returns sandbox id \"0ff44e08f22da58da4384976806b1ccff751c7c41c0cf9ca2fc57be4b71aa664\"" Nov 8 00:09:08.687486 containerd[1590]: time="2025-11-08T00:09:08.687199675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:09:09.024044 containerd[1590]: time="2025-11-08T00:09:09.023992861Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:09.025256 containerd[1590]: time="2025-11-08T00:09:09.025215632Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:09:09.026048 containerd[1590]: time="2025-11-08T00:09:09.025325593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:09:09.026510 kubelet[2774]: E1108 00:09:09.026285 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:09:09.026510 kubelet[2774]: E1108 00:09:09.026338 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:09:09.032293 kubelet[2774]: E1108 00:09:09.032116 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8b49b9874aa24b60b25840c8ea795204,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d5shx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-579fd6c6df-bcwhq_calico-system(3005ab6c-6899-4c7c-9cae-4c79f44757c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:09.036424 containerd[1590]: time="2025-11-08T00:09:09.036264899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 
00:09:09.387932 containerd[1590]: time="2025-11-08T00:09:09.386100111Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:09.389978 containerd[1590]: time="2025-11-08T00:09:09.389343822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:09:09.389978 containerd[1590]: time="2025-11-08T00:09:09.389429063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:09:09.390122 kubelet[2774]: E1108 00:09:09.389988 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:09:09.390122 kubelet[2774]: E1108 00:09:09.390040 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:09:09.390187 kubelet[2774]: E1108 00:09:09.390153 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5shx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-579fd6c6df-bcwhq_calico-system(3005ab6c-6899-4c7c-9cae-4c79f44757c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:09.391588 kubelet[2774]: E1108 00:09:09.391525 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-579fd6c6df-bcwhq" podUID="3005ab6c-6899-4c7c-9cae-4c79f44757c6" Nov 8 00:09:09.579507 kubelet[2774]: I1108 00:09:09.579096 2774 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7626e06-780f-4455-a129-cde6e36c9bf2" path="/var/lib/kubelet/pods/c7626e06-780f-4455-a129-cde6e36c9bf2/volumes" Nov 8 00:09:09.844979 kubelet[2774]: E1108 00:09:09.844761 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-579fd6c6df-bcwhq" podUID="3005ab6c-6899-4c7c-9cae-4c79f44757c6" Nov 8 00:09:10.270270 systemd-networkd[1240]: calia327b9e8af8: Gained IPv6LL Nov 8 00:09:11.068945 kubelet[2774]: I1108 00:09:11.068839 2774 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:09:14.075692 kubelet[2774]: I1108 00:09:14.075322 2774 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:09:14.572314 containerd[1590]: time="2025-11-08T00:09:14.572017029Z" level=info msg="StopPodSandbox for \"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6\"" Nov 8 00:09:14.572750 containerd[1590]: time="2025-11-08T00:09:14.572631037Z" level=info msg="StopPodSandbox for \"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7\"" Nov 8 00:09:14.715229 containerd[1590]: 2025-11-08 00:09:14.641 [INFO][4380] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" Nov 8 00:09:14.715229 containerd[1590]: 2025-11-08 00:09:14.642 [INFO][4380] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" iface="eth0" netns="/var/run/netns/cni-6bb9d555-15c9-5c06-89c0-ec1c875c04af" Nov 8 00:09:14.715229 containerd[1590]: 2025-11-08 00:09:14.643 [INFO][4380] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" iface="eth0" netns="/var/run/netns/cni-6bb9d555-15c9-5c06-89c0-ec1c875c04af" Nov 8 00:09:14.715229 containerd[1590]: 2025-11-08 00:09:14.643 [INFO][4380] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" iface="eth0" netns="/var/run/netns/cni-6bb9d555-15c9-5c06-89c0-ec1c875c04af" Nov 8 00:09:14.715229 containerd[1590]: 2025-11-08 00:09:14.643 [INFO][4380] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" Nov 8 00:09:14.715229 containerd[1590]: 2025-11-08 00:09:14.643 [INFO][4380] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" Nov 8 00:09:14.715229 containerd[1590]: 2025-11-08 00:09:14.683 [INFO][4393] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" HandleID="k8s-pod-network.36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-eth0" Nov 8 00:09:14.715229 containerd[1590]: 2025-11-08 00:09:14.684 [INFO][4393] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:14.715229 containerd[1590]: 2025-11-08 00:09:14.684 [INFO][4393] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:14.715229 containerd[1590]: 2025-11-08 00:09:14.697 [WARNING][4393] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" HandleID="k8s-pod-network.36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-eth0" Nov 8 00:09:14.715229 containerd[1590]: 2025-11-08 00:09:14.697 [INFO][4393] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" HandleID="k8s-pod-network.36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-eth0" Nov 8 00:09:14.715229 containerd[1590]: 2025-11-08 00:09:14.702 [INFO][4393] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:14.715229 containerd[1590]: 2025-11-08 00:09:14.706 [INFO][4380] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" Nov 8 00:09:14.720790 containerd[1590]: time="2025-11-08T00:09:14.717094536Z" level=info msg="TearDown network for sandbox \"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6\" successfully" Nov 8 00:09:14.720790 containerd[1590]: time="2025-11-08T00:09:14.717144817Z" level=info msg="StopPodSandbox for \"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6\" returns successfully" Nov 8 00:09:14.723828 containerd[1590]: time="2025-11-08T00:09:14.722526569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c5874598-mj8jx,Uid:ffefce46-3638-4c95-bed3-200605f5f8d9,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:09:14.726528 systemd[1]: run-netns-cni\x2d6bb9d555\x2d15c9\x2d5c06\x2d89c0\x2dec1c875c04af.mount: Deactivated successfully. 
Nov 8 00:09:14.750192 containerd[1590]: 2025-11-08 00:09:14.659 [INFO][4381] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" Nov 8 00:09:14.750192 containerd[1590]: 2025-11-08 00:09:14.659 [INFO][4381] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" iface="eth0" netns="/var/run/netns/cni-258c8510-b03c-617d-7861-617d6e7da678" Nov 8 00:09:14.750192 containerd[1590]: 2025-11-08 00:09:14.659 [INFO][4381] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" iface="eth0" netns="/var/run/netns/cni-258c8510-b03c-617d-7861-617d6e7da678" Nov 8 00:09:14.750192 containerd[1590]: 2025-11-08 00:09:14.665 [INFO][4381] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" iface="eth0" netns="/var/run/netns/cni-258c8510-b03c-617d-7861-617d6e7da678" Nov 8 00:09:14.750192 containerd[1590]: 2025-11-08 00:09:14.665 [INFO][4381] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" Nov 8 00:09:14.750192 containerd[1590]: 2025-11-08 00:09:14.665 [INFO][4381] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" Nov 8 00:09:14.750192 containerd[1590]: 2025-11-08 00:09:14.693 [INFO][4399] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" HandleID="k8s-pod-network.949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-eth0" Nov 8 00:09:14.750192 containerd[1590]: 2025-11-08 00:09:14.694 
[INFO][4399] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:14.750192 containerd[1590]: 2025-11-08 00:09:14.702 [INFO][4399] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:14.750192 containerd[1590]: 2025-11-08 00:09:14.725 [WARNING][4399] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" HandleID="k8s-pod-network.949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-eth0" Nov 8 00:09:14.750192 containerd[1590]: 2025-11-08 00:09:14.725 [INFO][4399] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" HandleID="k8s-pod-network.949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-eth0" Nov 8 00:09:14.750192 containerd[1590]: 2025-11-08 00:09:14.733 [INFO][4399] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:14.750192 containerd[1590]: 2025-11-08 00:09:14.743 [INFO][4381] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" Nov 8 00:09:14.750192 containerd[1590]: time="2025-11-08T00:09:14.750054259Z" level=info msg="TearDown network for sandbox \"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7\" successfully" Nov 8 00:09:14.750192 containerd[1590]: time="2025-11-08T00:09:14.750086699Z" level=info msg="StopPodSandbox for \"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7\" returns successfully" Nov 8 00:09:14.752459 containerd[1590]: time="2025-11-08T00:09:14.751296876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bf6dfcdd-hptwz,Uid:02c3f902-7bf4-4824-923c-48ba4e1e389c,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:09:14.763123 systemd[1]: run-netns-cni\x2d258c8510\x2db03c\x2d617d\x2d7861\x2d617d6e7da678.mount: Deactivated successfully. Nov 8 00:09:14.903980 kernel: bpftool[4463]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:09:15.086439 systemd-networkd[1240]: cali1f20e48556f: Link UP Nov 8 00:09:15.086643 systemd-networkd[1240]: cali1f20e48556f: Gained carrier Nov 8 00:09:15.112902 containerd[1590]: 2025-11-08 00:09:14.824 [INFO][4417] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:09:15.112902 containerd[1590]: 2025-11-08 00:09:14.859 [INFO][4417] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-eth0 calico-apiserver-78c5874598- calico-apiserver ffefce46-3638-4c95-bed3-200605f5f8d9 942 0 2025-11-08 00:08:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78c5874598 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-3f5a11d2fe calico-apiserver-78c5874598-mj8jx eth0 calico-apiserver [] 
[] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1f20e48556f [] [] }} ContainerID="2e6d9c39ecc242d4b3c094aa51e4a807c3206e83f3feb47ccba4c37b4b365399" Namespace="calico-apiserver" Pod="calico-apiserver-78c5874598-mj8jx" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-" Nov 8 00:09:15.112902 containerd[1590]: 2025-11-08 00:09:14.859 [INFO][4417] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e6d9c39ecc242d4b3c094aa51e4a807c3206e83f3feb47ccba4c37b4b365399" Namespace="calico-apiserver" Pod="calico-apiserver-78c5874598-mj8jx" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-eth0" Nov 8 00:09:15.112902 containerd[1590]: 2025-11-08 00:09:14.984 [INFO][4450] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e6d9c39ecc242d4b3c094aa51e4a807c3206e83f3feb47ccba4c37b4b365399" HandleID="k8s-pod-network.2e6d9c39ecc242d4b3c094aa51e4a807c3206e83f3feb47ccba4c37b4b365399" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-eth0" Nov 8 00:09:15.112902 containerd[1590]: 2025-11-08 00:09:14.984 [INFO][4450] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2e6d9c39ecc242d4b3c094aa51e4a807c3206e83f3feb47ccba4c37b4b365399" HandleID="k8s-pod-network.2e6d9c39ecc242d4b3c094aa51e4a807c3206e83f3feb47ccba4c37b4b365399" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003be520), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-3f5a11d2fe", "pod":"calico-apiserver-78c5874598-mj8jx", "timestamp":"2025-11-08 00:09:14.984502886 +0000 UTC"}, Hostname:"ci-4081-3-6-n-3f5a11d2fe", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 
8 00:09:15.112902 containerd[1590]: 2025-11-08 00:09:14.984 [INFO][4450] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:15.112902 containerd[1590]: 2025-11-08 00:09:14.984 [INFO][4450] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:15.112902 containerd[1590]: 2025-11-08 00:09:14.984 [INFO][4450] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-3f5a11d2fe' Nov 8 00:09:15.112902 containerd[1590]: 2025-11-08 00:09:15.007 [INFO][4450] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e6d9c39ecc242d4b3c094aa51e4a807c3206e83f3feb47ccba4c37b4b365399" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:15.112902 containerd[1590]: 2025-11-08 00:09:15.018 [INFO][4450] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:15.112902 containerd[1590]: 2025-11-08 00:09:15.030 [INFO][4450] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:15.112902 containerd[1590]: 2025-11-08 00:09:15.040 [INFO][4450] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:15.112902 containerd[1590]: 2025-11-08 00:09:15.047 [INFO][4450] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:15.112902 containerd[1590]: 2025-11-08 00:09:15.047 [INFO][4450] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.2e6d9c39ecc242d4b3c094aa51e4a807c3206e83f3feb47ccba4c37b4b365399" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:15.112902 containerd[1590]: 2025-11-08 00:09:15.050 [INFO][4450] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2e6d9c39ecc242d4b3c094aa51e4a807c3206e83f3feb47ccba4c37b4b365399 Nov 8 00:09:15.112902 containerd[1590]: 2025-11-08 00:09:15.063 [INFO][4450] ipam/ipam.go 1246: Writing block in 
order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.2e6d9c39ecc242d4b3c094aa51e4a807c3206e83f3feb47ccba4c37b4b365399" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:15.112902 containerd[1590]: 2025-11-08 00:09:15.072 [INFO][4450] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.105.130/26] block=192.168.105.128/26 handle="k8s-pod-network.2e6d9c39ecc242d4b3c094aa51e4a807c3206e83f3feb47ccba4c37b4b365399" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:15.112902 containerd[1590]: 2025-11-08 00:09:15.072 [INFO][4450] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.130/26] handle="k8s-pod-network.2e6d9c39ecc242d4b3c094aa51e4a807c3206e83f3feb47ccba4c37b4b365399" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:15.112902 containerd[1590]: 2025-11-08 00:09:15.072 [INFO][4450] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:15.112902 containerd[1590]: 2025-11-08 00:09:15.072 [INFO][4450] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.130/26] IPv6=[] ContainerID="2e6d9c39ecc242d4b3c094aa51e4a807c3206e83f3feb47ccba4c37b4b365399" HandleID="k8s-pod-network.2e6d9c39ecc242d4b3c094aa51e4a807c3206e83f3feb47ccba4c37b4b365399" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-eth0" Nov 8 00:09:15.113683 containerd[1590]: 2025-11-08 00:09:15.081 [INFO][4417] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e6d9c39ecc242d4b3c094aa51e4a807c3206e83f3feb47ccba4c37b4b365399" Namespace="calico-apiserver" Pod="calico-apiserver-78c5874598-mj8jx" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-eth0", GenerateName:"calico-apiserver-78c5874598-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"ffefce46-3638-4c95-bed3-200605f5f8d9", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78c5874598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"", Pod:"calico-apiserver-78c5874598-mj8jx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f20e48556f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:15.113683 containerd[1590]: 2025-11-08 00:09:15.081 [INFO][4417] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.130/32] ContainerID="2e6d9c39ecc242d4b3c094aa51e4a807c3206e83f3feb47ccba4c37b4b365399" Namespace="calico-apiserver" Pod="calico-apiserver-78c5874598-mj8jx" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-eth0" Nov 8 00:09:15.113683 containerd[1590]: 2025-11-08 00:09:15.081 [INFO][4417] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f20e48556f ContainerID="2e6d9c39ecc242d4b3c094aa51e4a807c3206e83f3feb47ccba4c37b4b365399" Namespace="calico-apiserver" Pod="calico-apiserver-78c5874598-mj8jx" 
WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-eth0" Nov 8 00:09:15.113683 containerd[1590]: 2025-11-08 00:09:15.082 [INFO][4417] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e6d9c39ecc242d4b3c094aa51e4a807c3206e83f3feb47ccba4c37b4b365399" Namespace="calico-apiserver" Pod="calico-apiserver-78c5874598-mj8jx" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-eth0" Nov 8 00:09:15.113683 containerd[1590]: 2025-11-08 00:09:15.083 [INFO][4417] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2e6d9c39ecc242d4b3c094aa51e4a807c3206e83f3feb47ccba4c37b4b365399" Namespace="calico-apiserver" Pod="calico-apiserver-78c5874598-mj8jx" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-eth0", GenerateName:"calico-apiserver-78c5874598-", Namespace:"calico-apiserver", SelfLink:"", UID:"ffefce46-3638-4c95-bed3-200605f5f8d9", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78c5874598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", 
ContainerID:"2e6d9c39ecc242d4b3c094aa51e4a807c3206e83f3feb47ccba4c37b4b365399", Pod:"calico-apiserver-78c5874598-mj8jx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f20e48556f", MAC:"6a:58:8a:68:08:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:15.113683 containerd[1590]: 2025-11-08 00:09:15.106 [INFO][4417] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e6d9c39ecc242d4b3c094aa51e4a807c3206e83f3feb47ccba4c37b4b365399" Namespace="calico-apiserver" Pod="calico-apiserver-78c5874598-mj8jx" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-eth0" Nov 8 00:09:15.163288 containerd[1590]: time="2025-11-08T00:09:15.158031084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:09:15.163288 containerd[1590]: time="2025-11-08T00:09:15.158166886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:09:15.163288 containerd[1590]: time="2025-11-08T00:09:15.158185366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:15.163288 containerd[1590]: time="2025-11-08T00:09:15.158515171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:15.228885 systemd-networkd[1240]: cali43a17ef558a: Link UP Nov 8 00:09:15.233855 systemd-networkd[1240]: cali43a17ef558a: Gained carrier Nov 8 00:09:15.273527 containerd[1590]: 2025-11-08 00:09:14.943 [INFO][4438] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-eth0 calico-apiserver-77bf6dfcdd- calico-apiserver 02c3f902-7bf4-4824-923c-48ba4e1e389c 943 0 2025-11-08 00:08:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77bf6dfcdd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-3f5a11d2fe calico-apiserver-77bf6dfcdd-hptwz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali43a17ef558a [] [] }} ContainerID="cc4e3aa6bc30c24865e7d35120c8329b762cd0c046971cef0e915795de6b22e9" Namespace="calico-apiserver" Pod="calico-apiserver-77bf6dfcdd-hptwz" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-" Nov 8 00:09:15.273527 containerd[1590]: 2025-11-08 00:09:14.945 [INFO][4438] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc4e3aa6bc30c24865e7d35120c8329b762cd0c046971cef0e915795de6b22e9" Namespace="calico-apiserver" Pod="calico-apiserver-77bf6dfcdd-hptwz" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-eth0" Nov 8 00:09:15.273527 containerd[1590]: 2025-11-08 00:09:15.054 [INFO][4471] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc4e3aa6bc30c24865e7d35120c8329b762cd0c046971cef0e915795de6b22e9" HandleID="k8s-pod-network.cc4e3aa6bc30c24865e7d35120c8329b762cd0c046971cef0e915795de6b22e9" 
Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-eth0" Nov 8 00:09:15.273527 containerd[1590]: 2025-11-08 00:09:15.058 [INFO][4471] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cc4e3aa6bc30c24865e7d35120c8329b762cd0c046971cef0e915795de6b22e9" HandleID="k8s-pod-network.cc4e3aa6bc30c24865e7d35120c8329b762cd0c046971cef0e915795de6b22e9" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b200), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-3f5a11d2fe", "pod":"calico-apiserver-77bf6dfcdd-hptwz", "timestamp":"2025-11-08 00:09:15.054673825 +0000 UTC"}, Hostname:"ci-4081-3-6-n-3f5a11d2fe", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:09:15.273527 containerd[1590]: 2025-11-08 00:09:15.059 [INFO][4471] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:15.273527 containerd[1590]: 2025-11-08 00:09:15.073 [INFO][4471] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:09:15.273527 containerd[1590]: 2025-11-08 00:09:15.073 [INFO][4471] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-3f5a11d2fe' Nov 8 00:09:15.273527 containerd[1590]: 2025-11-08 00:09:15.106 [INFO][4471] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc4e3aa6bc30c24865e7d35120c8329b762cd0c046971cef0e915795de6b22e9" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:15.273527 containerd[1590]: 2025-11-08 00:09:15.124 [INFO][4471] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:15.273527 containerd[1590]: 2025-11-08 00:09:15.138 [INFO][4471] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:15.273527 containerd[1590]: 2025-11-08 00:09:15.146 [INFO][4471] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:15.273527 containerd[1590]: 2025-11-08 00:09:15.156 [INFO][4471] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:15.273527 containerd[1590]: 2025-11-08 00:09:15.158 [INFO][4471] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.cc4e3aa6bc30c24865e7d35120c8329b762cd0c046971cef0e915795de6b22e9" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:15.273527 containerd[1590]: 2025-11-08 00:09:15.168 [INFO][4471] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cc4e3aa6bc30c24865e7d35120c8329b762cd0c046971cef0e915795de6b22e9 Nov 8 00:09:15.273527 containerd[1590]: 2025-11-08 00:09:15.179 [INFO][4471] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.cc4e3aa6bc30c24865e7d35120c8329b762cd0c046971cef0e915795de6b22e9" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:15.273527 containerd[1590]: 2025-11-08 00:09:15.189 [INFO][4471] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.105.131/26] block=192.168.105.128/26 handle="k8s-pod-network.cc4e3aa6bc30c24865e7d35120c8329b762cd0c046971cef0e915795de6b22e9" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:15.273527 containerd[1590]: 2025-11-08 00:09:15.189 [INFO][4471] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.131/26] handle="k8s-pod-network.cc4e3aa6bc30c24865e7d35120c8329b762cd0c046971cef0e915795de6b22e9" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:15.273527 containerd[1590]: 2025-11-08 00:09:15.189 [INFO][4471] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:15.273527 containerd[1590]: 2025-11-08 00:09:15.190 [INFO][4471] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.131/26] IPv6=[] ContainerID="cc4e3aa6bc30c24865e7d35120c8329b762cd0c046971cef0e915795de6b22e9" HandleID="k8s-pod-network.cc4e3aa6bc30c24865e7d35120c8329b762cd0c046971cef0e915795de6b22e9" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-eth0" Nov 8 00:09:15.274196 containerd[1590]: 2025-11-08 00:09:15.213 [INFO][4438] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc4e3aa6bc30c24865e7d35120c8329b762cd0c046971cef0e915795de6b22e9" Namespace="calico-apiserver" Pod="calico-apiserver-77bf6dfcdd-hptwz" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-eth0", GenerateName:"calico-apiserver-77bf6dfcdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"02c3f902-7bf4-4824-923c-48ba4e1e389c", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77bf6dfcdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"", Pod:"calico-apiserver-77bf6dfcdd-hptwz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43a17ef558a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:15.274196 containerd[1590]: 2025-11-08 00:09:15.213 [INFO][4438] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.131/32] ContainerID="cc4e3aa6bc30c24865e7d35120c8329b762cd0c046971cef0e915795de6b22e9" Namespace="calico-apiserver" Pod="calico-apiserver-77bf6dfcdd-hptwz" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-eth0" Nov 8 00:09:15.274196 containerd[1590]: 2025-11-08 00:09:15.213 [INFO][4438] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43a17ef558a ContainerID="cc4e3aa6bc30c24865e7d35120c8329b762cd0c046971cef0e915795de6b22e9" Namespace="calico-apiserver" Pod="calico-apiserver-77bf6dfcdd-hptwz" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-eth0" Nov 8 00:09:15.274196 containerd[1590]: 2025-11-08 00:09:15.233 [INFO][4438] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc4e3aa6bc30c24865e7d35120c8329b762cd0c046971cef0e915795de6b22e9" Namespace="calico-apiserver" 
Pod="calico-apiserver-77bf6dfcdd-hptwz" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-eth0" Nov 8 00:09:15.274196 containerd[1590]: 2025-11-08 00:09:15.234 [INFO][4438] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc4e3aa6bc30c24865e7d35120c8329b762cd0c046971cef0e915795de6b22e9" Namespace="calico-apiserver" Pod="calico-apiserver-77bf6dfcdd-hptwz" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-eth0", GenerateName:"calico-apiserver-77bf6dfcdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"02c3f902-7bf4-4824-923c-48ba4e1e389c", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77bf6dfcdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"cc4e3aa6bc30c24865e7d35120c8329b762cd0c046971cef0e915795de6b22e9", Pod:"calico-apiserver-77bf6dfcdd-hptwz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali43a17ef558a", MAC:"7e:df:e7:94:e1:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:15.274196 containerd[1590]: 2025-11-08 00:09:15.263 [INFO][4438] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc4e3aa6bc30c24865e7d35120c8329b762cd0c046971cef0e915795de6b22e9" Namespace="calico-apiserver" Pod="calico-apiserver-77bf6dfcdd-hptwz" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-eth0" Nov 8 00:09:15.284904 containerd[1590]: time="2025-11-08T00:09:15.284646631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c5874598-mj8jx,Uid:ffefce46-3638-4c95-bed3-200605f5f8d9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2e6d9c39ecc242d4b3c094aa51e4a807c3206e83f3feb47ccba4c37b4b365399\"" Nov 8 00:09:15.308108 containerd[1590]: time="2025-11-08T00:09:15.307653796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:09:15.308108 containerd[1590]: time="2025-11-08T00:09:15.307725837Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:09:15.308108 containerd[1590]: time="2025-11-08T00:09:15.307743037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:15.308108 containerd[1590]: time="2025-11-08T00:09:15.307831678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:15.317505 containerd[1590]: time="2025-11-08T00:09:15.315664789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:09:15.400849 systemd-networkd[1240]: vxlan.calico: Link UP Nov 8 00:09:15.400864 systemd-networkd[1240]: vxlan.calico: Gained carrier Nov 8 00:09:15.409094 containerd[1590]: time="2025-11-08T00:09:15.408923065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bf6dfcdd-hptwz,Uid:02c3f902-7bf4-4824-923c-48ba4e1e389c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"cc4e3aa6bc30c24865e7d35120c8329b762cd0c046971cef0e915795de6b22e9\"" Nov 8 00:09:15.719871 containerd[1590]: time="2025-11-08T00:09:15.719737971Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:15.723018 containerd[1590]: time="2025-11-08T00:09:15.722784934Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:09:15.723018 containerd[1590]: time="2025-11-08T00:09:15.722911016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:09:15.725414 kubelet[2774]: E1108 00:09:15.725362 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:15.726584 kubelet[2774]: E1108 00:09:15.725859 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:15.727108 containerd[1590]: time="2025-11-08T00:09:15.726980953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:09:15.741566 kubelet[2774]: E1108 00:09:15.741508 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lhdmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78c5874598-mj8jx_calico-apiserver(ffefce46-3638-4c95-bed3-200605f5f8d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:15.743011 kubelet[2774]: E1108 00:09:15.742941 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-mj8jx" podUID="ffefce46-3638-4c95-bed3-200605f5f8d9" Nov 8 00:09:15.867696 kubelet[2774]: E1108 00:09:15.867129 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-mj8jx" podUID="ffefce46-3638-4c95-bed3-200605f5f8d9" Nov 8 00:09:16.106369 containerd[1590]: time="2025-11-08T00:09:16.106209056Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:16.107872 containerd[1590]: time="2025-11-08T00:09:16.107760039Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:09:16.108049 containerd[1590]: time="2025-11-08T00:09:16.107877041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:09:16.108230 kubelet[2774]: E1108 00:09:16.108163 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:16.108315 kubelet[2774]: E1108 00:09:16.108241 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 
00:09:16.108477 kubelet[2774]: E1108 00:09:16.108380 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-94xpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77bf6dfcdd-hptwz_calico-apiserver(02c3f902-7bf4-4824-923c-48ba4e1e389c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:16.110568 kubelet[2774]: E1108 00:09:16.110477 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf6dfcdd-hptwz" podUID="02c3f902-7bf4-4824-923c-48ba4e1e389c" Nov 8 00:09:16.479992 systemd-networkd[1240]: cali1f20e48556f: Gained IPv6LL Nov 8 00:09:16.571627 containerd[1590]: time="2025-11-08T00:09:16.571387491Z" level=info msg="StopPodSandbox for 
\"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04\"" Nov 8 00:09:16.571984 containerd[1590]: time="2025-11-08T00:09:16.571486852Z" level=info msg="StopPodSandbox for \"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c\"" Nov 8 00:09:16.573463 containerd[1590]: time="2025-11-08T00:09:16.571567373Z" level=info msg="StopPodSandbox for \"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed\"" Nov 8 00:09:16.606260 systemd-networkd[1240]: cali43a17ef558a: Gained IPv6LL Nov 8 00:09:16.830782 containerd[1590]: 2025-11-08 00:09:16.689 [INFO][4705] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" Nov 8 00:09:16.830782 containerd[1590]: 2025-11-08 00:09:16.691 [INFO][4705] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" iface="eth0" netns="/var/run/netns/cni-e6c8f285-1232-49cd-fb21-ab1d79691bca" Nov 8 00:09:16.830782 containerd[1590]: 2025-11-08 00:09:16.694 [INFO][4705] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" iface="eth0" netns="/var/run/netns/cni-e6c8f285-1232-49cd-fb21-ab1d79691bca" Nov 8 00:09:16.830782 containerd[1590]: 2025-11-08 00:09:16.700 [INFO][4705] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" iface="eth0" netns="/var/run/netns/cni-e6c8f285-1232-49cd-fb21-ab1d79691bca" Nov 8 00:09:16.830782 containerd[1590]: 2025-11-08 00:09:16.701 [INFO][4705] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" Nov 8 00:09:16.830782 containerd[1590]: 2025-11-08 00:09:16.701 [INFO][4705] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" Nov 8 00:09:16.830782 containerd[1590]: 2025-11-08 00:09:16.808 [INFO][4727] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" HandleID="k8s-pod-network.fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-eth0" Nov 8 00:09:16.830782 containerd[1590]: 2025-11-08 00:09:16.808 [INFO][4727] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:16.830782 containerd[1590]: 2025-11-08 00:09:16.808 [INFO][4727] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:16.830782 containerd[1590]: 2025-11-08 00:09:16.820 [WARNING][4727] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" HandleID="k8s-pod-network.fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-eth0" Nov 8 00:09:16.830782 containerd[1590]: 2025-11-08 00:09:16.820 [INFO][4727] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" HandleID="k8s-pod-network.fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-eth0" Nov 8 00:09:16.830782 containerd[1590]: 2025-11-08 00:09:16.824 [INFO][4727] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:16.830782 containerd[1590]: 2025-11-08 00:09:16.828 [INFO][4705] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" Nov 8 00:09:16.830782 containerd[1590]: time="2025-11-08T00:09:16.830245116Z" level=info msg="TearDown network for sandbox \"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c\" successfully" Nov 8 00:09:16.830782 containerd[1590]: time="2025-11-08T00:09:16.830296997Z" level=info msg="StopPodSandbox for \"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c\" returns successfully" Nov 8 00:09:16.838324 systemd[1]: run-netns-cni\x2de6c8f285\x2d1232\x2d49cd\x2dfb21\x2dab1d79691bca.mount: Deactivated successfully. 
Nov 8 00:09:16.851235 containerd[1590]: time="2025-11-08T00:09:16.851194346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tcqxl,Uid:9506755c-a0f4-47f2-b269-7090f44df783,Namespace:kube-system,Attempt:1,}" Nov 8 00:09:16.854160 containerd[1590]: 2025-11-08 00:09:16.694 [INFO][4710] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" Nov 8 00:09:16.854160 containerd[1590]: 2025-11-08 00:09:16.694 [INFO][4710] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" iface="eth0" netns="/var/run/netns/cni-b4f6a5ea-434a-9eff-7d90-b4a24136e754" Nov 8 00:09:16.854160 containerd[1590]: 2025-11-08 00:09:16.703 [INFO][4710] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" iface="eth0" netns="/var/run/netns/cni-b4f6a5ea-434a-9eff-7d90-b4a24136e754" Nov 8 00:09:16.854160 containerd[1590]: 2025-11-08 00:09:16.705 [INFO][4710] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" iface="eth0" netns="/var/run/netns/cni-b4f6a5ea-434a-9eff-7d90-b4a24136e754" Nov 8 00:09:16.854160 containerd[1590]: 2025-11-08 00:09:16.705 [INFO][4710] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" Nov 8 00:09:16.854160 containerd[1590]: 2025-11-08 00:09:16.705 [INFO][4710] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" Nov 8 00:09:16.854160 containerd[1590]: 2025-11-08 00:09:16.811 [INFO][4729] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" HandleID="k8s-pod-network.4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-eth0" Nov 8 00:09:16.854160 containerd[1590]: 2025-11-08 00:09:16.811 [INFO][4729] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:16.854160 containerd[1590]: 2025-11-08 00:09:16.824 [INFO][4729] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:16.854160 containerd[1590]: 2025-11-08 00:09:16.842 [WARNING][4729] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" HandleID="k8s-pod-network.4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-eth0" Nov 8 00:09:16.854160 containerd[1590]: 2025-11-08 00:09:16.843 [INFO][4729] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" HandleID="k8s-pod-network.4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-eth0" Nov 8 00:09:16.854160 containerd[1590]: 2025-11-08 00:09:16.846 [INFO][4729] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:16.854160 containerd[1590]: 2025-11-08 00:09:16.852 [INFO][4710] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" Nov 8 00:09:16.855419 containerd[1590]: time="2025-11-08T00:09:16.854916121Z" level=info msg="TearDown network for sandbox \"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed\" successfully" Nov 8 00:09:16.855704 containerd[1590]: time="2025-11-08T00:09:16.855661612Z" level=info msg="StopPodSandbox for \"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed\" returns successfully" Nov 8 00:09:16.860656 containerd[1590]: time="2025-11-08T00:09:16.860292801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c5874598-gtq72,Uid:7ec2eb36-2470-4386-96a3-fe6dd8fc602f,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:09:16.861274 systemd[1]: run-netns-cni\x2db4f6a5ea\x2d434a\x2d9eff\x2d7d90\x2db4a24136e754.mount: Deactivated successfully. 
Nov 8 00:09:16.878203 containerd[1590]: 2025-11-08 00:09:16.710 [INFO][4709] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" Nov 8 00:09:16.878203 containerd[1590]: 2025-11-08 00:09:16.710 [INFO][4709] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" iface="eth0" netns="/var/run/netns/cni-7657eac2-1625-90d3-e633-fc782329f5e4" Nov 8 00:09:16.878203 containerd[1590]: 2025-11-08 00:09:16.711 [INFO][4709] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" iface="eth0" netns="/var/run/netns/cni-7657eac2-1625-90d3-e633-fc782329f5e4" Nov 8 00:09:16.878203 containerd[1590]: 2025-11-08 00:09:16.722 [INFO][4709] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" iface="eth0" netns="/var/run/netns/cni-7657eac2-1625-90d3-e633-fc782329f5e4" Nov 8 00:09:16.878203 containerd[1590]: 2025-11-08 00:09:16.722 [INFO][4709] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" Nov 8 00:09:16.878203 containerd[1590]: 2025-11-08 00:09:16.722 [INFO][4709] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" Nov 8 00:09:16.878203 containerd[1590]: 2025-11-08 00:09:16.822 [INFO][4732] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" HandleID="k8s-pod-network.b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-eth0" Nov 8 00:09:16.878203 containerd[1590]: 2025-11-08 00:09:16.823 
[INFO][4732] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:16.878203 containerd[1590]: 2025-11-08 00:09:16.846 [INFO][4732] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:16.878203 containerd[1590]: 2025-11-08 00:09:16.864 [WARNING][4732] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" HandleID="k8s-pod-network.b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-eth0" Nov 8 00:09:16.878203 containerd[1590]: 2025-11-08 00:09:16.867 [INFO][4732] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" HandleID="k8s-pod-network.b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-eth0" Nov 8 00:09:16.878203 containerd[1590]: 2025-11-08 00:09:16.870 [INFO][4732] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:16.878203 containerd[1590]: 2025-11-08 00:09:16.873 [INFO][4709] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" Nov 8 00:09:16.883095 containerd[1590]: time="2025-11-08T00:09:16.879567525Z" level=info msg="TearDown network for sandbox \"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04\" successfully" Nov 8 00:09:16.883095 containerd[1590]: time="2025-11-08T00:09:16.879621006Z" level=info msg="StopPodSandbox for \"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04\" returns successfully" Nov 8 00:09:16.885352 systemd[1]: run-netns-cni\x2d7657eac2\x2d1625\x2d90d3\x2de633\x2dfc782329f5e4.mount: Deactivated successfully. 
Nov 8 00:09:16.898008 containerd[1590]: time="2025-11-08T00:09:16.897900396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54b5cccc46-bcg68,Uid:5c488e78-d3ba-4197-ab37-75734ccb9129,Namespace:calico-system,Attempt:1,}" Nov 8 00:09:16.899199 kubelet[2774]: E1108 00:09:16.899161 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf6dfcdd-hptwz" podUID="02c3f902-7bf4-4824-923c-48ba4e1e389c" Nov 8 00:09:16.900543 kubelet[2774]: E1108 00:09:16.899417 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-mj8jx" podUID="ffefce46-3638-4c95-bed3-200605f5f8d9" Nov 8 00:09:17.157438 systemd-networkd[1240]: calie9d8edad833: Link UP Nov 8 00:09:17.157666 systemd-networkd[1240]: calie9d8edad833: Gained carrier Nov 8 00:09:17.182515 containerd[1590]: 2025-11-08 00:09:16.999 [INFO][4748] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-eth0 coredns-668d6bf9bc- kube-system 9506755c-a0f4-47f2-b269-7090f44df783 
968 0 2025-11-08 00:08:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-3f5a11d2fe coredns-668d6bf9bc-tcqxl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie9d8edad833 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc" Namespace="kube-system" Pod="coredns-668d6bf9bc-tcqxl" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-" Nov 8 00:09:17.182515 containerd[1590]: 2025-11-08 00:09:16.999 [INFO][4748] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc" Namespace="kube-system" Pod="coredns-668d6bf9bc-tcqxl" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-eth0" Nov 8 00:09:17.182515 containerd[1590]: 2025-11-08 00:09:17.070 [INFO][4785] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc" HandleID="k8s-pod-network.95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-eth0" Nov 8 00:09:17.182515 containerd[1590]: 2025-11-08 00:09:17.070 [INFO][4785] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc" HandleID="k8s-pod-network.95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d38e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-3f5a11d2fe", "pod":"coredns-668d6bf9bc-tcqxl", "timestamp":"2025-11-08 00:09:17.070268028 +0000 
UTC"}, Hostname:"ci-4081-3-6-n-3f5a11d2fe", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:09:17.182515 containerd[1590]: 2025-11-08 00:09:17.070 [INFO][4785] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:17.182515 containerd[1590]: 2025-11-08 00:09:17.070 [INFO][4785] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:17.182515 containerd[1590]: 2025-11-08 00:09:17.070 [INFO][4785] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-3f5a11d2fe' Nov 8 00:09:17.182515 containerd[1590]: 2025-11-08 00:09:17.084 [INFO][4785] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.182515 containerd[1590]: 2025-11-08 00:09:17.100 [INFO][4785] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.182515 containerd[1590]: 2025-11-08 00:09:17.110 [INFO][4785] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.182515 containerd[1590]: 2025-11-08 00:09:17.114 [INFO][4785] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.182515 containerd[1590]: 2025-11-08 00:09:17.118 [INFO][4785] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.182515 containerd[1590]: 2025-11-08 00:09:17.118 [INFO][4785] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.182515 containerd[1590]: 2025-11-08 00:09:17.121 
[INFO][4785] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc Nov 8 00:09:17.182515 containerd[1590]: 2025-11-08 00:09:17.128 [INFO][4785] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.182515 containerd[1590]: 2025-11-08 00:09:17.144 [INFO][4785] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.105.132/26] block=192.168.105.128/26 handle="k8s-pod-network.95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.182515 containerd[1590]: 2025-11-08 00:09:17.144 [INFO][4785] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.132/26] handle="k8s-pod-network.95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.182515 containerd[1590]: 2025-11-08 00:09:17.144 [INFO][4785] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:09:17.182515 containerd[1590]: 2025-11-08 00:09:17.144 [INFO][4785] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.132/26] IPv6=[] ContainerID="95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc" HandleID="k8s-pod-network.95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-eth0" Nov 8 00:09:17.183946 containerd[1590]: 2025-11-08 00:09:17.152 [INFO][4748] cni-plugin/k8s.go 418: Populated endpoint ContainerID="95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc" Namespace="kube-system" Pod="coredns-668d6bf9bc-tcqxl" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9506755c-a0f4-47f2-b269-7090f44df783", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"", Pod:"coredns-668d6bf9bc-tcqxl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calie9d8edad833", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:17.183946 containerd[1590]: 2025-11-08 00:09:17.152 [INFO][4748] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.132/32] ContainerID="95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc" Namespace="kube-system" Pod="coredns-668d6bf9bc-tcqxl" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-eth0" Nov 8 00:09:17.183946 containerd[1590]: 2025-11-08 00:09:17.152 [INFO][4748] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9d8edad833 ContainerID="95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc" Namespace="kube-system" Pod="coredns-668d6bf9bc-tcqxl" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-eth0" Nov 8 00:09:17.183946 containerd[1590]: 2025-11-08 00:09:17.159 [INFO][4748] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc" Namespace="kube-system" Pod="coredns-668d6bf9bc-tcqxl" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-eth0" Nov 8 00:09:17.183946 containerd[1590]: 2025-11-08 00:09:17.161 [INFO][4748] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc" Namespace="kube-system" Pod="coredns-668d6bf9bc-tcqxl" 
WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9506755c-a0f4-47f2-b269-7090f44df783", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc", Pod:"coredns-668d6bf9bc-tcqxl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9d8edad833", MAC:"2a:52:2b:e1:30:48", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:17.183946 
containerd[1590]: 2025-11-08 00:09:17.175 [INFO][4748] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc" Namespace="kube-system" Pod="coredns-668d6bf9bc-tcqxl" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-eth0" Nov 8 00:09:17.210799 containerd[1590]: time="2025-11-08T00:09:17.210450631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:09:17.210799 containerd[1590]: time="2025-11-08T00:09:17.210510712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:09:17.210799 containerd[1590]: time="2025-11-08T00:09:17.210537992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:17.210799 containerd[1590]: time="2025-11-08T00:09:17.210623193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:17.278305 systemd-networkd[1240]: cali583f5271cb9: Link UP Nov 8 00:09:17.278727 systemd-networkd[1240]: cali583f5271cb9: Gained carrier Nov 8 00:09:17.294029 containerd[1590]: time="2025-11-08T00:09:17.292678699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tcqxl,Uid:9506755c-a0f4-47f2-b269-7090f44df783,Namespace:kube-system,Attempt:1,} returns sandbox id \"95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc\"" Nov 8 00:09:17.298227 containerd[1590]: time="2025-11-08T00:09:17.298174344Z" level=info msg="CreateContainer within sandbox \"95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:09:17.310103 systemd-networkd[1240]: vxlan.calico: Gained IPv6LL Nov 8 00:09:17.322049 containerd[1590]: 2025-11-08 00:09:17.038 [INFO][4758] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-eth0 calico-kube-controllers-54b5cccc46- calico-system 5c488e78-d3ba-4197-ab37-75734ccb9129 970 0 2025-11-08 00:08:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54b5cccc46 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-3f5a11d2fe calico-kube-controllers-54b5cccc46-bcg68 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali583f5271cb9 [] [] }} ContainerID="a4a953cca3be936efbe327756bbc753349496b123b27490eaf8abd7637738d40" Namespace="calico-system" Pod="calico-kube-controllers-54b5cccc46-bcg68" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-" Nov 8 00:09:17.322049 containerd[1590]: 
2025-11-08 00:09:17.038 [INFO][4758] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4a953cca3be936efbe327756bbc753349496b123b27490eaf8abd7637738d40" Namespace="calico-system" Pod="calico-kube-controllers-54b5cccc46-bcg68" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-eth0" Nov 8 00:09:17.322049 containerd[1590]: 2025-11-08 00:09:17.106 [INFO][4794] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4a953cca3be936efbe327756bbc753349496b123b27490eaf8abd7637738d40" HandleID="k8s-pod-network.a4a953cca3be936efbe327756bbc753349496b123b27490eaf8abd7637738d40" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-eth0" Nov 8 00:09:17.322049 containerd[1590]: 2025-11-08 00:09:17.107 [INFO][4794] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a4a953cca3be936efbe327756bbc753349496b123b27490eaf8abd7637738d40" HandleID="k8s-pod-network.a4a953cca3be936efbe327756bbc753349496b123b27490eaf8abd7637738d40" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3a30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-3f5a11d2fe", "pod":"calico-kube-controllers-54b5cccc46-bcg68", "timestamp":"2025-11-08 00:09:17.106765991 +0000 UTC"}, Hostname:"ci-4081-3-6-n-3f5a11d2fe", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:09:17.322049 containerd[1590]: 2025-11-08 00:09:17.107 [INFO][4794] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:17.322049 containerd[1590]: 2025-11-08 00:09:17.144 [INFO][4794] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:09:17.322049 containerd[1590]: 2025-11-08 00:09:17.147 [INFO][4794] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-3f5a11d2fe' Nov 8 00:09:17.322049 containerd[1590]: 2025-11-08 00:09:17.186 [INFO][4794] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a4a953cca3be936efbe327756bbc753349496b123b27490eaf8abd7637738d40" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.322049 containerd[1590]: 2025-11-08 00:09:17.200 [INFO][4794] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.322049 containerd[1590]: 2025-11-08 00:09:17.213 [INFO][4794] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.322049 containerd[1590]: 2025-11-08 00:09:17.226 [INFO][4794] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.322049 containerd[1590]: 2025-11-08 00:09:17.234 [INFO][4794] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.322049 containerd[1590]: 2025-11-08 00:09:17.234 [INFO][4794] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.a4a953cca3be936efbe327756bbc753349496b123b27490eaf8abd7637738d40" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.322049 containerd[1590]: 2025-11-08 00:09:17.240 [INFO][4794] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a4a953cca3be936efbe327756bbc753349496b123b27490eaf8abd7637738d40 Nov 8 00:09:17.322049 containerd[1590]: 2025-11-08 00:09:17.254 [INFO][4794] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.a4a953cca3be936efbe327756bbc753349496b123b27490eaf8abd7637738d40" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.322049 containerd[1590]: 2025-11-08 00:09:17.264 [INFO][4794] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.105.133/26] block=192.168.105.128/26 handle="k8s-pod-network.a4a953cca3be936efbe327756bbc753349496b123b27490eaf8abd7637738d40" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.322049 containerd[1590]: 2025-11-08 00:09:17.264 [INFO][4794] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.133/26] handle="k8s-pod-network.a4a953cca3be936efbe327756bbc753349496b123b27490eaf8abd7637738d40" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.322049 containerd[1590]: 2025-11-08 00:09:17.264 [INFO][4794] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:17.322049 containerd[1590]: 2025-11-08 00:09:17.265 [INFO][4794] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.133/26] IPv6=[] ContainerID="a4a953cca3be936efbe327756bbc753349496b123b27490eaf8abd7637738d40" HandleID="k8s-pod-network.a4a953cca3be936efbe327756bbc753349496b123b27490eaf8abd7637738d40" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-eth0" Nov 8 00:09:17.322778 containerd[1590]: 2025-11-08 00:09:17.273 [INFO][4758] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4a953cca3be936efbe327756bbc753349496b123b27490eaf8abd7637738d40" Namespace="calico-system" Pod="calico-kube-controllers-54b5cccc46-bcg68" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-eth0", GenerateName:"calico-kube-controllers-54b5cccc46-", Namespace:"calico-system", SelfLink:"", UID:"5c488e78-d3ba-4197-ab37-75734ccb9129", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54b5cccc46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"", Pod:"calico-kube-controllers-54b5cccc46-bcg68", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali583f5271cb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:17.322778 containerd[1590]: 2025-11-08 00:09:17.273 [INFO][4758] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.133/32] ContainerID="a4a953cca3be936efbe327756bbc753349496b123b27490eaf8abd7637738d40" Namespace="calico-system" Pod="calico-kube-controllers-54b5cccc46-bcg68" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-eth0" Nov 8 00:09:17.322778 containerd[1590]: 2025-11-08 00:09:17.273 [INFO][4758] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali583f5271cb9 ContainerID="a4a953cca3be936efbe327756bbc753349496b123b27490eaf8abd7637738d40" Namespace="calico-system" Pod="calico-kube-controllers-54b5cccc46-bcg68" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-eth0" Nov 8 00:09:17.322778 containerd[1590]: 2025-11-08 00:09:17.278 [INFO][4758] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="a4a953cca3be936efbe327756bbc753349496b123b27490eaf8abd7637738d40" Namespace="calico-system" Pod="calico-kube-controllers-54b5cccc46-bcg68" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-eth0" Nov 8 00:09:17.322778 containerd[1590]: 2025-11-08 00:09:17.279 [INFO][4758] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a4a953cca3be936efbe327756bbc753349496b123b27490eaf8abd7637738d40" Namespace="calico-system" Pod="calico-kube-controllers-54b5cccc46-bcg68" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-eth0", GenerateName:"calico-kube-controllers-54b5cccc46-", Namespace:"calico-system", SelfLink:"", UID:"5c488e78-d3ba-4197-ab37-75734ccb9129", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54b5cccc46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"a4a953cca3be936efbe327756bbc753349496b123b27490eaf8abd7637738d40", Pod:"calico-kube-controllers-54b5cccc46-bcg68", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.133/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali583f5271cb9", MAC:"ea:1a:10:d6:7c:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:17.322778 containerd[1590]: 2025-11-08 00:09:17.299 [INFO][4758] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a4a953cca3be936efbe327756bbc753349496b123b27490eaf8abd7637738d40" Namespace="calico-system" Pod="calico-kube-controllers-54b5cccc46-bcg68" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-eth0" Nov 8 00:09:17.360729 containerd[1590]: time="2025-11-08T00:09:17.360676028Z" level=info msg="CreateContainer within sandbox \"95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ead45b1787a965c7b2e832169fb43629d0fe262a400c77f042b5f4f2f3615e7b\"" Nov 8 00:09:17.362479 containerd[1590]: time="2025-11-08T00:09:17.361791765Z" level=info msg="StartContainer for \"ead45b1787a965c7b2e832169fb43629d0fe262a400c77f042b5f4f2f3615e7b\"" Nov 8 00:09:17.364212 containerd[1590]: time="2025-11-08T00:09:17.362423175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:09:17.364212 containerd[1590]: time="2025-11-08T00:09:17.362605618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:09:17.364212 containerd[1590]: time="2025-11-08T00:09:17.362845101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:17.365490 containerd[1590]: time="2025-11-08T00:09:17.365388941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:17.374396 systemd-networkd[1240]: calibe0a6c27343: Link UP Nov 8 00:09:17.384676 systemd-networkd[1240]: calibe0a6c27343: Gained carrier Nov 8 00:09:17.418251 containerd[1590]: 2025-11-08 00:09:17.045 [INFO][4772] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-eth0 calico-apiserver-78c5874598- calico-apiserver 7ec2eb36-2470-4386-96a3-fe6dd8fc602f 969 0 2025-11-08 00:08:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78c5874598 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-3f5a11d2fe calico-apiserver-78c5874598-gtq72 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibe0a6c27343 [] [] }} ContainerID="4fcfe846c0a21f78a890100e943d899dd74c3ab1876ad66f51643d8a65b169f3" Namespace="calico-apiserver" Pod="calico-apiserver-78c5874598-gtq72" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-" Nov 8 00:09:17.418251 containerd[1590]: 2025-11-08 00:09:17.045 [INFO][4772] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4fcfe846c0a21f78a890100e943d899dd74c3ab1876ad66f51643d8a65b169f3" Namespace="calico-apiserver" Pod="calico-apiserver-78c5874598-gtq72" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-eth0" Nov 8 00:09:17.418251 containerd[1590]: 2025-11-08 00:09:17.107 [INFO][4793] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4fcfe846c0a21f78a890100e943d899dd74c3ab1876ad66f51643d8a65b169f3" HandleID="k8s-pod-network.4fcfe846c0a21f78a890100e943d899dd74c3ab1876ad66f51643d8a65b169f3" 
Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-eth0" Nov 8 00:09:17.418251 containerd[1590]: 2025-11-08 00:09:17.108 [INFO][4793] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4fcfe846c0a21f78a890100e943d899dd74c3ab1876ad66f51643d8a65b169f3" HandleID="k8s-pod-network.4fcfe846c0a21f78a890100e943d899dd74c3ab1876ad66f51643d8a65b169f3" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024bc00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-3f5a11d2fe", "pod":"calico-apiserver-78c5874598-gtq72", "timestamp":"2025-11-08 00:09:17.10798605 +0000 UTC"}, Hostname:"ci-4081-3-6-n-3f5a11d2fe", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:09:17.418251 containerd[1590]: 2025-11-08 00:09:17.109 [INFO][4793] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:17.418251 containerd[1590]: 2025-11-08 00:09:17.264 [INFO][4793] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:09:17.418251 containerd[1590]: 2025-11-08 00:09:17.264 [INFO][4793] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-3f5a11d2fe' Nov 8 00:09:17.418251 containerd[1590]: 2025-11-08 00:09:17.286 [INFO][4793] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4fcfe846c0a21f78a890100e943d899dd74c3ab1876ad66f51643d8a65b169f3" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.418251 containerd[1590]: 2025-11-08 00:09:17.309 [INFO][4793] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.418251 containerd[1590]: 2025-11-08 00:09:17.319 [INFO][4793] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.418251 containerd[1590]: 2025-11-08 00:09:17.323 [INFO][4793] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.418251 containerd[1590]: 2025-11-08 00:09:17.329 [INFO][4793] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.418251 containerd[1590]: 2025-11-08 00:09:17.329 [INFO][4793] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.4fcfe846c0a21f78a890100e943d899dd74c3ab1876ad66f51643d8a65b169f3" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.418251 containerd[1590]: 2025-11-08 00:09:17.338 [INFO][4793] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4fcfe846c0a21f78a890100e943d899dd74c3ab1876ad66f51643d8a65b169f3 Nov 8 00:09:17.418251 containerd[1590]: 2025-11-08 00:09:17.346 [INFO][4793] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.4fcfe846c0a21f78a890100e943d899dd74c3ab1876ad66f51643d8a65b169f3" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.418251 containerd[1590]: 2025-11-08 00:09:17.357 [INFO][4793] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.105.134/26] block=192.168.105.128/26 handle="k8s-pod-network.4fcfe846c0a21f78a890100e943d899dd74c3ab1876ad66f51643d8a65b169f3" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.418251 containerd[1590]: 2025-11-08 00:09:17.357 [INFO][4793] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.134/26] handle="k8s-pod-network.4fcfe846c0a21f78a890100e943d899dd74c3ab1876ad66f51643d8a65b169f3" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:17.418251 containerd[1590]: 2025-11-08 00:09:17.357 [INFO][4793] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:17.418251 containerd[1590]: 2025-11-08 00:09:17.357 [INFO][4793] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.134/26] IPv6=[] ContainerID="4fcfe846c0a21f78a890100e943d899dd74c3ab1876ad66f51643d8a65b169f3" HandleID="k8s-pod-network.4fcfe846c0a21f78a890100e943d899dd74c3ab1876ad66f51643d8a65b169f3" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-eth0" Nov 8 00:09:17.421825 containerd[1590]: 2025-11-08 00:09:17.364 [INFO][4772] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4fcfe846c0a21f78a890100e943d899dd74c3ab1876ad66f51643d8a65b169f3" Namespace="calico-apiserver" Pod="calico-apiserver-78c5874598-gtq72" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-eth0", GenerateName:"calico-apiserver-78c5874598-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ec2eb36-2470-4386-96a3-fe6dd8fc602f", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78c5874598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"", Pod:"calico-apiserver-78c5874598-gtq72", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe0a6c27343", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:17.421825 containerd[1590]: 2025-11-08 00:09:17.364 [INFO][4772] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.134/32] ContainerID="4fcfe846c0a21f78a890100e943d899dd74c3ab1876ad66f51643d8a65b169f3" Namespace="calico-apiserver" Pod="calico-apiserver-78c5874598-gtq72" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-eth0" Nov 8 00:09:17.421825 containerd[1590]: 2025-11-08 00:09:17.364 [INFO][4772] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe0a6c27343 ContainerID="4fcfe846c0a21f78a890100e943d899dd74c3ab1876ad66f51643d8a65b169f3" Namespace="calico-apiserver" Pod="calico-apiserver-78c5874598-gtq72" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-eth0" Nov 8 00:09:17.421825 containerd[1590]: 2025-11-08 00:09:17.391 [INFO][4772] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4fcfe846c0a21f78a890100e943d899dd74c3ab1876ad66f51643d8a65b169f3" Namespace="calico-apiserver" 
Pod="calico-apiserver-78c5874598-gtq72" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-eth0" Nov 8 00:09:17.421825 containerd[1590]: 2025-11-08 00:09:17.393 [INFO][4772] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4fcfe846c0a21f78a890100e943d899dd74c3ab1876ad66f51643d8a65b169f3" Namespace="calico-apiserver" Pod="calico-apiserver-78c5874598-gtq72" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-eth0", GenerateName:"calico-apiserver-78c5874598-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ec2eb36-2470-4386-96a3-fe6dd8fc602f", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78c5874598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"4fcfe846c0a21f78a890100e943d899dd74c3ab1876ad66f51643d8a65b169f3", Pod:"calico-apiserver-78c5874598-gtq72", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"calibe0a6c27343", MAC:"be:a8:90:c2:46:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:17.421825 containerd[1590]: 2025-11-08 00:09:17.411 [INFO][4772] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4fcfe846c0a21f78a890100e943d899dd74c3ab1876ad66f51643d8a65b169f3" Namespace="calico-apiserver" Pod="calico-apiserver-78c5874598-gtq72" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-eth0" Nov 8 00:09:17.449707 containerd[1590]: time="2025-11-08T00:09:17.448145417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:09:17.449707 containerd[1590]: time="2025-11-08T00:09:17.448217818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:09:17.449707 containerd[1590]: time="2025-11-08T00:09:17.448229698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:17.449707 containerd[1590]: time="2025-11-08T00:09:17.448624424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:17.499859 containerd[1590]: time="2025-11-08T00:09:17.499808054Z" level=info msg="StartContainer for \"ead45b1787a965c7b2e832169fb43629d0fe262a400c77f042b5f4f2f3615e7b\" returns successfully" Nov 8 00:09:17.525223 containerd[1590]: time="2025-11-08T00:09:17.525115364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54b5cccc46-bcg68,Uid:5c488e78-d3ba-4197-ab37-75734ccb9129,Namespace:calico-system,Attempt:1,} returns sandbox id \"a4a953cca3be936efbe327756bbc753349496b123b27490eaf8abd7637738d40\"" Nov 8 00:09:17.533054 containerd[1590]: time="2025-11-08T00:09:17.530069721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:09:17.575989 containerd[1590]: time="2025-11-08T00:09:17.575054015Z" level=info msg="StopPodSandbox for \"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035\"" Nov 8 00:09:17.577979 containerd[1590]: time="2025-11-08T00:09:17.576234913Z" level=info msg="StopPodSandbox for \"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e\"" Nov 8 00:09:17.612530 containerd[1590]: time="2025-11-08T00:09:17.612042945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c5874598-gtq72,Uid:7ec2eb36-2470-4386-96a3-fe6dd8fc602f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4fcfe846c0a21f78a890100e943d899dd74c3ab1876ad66f51643d8a65b169f3\"" Nov 8 00:09:17.755135 containerd[1590]: 2025-11-08 00:09:17.694 [INFO][5020] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" Nov 8 00:09:17.755135 containerd[1590]: 2025-11-08 00:09:17.696 [INFO][5020] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" iface="eth0" netns="/var/run/netns/cni-360c7163-74e9-1c14-ffd8-bdd8277f540c" Nov 8 00:09:17.755135 containerd[1590]: 2025-11-08 00:09:17.698 [INFO][5020] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" iface="eth0" netns="/var/run/netns/cni-360c7163-74e9-1c14-ffd8-bdd8277f540c" Nov 8 00:09:17.755135 containerd[1590]: 2025-11-08 00:09:17.698 [INFO][5020] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" iface="eth0" netns="/var/run/netns/cni-360c7163-74e9-1c14-ffd8-bdd8277f540c" Nov 8 00:09:17.755135 containerd[1590]: 2025-11-08 00:09:17.698 [INFO][5020] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" Nov 8 00:09:17.755135 containerd[1590]: 2025-11-08 00:09:17.698 [INFO][5020] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" Nov 8 00:09:17.755135 containerd[1590]: 2025-11-08 00:09:17.732 [INFO][5033] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" HandleID="k8s-pod-network.68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-eth0" Nov 8 00:09:17.755135 containerd[1590]: 2025-11-08 00:09:17.732 [INFO][5033] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:17.755135 containerd[1590]: 2025-11-08 00:09:17.733 [INFO][5033] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:17.755135 containerd[1590]: 2025-11-08 00:09:17.743 [WARNING][5033] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" HandleID="k8s-pod-network.68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-eth0" Nov 8 00:09:17.755135 containerd[1590]: 2025-11-08 00:09:17.745 [INFO][5033] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" HandleID="k8s-pod-network.68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-eth0" Nov 8 00:09:17.755135 containerd[1590]: 2025-11-08 00:09:17.748 [INFO][5033] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:17.755135 containerd[1590]: 2025-11-08 00:09:17.751 [INFO][5020] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" Nov 8 00:09:17.756518 containerd[1590]: time="2025-11-08T00:09:17.756349331Z" level=info msg="TearDown network for sandbox \"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035\" successfully" Nov 8 00:09:17.756518 containerd[1590]: time="2025-11-08T00:09:17.756385452Z" level=info msg="StopPodSandbox for \"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035\" returns successfully" Nov 8 00:09:17.757298 containerd[1590]: time="2025-11-08T00:09:17.757271385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-l7kch,Uid:72f7d776-6bd7-4d33-8b73-a5febd833bf0,Namespace:calico-system,Attempt:1,}" Nov 8 00:09:17.817367 containerd[1590]: 2025-11-08 00:09:17.722 [INFO][5019] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" Nov 8 00:09:17.817367 containerd[1590]: 2025-11-08 00:09:17.722 [INFO][5019] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" iface="eth0" netns="/var/run/netns/cni-a3e035b1-7785-db91-2f79-5ddb832b1669" Nov 8 00:09:17.817367 containerd[1590]: 2025-11-08 00:09:17.722 [INFO][5019] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" iface="eth0" netns="/var/run/netns/cni-a3e035b1-7785-db91-2f79-5ddb832b1669" Nov 8 00:09:17.817367 containerd[1590]: 2025-11-08 00:09:17.723 [INFO][5019] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" iface="eth0" netns="/var/run/netns/cni-a3e035b1-7785-db91-2f79-5ddb832b1669" Nov 8 00:09:17.817367 containerd[1590]: 2025-11-08 00:09:17.723 [INFO][5019] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" Nov 8 00:09:17.817367 containerd[1590]: 2025-11-08 00:09:17.723 [INFO][5019] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" Nov 8 00:09:17.817367 containerd[1590]: 2025-11-08 00:09:17.779 [INFO][5039] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" HandleID="k8s-pod-network.9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-eth0" Nov 8 00:09:17.817367 containerd[1590]: 2025-11-08 00:09:17.780 [INFO][5039] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:17.817367 containerd[1590]: 2025-11-08 00:09:17.780 [INFO][5039] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:17.817367 containerd[1590]: 2025-11-08 00:09:17.802 [WARNING][5039] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" HandleID="k8s-pod-network.9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-eth0" Nov 8 00:09:17.817367 containerd[1590]: 2025-11-08 00:09:17.803 [INFO][5039] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" HandleID="k8s-pod-network.9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-eth0" Nov 8 00:09:17.817367 containerd[1590]: 2025-11-08 00:09:17.807 [INFO][5039] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:17.817367 containerd[1590]: 2025-11-08 00:09:17.812 [INFO][5019] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" Nov 8 00:09:17.818265 containerd[1590]: time="2025-11-08T00:09:17.818065443Z" level=info msg="TearDown network for sandbox \"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e\" successfully" Nov 8 00:09:17.818265 containerd[1590]: time="2025-11-08T00:09:17.818104643Z" level=info msg="StopPodSandbox for \"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e\" returns successfully" Nov 8 00:09:17.820069 containerd[1590]: time="2025-11-08T00:09:17.820034713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6n7x9,Uid:57f11a43-3690-45d9-8837-b8df56bb1a07,Namespace:calico-system,Attempt:1,}" Nov 8 00:09:17.862699 systemd[1]: run-netns-cni\x2da3e035b1\x2d7785\x2ddb91\x2d2f79\x2d5ddb832b1669.mount: Deactivated successfully. Nov 8 00:09:17.862842 systemd[1]: run-netns-cni\x2d360c7163\x2d74e9\x2d1c14\x2dffd8\x2dbdd8277f540c.mount: Deactivated successfully. 
Nov 8 00:09:17.875052 containerd[1590]: time="2025-11-08T00:09:17.874841519Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:17.878163 containerd[1590]: time="2025-11-08T00:09:17.876422863Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:09:17.878163 containerd[1590]: time="2025-11-08T00:09:17.876546305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:09:17.878163 containerd[1590]: time="2025-11-08T00:09:17.877050273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:09:17.878289 kubelet[2774]: E1108 00:09:17.876726 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:09:17.878289 kubelet[2774]: E1108 00:09:17.876775 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:09:17.883357 kubelet[2774]: E1108 00:09:17.882897 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pghwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-54b5cccc46-bcg68_calico-system(5c488e78-d3ba-4197-ab37-75734ccb9129): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:17.885254 kubelet[2774]: E1108 00:09:17.884947 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54b5cccc46-bcg68" podUID="5c488e78-d3ba-4197-ab37-75734ccb9129" Nov 8 00:09:17.897923 kubelet[2774]: E1108 00:09:17.897872 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54b5cccc46-bcg68" podUID="5c488e78-d3ba-4197-ab37-75734ccb9129" Nov 8 00:09:17.938686 kubelet[2774]: I1108 00:09:17.938211 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tcqxl" podStartSLOduration=47.938187416 podStartE2EDuration="47.938187416s" podCreationTimestamp="2025-11-08 00:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:09:17.938099054 +0000 UTC m=+54.499041238" watchObservedRunningTime="2025-11-08 00:09:17.938187416 +0000 UTC m=+54.499129600" Nov 8 00:09:18.058549 systemd-networkd[1240]: califfeabce95f6: Link UP Nov 8 00:09:18.060113 systemd-networkd[1240]: califfeabce95f6: Gained carrier Nov 8 00:09:18.087710 containerd[1590]: 2025-11-08 00:09:17.897 [INFO][5051] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-eth0 goldmane-666569f655- calico-system 72f7d776-6bd7-4d33-8b73-a5febd833bf0 998 0 2025-11-08 00:08:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-3f5a11d2fe goldmane-666569f655-l7kch eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califfeabce95f6 [] [] }} 
ContainerID="00d7b74413141725e4feb98fde663a9673437854dd1aef95e2b9059fa3ac318a" Namespace="calico-system" Pod="goldmane-666569f655-l7kch" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-" Nov 8 00:09:18.087710 containerd[1590]: 2025-11-08 00:09:17.897 [INFO][5051] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="00d7b74413141725e4feb98fde663a9673437854dd1aef95e2b9059fa3ac318a" Namespace="calico-system" Pod="goldmane-666569f655-l7kch" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-eth0" Nov 8 00:09:18.087710 containerd[1590]: 2025-11-08 00:09:17.973 [INFO][5071] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00d7b74413141725e4feb98fde663a9673437854dd1aef95e2b9059fa3ac318a" HandleID="k8s-pod-network.00d7b74413141725e4feb98fde663a9673437854dd1aef95e2b9059fa3ac318a" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-eth0" Nov 8 00:09:18.087710 containerd[1590]: 2025-11-08 00:09:17.974 [INFO][5071] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="00d7b74413141725e4feb98fde663a9673437854dd1aef95e2b9059fa3ac318a" HandleID="k8s-pod-network.00d7b74413141725e4feb98fde663a9673437854dd1aef95e2b9059fa3ac318a" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000331290), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-3f5a11d2fe", "pod":"goldmane-666569f655-l7kch", "timestamp":"2025-11-08 00:09:17.973909127 +0000 UTC"}, Hostname:"ci-4081-3-6-n-3f5a11d2fe", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:09:18.087710 containerd[1590]: 2025-11-08 00:09:17.974 [INFO][5071] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:09:18.087710 containerd[1590]: 2025-11-08 00:09:17.974 [INFO][5071] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:18.087710 containerd[1590]: 2025-11-08 00:09:17.974 [INFO][5071] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-3f5a11d2fe' Nov 8 00:09:18.087710 containerd[1590]: 2025-11-08 00:09:17.997 [INFO][5071] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.00d7b74413141725e4feb98fde663a9673437854dd1aef95e2b9059fa3ac318a" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.087710 containerd[1590]: 2025-11-08 00:09:18.005 [INFO][5071] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.087710 containerd[1590]: 2025-11-08 00:09:18.014 [INFO][5071] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.087710 containerd[1590]: 2025-11-08 00:09:18.017 [INFO][5071] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.087710 containerd[1590]: 2025-11-08 00:09:18.024 [INFO][5071] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.087710 containerd[1590]: 2025-11-08 00:09:18.024 [INFO][5071] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.00d7b74413141725e4feb98fde663a9673437854dd1aef95e2b9059fa3ac318a" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.087710 containerd[1590]: 2025-11-08 00:09:18.027 [INFO][5071] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.00d7b74413141725e4feb98fde663a9673437854dd1aef95e2b9059fa3ac318a Nov 8 00:09:18.087710 containerd[1590]: 2025-11-08 00:09:18.035 [INFO][5071] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.00d7b74413141725e4feb98fde663a9673437854dd1aef95e2b9059fa3ac318a" 
host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.087710 containerd[1590]: 2025-11-08 00:09:18.046 [INFO][5071] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.105.135/26] block=192.168.105.128/26 handle="k8s-pod-network.00d7b74413141725e4feb98fde663a9673437854dd1aef95e2b9059fa3ac318a" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.087710 containerd[1590]: 2025-11-08 00:09:18.047 [INFO][5071] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.135/26] handle="k8s-pod-network.00d7b74413141725e4feb98fde663a9673437854dd1aef95e2b9059fa3ac318a" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.087710 containerd[1590]: 2025-11-08 00:09:18.047 [INFO][5071] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:18.087710 containerd[1590]: 2025-11-08 00:09:18.047 [INFO][5071] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.135/26] IPv6=[] ContainerID="00d7b74413141725e4feb98fde663a9673437854dd1aef95e2b9059fa3ac318a" HandleID="k8s-pod-network.00d7b74413141725e4feb98fde663a9673437854dd1aef95e2b9059fa3ac318a" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-eth0" Nov 8 00:09:18.089541 containerd[1590]: 2025-11-08 00:09:18.050 [INFO][5051] cni-plugin/k8s.go 418: Populated endpoint ContainerID="00d7b74413141725e4feb98fde663a9673437854dd1aef95e2b9059fa3ac318a" Namespace="calico-system" Pod="goldmane-666569f655-l7kch" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"72f7d776-6bd7-4d33-8b73-a5febd833bf0", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"", Pod:"goldmane-666569f655-l7kch", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.105.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califfeabce95f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:18.089541 containerd[1590]: 2025-11-08 00:09:18.050 [INFO][5051] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.135/32] ContainerID="00d7b74413141725e4feb98fde663a9673437854dd1aef95e2b9059fa3ac318a" Namespace="calico-system" Pod="goldmane-666569f655-l7kch" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-eth0" Nov 8 00:09:18.089541 containerd[1590]: 2025-11-08 00:09:18.050 [INFO][5051] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califfeabce95f6 ContainerID="00d7b74413141725e4feb98fde663a9673437854dd1aef95e2b9059fa3ac318a" Namespace="calico-system" Pod="goldmane-666569f655-l7kch" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-eth0" Nov 8 00:09:18.089541 containerd[1590]: 2025-11-08 00:09:18.061 [INFO][5051] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00d7b74413141725e4feb98fde663a9673437854dd1aef95e2b9059fa3ac318a" Namespace="calico-system" Pod="goldmane-666569f655-l7kch" 
WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-eth0" Nov 8 00:09:18.089541 containerd[1590]: 2025-11-08 00:09:18.062 [INFO][5051] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="00d7b74413141725e4feb98fde663a9673437854dd1aef95e2b9059fa3ac318a" Namespace="calico-system" Pod="goldmane-666569f655-l7kch" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"72f7d776-6bd7-4d33-8b73-a5febd833bf0", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"00d7b74413141725e4feb98fde663a9673437854dd1aef95e2b9059fa3ac318a", Pod:"goldmane-666569f655-l7kch", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.105.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califfeabce95f6", MAC:"fa:a5:3f:12:ab:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 
00:09:18.089541 containerd[1590]: 2025-11-08 00:09:18.084 [INFO][5051] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="00d7b74413141725e4feb98fde663a9673437854dd1aef95e2b9059fa3ac318a" Namespace="calico-system" Pod="goldmane-666569f655-l7kch" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-eth0" Nov 8 00:09:18.121182 containerd[1590]: time="2025-11-08T00:09:18.119586888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:09:18.121552 containerd[1590]: time="2025-11-08T00:09:18.119768011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:09:18.121552 containerd[1590]: time="2025-11-08T00:09:18.119804211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:18.121552 containerd[1590]: time="2025-11-08T00:09:18.119906333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:18.180997 systemd-networkd[1240]: cali41265c6d471: Link UP Nov 8 00:09:18.181425 systemd-networkd[1240]: cali41265c6d471: Gained carrier Nov 8 00:09:18.219944 containerd[1590]: time="2025-11-08T00:09:18.219908098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-l7kch,Uid:72f7d776-6bd7-4d33-8b73-a5febd833bf0,Namespace:calico-system,Attempt:1,} returns sandbox id \"00d7b74413141725e4feb98fde663a9673437854dd1aef95e2b9059fa3ac318a\"" Nov 8 00:09:18.222338 containerd[1590]: 2025-11-08 00:09:17.899 [INFO][5057] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-eth0 csi-node-driver- calico-system 57f11a43-3690-45d9-8837-b8df56bb1a07 999 0 2025-11-08 00:08:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-3f5a11d2fe csi-node-driver-6n7x9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali41265c6d471 [] [] }} ContainerID="69bcc48cf697b7077894e44557d361baf402dbb1d8099e85ec9a04929a99a95e" Namespace="calico-system" Pod="csi-node-driver-6n7x9" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-" Nov 8 00:09:18.222338 containerd[1590]: 2025-11-08 00:09:17.899 [INFO][5057] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="69bcc48cf697b7077894e44557d361baf402dbb1d8099e85ec9a04929a99a95e" Namespace="calico-system" Pod="csi-node-driver-6n7x9" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-eth0" Nov 8 00:09:18.222338 containerd[1590]: 2025-11-08 00:09:17.988 [INFO][5076] ipam/ipam_plugin.go 
227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69bcc48cf697b7077894e44557d361baf402dbb1d8099e85ec9a04929a99a95e" HandleID="k8s-pod-network.69bcc48cf697b7077894e44557d361baf402dbb1d8099e85ec9a04929a99a95e" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-eth0" Nov 8 00:09:18.222338 containerd[1590]: 2025-11-08 00:09:17.989 [INFO][5076] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="69bcc48cf697b7077894e44557d361baf402dbb1d8099e85ec9a04929a99a95e" HandleID="k8s-pod-network.69bcc48cf697b7077894e44557d361baf402dbb1d8099e85ec9a04929a99a95e" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d6e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-3f5a11d2fe", "pod":"csi-node-driver-6n7x9", "timestamp":"2025-11-08 00:09:17.988648794 +0000 UTC"}, Hostname:"ci-4081-3-6-n-3f5a11d2fe", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:09:18.222338 containerd[1590]: 2025-11-08 00:09:17.989 [INFO][5076] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:18.222338 containerd[1590]: 2025-11-08 00:09:18.047 [INFO][5076] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:09:18.222338 containerd[1590]: 2025-11-08 00:09:18.047 [INFO][5076] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-3f5a11d2fe' Nov 8 00:09:18.222338 containerd[1590]: 2025-11-08 00:09:18.100 [INFO][5076] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.69bcc48cf697b7077894e44557d361baf402dbb1d8099e85ec9a04929a99a95e" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.222338 containerd[1590]: 2025-11-08 00:09:18.108 [INFO][5076] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.222338 containerd[1590]: 2025-11-08 00:09:18.118 [INFO][5076] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.222338 containerd[1590]: 2025-11-08 00:09:18.128 [INFO][5076] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.222338 containerd[1590]: 2025-11-08 00:09:18.136 [INFO][5076] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.222338 containerd[1590]: 2025-11-08 00:09:18.136 [INFO][5076] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.69bcc48cf697b7077894e44557d361baf402dbb1d8099e85ec9a04929a99a95e" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.222338 containerd[1590]: 2025-11-08 00:09:18.148 [INFO][5076] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.69bcc48cf697b7077894e44557d361baf402dbb1d8099e85ec9a04929a99a95e Nov 8 00:09:18.222338 containerd[1590]: 2025-11-08 00:09:18.155 [INFO][5076] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.69bcc48cf697b7077894e44557d361baf402dbb1d8099e85ec9a04929a99a95e" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.222338 containerd[1590]: 2025-11-08 00:09:18.166 [INFO][5076] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.105.136/26] block=192.168.105.128/26 handle="k8s-pod-network.69bcc48cf697b7077894e44557d361baf402dbb1d8099e85ec9a04929a99a95e" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.222338 containerd[1590]: 2025-11-08 00:09:18.166 [INFO][5076] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.136/26] handle="k8s-pod-network.69bcc48cf697b7077894e44557d361baf402dbb1d8099e85ec9a04929a99a95e" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.222338 containerd[1590]: 2025-11-08 00:09:18.166 [INFO][5076] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:18.222338 containerd[1590]: 2025-11-08 00:09:18.168 [INFO][5076] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.136/26] IPv6=[] ContainerID="69bcc48cf697b7077894e44557d361baf402dbb1d8099e85ec9a04929a99a95e" HandleID="k8s-pod-network.69bcc48cf697b7077894e44557d361baf402dbb1d8099e85ec9a04929a99a95e" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-eth0" Nov 8 00:09:18.222922 containerd[1590]: 2025-11-08 00:09:18.172 [INFO][5057] cni-plugin/k8s.go 418: Populated endpoint ContainerID="69bcc48cf697b7077894e44557d361baf402dbb1d8099e85ec9a04929a99a95e" Namespace="calico-system" Pod="csi-node-driver-6n7x9" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"57f11a43-3690-45d9-8837-b8df56bb1a07", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"", Pod:"csi-node-driver-6n7x9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali41265c6d471", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:18.222922 containerd[1590]: 2025-11-08 00:09:18.173 [INFO][5057] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.136/32] ContainerID="69bcc48cf697b7077894e44557d361baf402dbb1d8099e85ec9a04929a99a95e" Namespace="calico-system" Pod="csi-node-driver-6n7x9" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-eth0" Nov 8 00:09:18.222922 containerd[1590]: 2025-11-08 00:09:18.173 [INFO][5057] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41265c6d471 ContainerID="69bcc48cf697b7077894e44557d361baf402dbb1d8099e85ec9a04929a99a95e" Namespace="calico-system" Pod="csi-node-driver-6n7x9" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-eth0" Nov 8 00:09:18.222922 containerd[1590]: 2025-11-08 00:09:18.176 [INFO][5057] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69bcc48cf697b7077894e44557d361baf402dbb1d8099e85ec9a04929a99a95e" Namespace="calico-system" Pod="csi-node-driver-6n7x9" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-eth0" Nov 8 00:09:18.222922 
containerd[1590]: 2025-11-08 00:09:18.190 [INFO][5057] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="69bcc48cf697b7077894e44557d361baf402dbb1d8099e85ec9a04929a99a95e" Namespace="calico-system" Pod="csi-node-driver-6n7x9" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"57f11a43-3690-45d9-8837-b8df56bb1a07", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"69bcc48cf697b7077894e44557d361baf402dbb1d8099e85ec9a04929a99a95e", Pod:"csi-node-driver-6n7x9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali41265c6d471", MAC:"46:36:1e:4e:86:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:18.222922 containerd[1590]: 
2025-11-08 00:09:18.211 [INFO][5057] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="69bcc48cf697b7077894e44557d361baf402dbb1d8099e85ec9a04929a99a95e" Namespace="calico-system" Pod="csi-node-driver-6n7x9" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-eth0" Nov 8 00:09:18.244923 containerd[1590]: time="2025-11-08T00:09:18.244812298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:09:18.245652 containerd[1590]: time="2025-11-08T00:09:18.244946620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:09:18.245652 containerd[1590]: time="2025-11-08T00:09:18.245023421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:18.245652 containerd[1590]: time="2025-11-08T00:09:18.245148383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:18.252208 containerd[1590]: time="2025-11-08T00:09:18.252023333Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:18.253612 containerd[1590]: time="2025-11-08T00:09:18.253454516Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:09:18.253612 containerd[1590]: time="2025-11-08T00:09:18.253586399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:09:18.254172 kubelet[2774]: E1108 00:09:18.254127 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:18.254299 kubelet[2774]: E1108 00:09:18.254182 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:18.254475 kubelet[2774]: E1108 00:09:18.254397 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29lz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78c5874598-gtq72_calico-apiserver(7ec2eb36-2470-4386-96a3-fe6dd8fc602f): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:18.257465 containerd[1590]: time="2025-11-08T00:09:18.255816994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:09:18.257574 kubelet[2774]: E1108 00:09:18.255831 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-gtq72" podUID="7ec2eb36-2470-4386-96a3-fe6dd8fc602f" Nov 8 00:09:18.300734 containerd[1590]: time="2025-11-08T00:09:18.300634834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6n7x9,Uid:57f11a43-3690-45d9-8837-b8df56bb1a07,Namespace:calico-system,Attempt:1,} returns sandbox id \"69bcc48cf697b7077894e44557d361baf402dbb1d8099e85ec9a04929a99a95e\"" Nov 8 00:09:18.571273 containerd[1590]: time="2025-11-08T00:09:18.571156575Z" level=info msg="StopPodSandbox for \"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b\"" Nov 8 00:09:18.605018 containerd[1590]: time="2025-11-08T00:09:18.604164585Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:18.607527 containerd[1590]: time="2025-11-08T00:09:18.607309516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:09:18.607527 containerd[1590]: time="2025-11-08T00:09:18.607316996Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:09:18.607725 kubelet[2774]: E1108 00:09:18.607668 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:09:18.607826 kubelet[2774]: E1108 00:09:18.607727 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:09:18.609678 containerd[1590]: time="2025-11-08T00:09:18.609459710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:09:18.612323 kubelet[2774]: E1108 00:09:18.612220 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fwxxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-l7kch_calico-system(72f7d776-6bd7-4d33-8b73-a5febd833bf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:18.613588 kubelet[2774]: E1108 00:09:18.613532 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l7kch" podUID="72f7d776-6bd7-4d33-8b73-a5febd833bf0" Nov 8 00:09:18.693081 containerd[1590]: 2025-11-08 00:09:18.645 [INFO][5195] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" Nov 8 00:09:18.693081 containerd[1590]: 2025-11-08 00:09:18.646 [INFO][5195] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" iface="eth0" netns="/var/run/netns/cni-a6e47c2c-e409-ba1e-64df-5859c916bae0" Nov 8 00:09:18.693081 containerd[1590]: 2025-11-08 00:09:18.648 [INFO][5195] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" iface="eth0" netns="/var/run/netns/cni-a6e47c2c-e409-ba1e-64df-5859c916bae0" Nov 8 00:09:18.693081 containerd[1590]: 2025-11-08 00:09:18.648 [INFO][5195] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" iface="eth0" netns="/var/run/netns/cni-a6e47c2c-e409-ba1e-64df-5859c916bae0" Nov 8 00:09:18.693081 containerd[1590]: 2025-11-08 00:09:18.648 [INFO][5195] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" Nov 8 00:09:18.693081 containerd[1590]: 2025-11-08 00:09:18.648 [INFO][5195] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" Nov 8 00:09:18.693081 containerd[1590]: 2025-11-08 00:09:18.673 [INFO][5202] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" HandleID="k8s-pod-network.3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-eth0" Nov 8 00:09:18.693081 containerd[1590]: 2025-11-08 00:09:18.673 [INFO][5202] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:18.693081 containerd[1590]: 2025-11-08 00:09:18.674 [INFO][5202] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:09:18.693081 containerd[1590]: 2025-11-08 00:09:18.686 [WARNING][5202] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" HandleID="k8s-pod-network.3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-eth0" Nov 8 00:09:18.693081 containerd[1590]: 2025-11-08 00:09:18.686 [INFO][5202] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" HandleID="k8s-pod-network.3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-eth0" Nov 8 00:09:18.693081 containerd[1590]: 2025-11-08 00:09:18.689 [INFO][5202] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:18.693081 containerd[1590]: 2025-11-08 00:09:18.691 [INFO][5195] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" Nov 8 00:09:18.694116 containerd[1590]: time="2025-11-08T00:09:18.693723023Z" level=info msg="TearDown network for sandbox \"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b\" successfully" Nov 8 00:09:18.694116 containerd[1590]: time="2025-11-08T00:09:18.693772263Z" level=info msg="StopPodSandbox for \"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b\" returns successfully" Nov 8 00:09:18.694987 containerd[1590]: time="2025-11-08T00:09:18.694572196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fsp2l,Uid:e1e70c0c-dd7e-4d32-9c6b-cbad678c0ea8,Namespace:kube-system,Attempt:1,}" Nov 8 00:09:18.841312 systemd[1]: run-netns-cni\x2da6e47c2c\x2de409\x2dba1e\x2d64df\x2d5859c916bae0.mount: Deactivated successfully. 
Nov 8 00:09:18.852684 systemd-networkd[1240]: caliada1cac9cdc: Link UP Nov 8 00:09:18.856750 systemd-networkd[1240]: caliada1cac9cdc: Gained carrier Nov 8 00:09:18.881894 containerd[1590]: 2025-11-08 00:09:18.755 [INFO][5209] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-eth0 coredns-668d6bf9bc- kube-system e1e70c0c-dd7e-4d32-9c6b-cbad678c0ea8 1023 0 2025-11-08 00:08:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-3f5a11d2fe coredns-668d6bf9bc-fsp2l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliada1cac9cdc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-fsp2l" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-" Nov 8 00:09:18.881894 containerd[1590]: 2025-11-08 00:09:18.755 [INFO][5209] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-fsp2l" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-eth0" Nov 8 00:09:18.881894 containerd[1590]: 2025-11-08 00:09:18.787 [INFO][5221] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0" HandleID="k8s-pod-network.67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-eth0" Nov 8 00:09:18.881894 containerd[1590]: 2025-11-08 00:09:18.787 [INFO][5221] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0" HandleID="k8s-pod-network.67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3630), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-3f5a11d2fe", "pod":"coredns-668d6bf9bc-fsp2l", "timestamp":"2025-11-08 00:09:18.787457927 +0000 UTC"}, Hostname:"ci-4081-3-6-n-3f5a11d2fe", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:09:18.881894 containerd[1590]: 2025-11-08 00:09:18.787 [INFO][5221] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:18.881894 containerd[1590]: 2025-11-08 00:09:18.787 [INFO][5221] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:09:18.881894 containerd[1590]: 2025-11-08 00:09:18.787 [INFO][5221] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-3f5a11d2fe' Nov 8 00:09:18.881894 containerd[1590]: 2025-11-08 00:09:18.800 [INFO][5221] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.881894 containerd[1590]: 2025-11-08 00:09:18.808 [INFO][5221] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.881894 containerd[1590]: 2025-11-08 00:09:18.814 [INFO][5221] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.881894 containerd[1590]: 2025-11-08 00:09:18.816 [INFO][5221] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.881894 containerd[1590]: 2025-11-08 00:09:18.820 [INFO][5221] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.881894 containerd[1590]: 2025-11-08 00:09:18.820 [INFO][5221] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.881894 containerd[1590]: 2025-11-08 00:09:18.824 [INFO][5221] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0 Nov 8 00:09:18.881894 containerd[1590]: 2025-11-08 00:09:18.835 [INFO][5221] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.881894 containerd[1590]: 2025-11-08 00:09:18.847 [INFO][5221] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.105.137/26] block=192.168.105.128/26 handle="k8s-pod-network.67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.881894 containerd[1590]: 2025-11-08 00:09:18.847 [INFO][5221] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.137/26] handle="k8s-pod-network.67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0" host="ci-4081-3-6-n-3f5a11d2fe" Nov 8 00:09:18.881894 containerd[1590]: 2025-11-08 00:09:18.847 [INFO][5221] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:18.881894 containerd[1590]: 2025-11-08 00:09:18.847 [INFO][5221] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.137/26] IPv6=[] ContainerID="67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0" HandleID="k8s-pod-network.67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-eth0" Nov 8 00:09:18.882753 containerd[1590]: 2025-11-08 00:09:18.849 [INFO][5209] cni-plugin/k8s.go 418: Populated endpoint ContainerID="67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-fsp2l" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e1e70c0c-dd7e-4d32-9c6b-cbad678c0ea8", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"", Pod:"coredns-668d6bf9bc-fsp2l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliada1cac9cdc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:18.882753 containerd[1590]: 2025-11-08 00:09:18.849 [INFO][5209] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.137/32] ContainerID="67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-fsp2l" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-eth0" Nov 8 00:09:18.882753 containerd[1590]: 2025-11-08 00:09:18.849 [INFO][5209] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliada1cac9cdc ContainerID="67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-fsp2l" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-eth0" Nov 8 00:09:18.882753 containerd[1590]: 2025-11-08 00:09:18.859 [INFO][5209] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-fsp2l" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-eth0" Nov 8 00:09:18.882753 containerd[1590]: 2025-11-08 00:09:18.861 [INFO][5209] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-fsp2l" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e1e70c0c-dd7e-4d32-9c6b-cbad678c0ea8", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0", Pod:"coredns-668d6bf9bc-fsp2l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliada1cac9cdc", 
MAC:"5a:2f:62:6a:bb:cf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:18.882753 containerd[1590]: 2025-11-08 00:09:18.877 [INFO][5209] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-fsp2l" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-eth0" Nov 8 00:09:18.910319 systemd-networkd[1240]: cali583f5271cb9: Gained IPv6LL Nov 8 00:09:18.928930 containerd[1590]: time="2025-11-08T00:09:18.924000599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:09:18.928930 containerd[1590]: time="2025-11-08T00:09:18.925695266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:09:18.928930 containerd[1590]: time="2025-11-08T00:09:18.925712546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:18.928930 containerd[1590]: time="2025-11-08T00:09:18.927140249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:18.954255 containerd[1590]: time="2025-11-08T00:09:18.954109802Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:18.958347 containerd[1590]: time="2025-11-08T00:09:18.957932983Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:09:18.958347 containerd[1590]: time="2025-11-08T00:09:18.958047625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:09:18.959500 kubelet[2774]: E1108 00:09:18.958897 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:09:18.959500 kubelet[2774]: E1108 00:09:18.958937 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:09:18.963137 kubelet[2774]: E1108 00:09:18.961252 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rttwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6n7x9_calico-system(57f11a43-3690-45d9-8837-b8df56bb1a07): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:18.983191 containerd[1590]: time="2025-11-08T00:09:18.983134108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:09:18.986260 kubelet[2774]: E1108 00:09:18.986197 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l7kch" podUID="72f7d776-6bd7-4d33-8b73-a5febd833bf0" Nov 8 00:09:19.011710 kubelet[2774]: E1108 00:09:19.011490 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-gtq72" podUID="7ec2eb36-2470-4386-96a3-fe6dd8fc602f" Nov 8 00:09:19.011710 kubelet[2774]: E1108 00:09:19.011575 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54b5cccc46-bcg68" podUID="5c488e78-d3ba-4197-ab37-75734ccb9129" Nov 8 00:09:19.094354 containerd[1590]: time="2025-11-08T00:09:19.093661538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fsp2l,Uid:e1e70c0c-dd7e-4d32-9c6b-cbad678c0ea8,Namespace:kube-system,Attempt:1,} returns sandbox id \"67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0\"" Nov 8 00:09:19.103503 systemd-networkd[1240]: calie9d8edad833: Gained IPv6LL Nov 8 00:09:19.114878 containerd[1590]: time="2025-11-08T00:09:19.114325402Z" level=info msg="CreateContainer within sandbox \"67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:09:19.142208 containerd[1590]: time="2025-11-08T00:09:19.142043184Z" level=info msg="CreateContainer within sandbox \"67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7db5aba9da7230c9d549fa000d620fffdec8596e922db19c23dd72269573ed7a\"" Nov 8 00:09:19.143686 containerd[1590]: time="2025-11-08T00:09:19.143547009Z" level=info msg="StartContainer for \"7db5aba9da7230c9d549fa000d620fffdec8596e922db19c23dd72269573ed7a\"" Nov 8 00:09:19.214262 containerd[1590]: time="2025-11-08T00:09:19.214224546Z" level=info msg="StartContainer for \"7db5aba9da7230c9d549fa000d620fffdec8596e922db19c23dd72269573ed7a\" returns successfully" Nov 8 00:09:19.359164 systemd-networkd[1240]: calibe0a6c27343: Gained IPv6LL Nov 8 00:09:19.382686 containerd[1590]: time="2025-11-08T00:09:19.382612950Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:19.384657 containerd[1590]: time="2025-11-08T00:09:19.384597423Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:09:19.385064 containerd[1590]: time="2025-11-08T00:09:19.384867908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:09:19.385342 kubelet[2774]: E1108 00:09:19.385293 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:09:19.385479 kubelet[2774]: E1108 00:09:19.385350 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:09:19.385534 kubelet[2774]: E1108 00:09:19.385475 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rttwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6n7x9_calico-system(57f11a43-3690-45d9-8837-b8df56bb1a07): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:19.386943 kubelet[2774]: E1108 00:09:19.386875 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6n7x9" podUID="57f11a43-3690-45d9-8837-b8df56bb1a07" Nov 8 00:09:19.486967 systemd-networkd[1240]: califfeabce95f6: Gained IPv6LL Nov 8 00:09:19.870314 systemd-networkd[1240]: cali41265c6d471: Gained IPv6LL Nov 8 00:09:20.021330 kubelet[2774]: E1108 00:09:20.021247 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l7kch" podUID="72f7d776-6bd7-4d33-8b73-a5febd833bf0" Nov 8 00:09:20.025460 kubelet[2774]: E1108 00:09:20.024034 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6n7x9" podUID="57f11a43-3690-45d9-8837-b8df56bb1a07" Nov 8 00:09:20.049271 kubelet[2774]: I1108 00:09:20.049036 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fsp2l" podStartSLOduration=50.04740461 podStartE2EDuration="50.04740461s" podCreationTimestamp="2025-11-08 00:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:09:20.046663397 +0000 UTC m=+56.607605621" watchObservedRunningTime="2025-11-08 00:09:20.04740461 +0000 UTC m=+56.608346834" Nov 8 00:09:20.254167 systemd-networkd[1240]: caliada1cac9cdc: Gained IPv6LL Nov 8 00:09:22.535455 systemd[1]: Started sshd@7-138.199.234.199:22-78.128.112.74:40758.service - OpenSSH per-connection server daemon (78.128.112.74:40758). Nov 8 00:09:22.681976 sshd[5330]: Invalid user admin from 78.128.112.74 port 40758 Nov 8 00:09:22.711459 sshd[5330]: Connection closed by invalid user admin 78.128.112.74 port 40758 [preauth] Nov 8 00:09:22.714163 systemd[1]: sshd@7-138.199.234.199:22-78.128.112.74:40758.service: Deactivated successfully. 
Nov 8 00:09:23.581179 containerd[1590]: time="2025-11-08T00:09:23.581136387Z" level=info msg="StopPodSandbox for \"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c\"" Nov 8 00:09:23.662582 containerd[1590]: 2025-11-08 00:09:23.621 [WARNING][5345] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9506755c-a0f4-47f2-b269-7090f44df783", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc", Pod:"coredns-668d6bf9bc-tcqxl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9d8edad833", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:23.662582 containerd[1590]: 2025-11-08 00:09:23.621 [INFO][5345] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" Nov 8 00:09:23.662582 containerd[1590]: 2025-11-08 00:09:23.621 [INFO][5345] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" iface="eth0" netns="" Nov 8 00:09:23.662582 containerd[1590]: 2025-11-08 00:09:23.621 [INFO][5345] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" Nov 8 00:09:23.662582 containerd[1590]: 2025-11-08 00:09:23.621 [INFO][5345] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" Nov 8 00:09:23.662582 containerd[1590]: 2025-11-08 00:09:23.642 [INFO][5353] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" HandleID="k8s-pod-network.fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-eth0" Nov 8 00:09:23.662582 containerd[1590]: 2025-11-08 00:09:23.643 [INFO][5353] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:23.662582 containerd[1590]: 2025-11-08 00:09:23.643 [INFO][5353] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:09:23.662582 containerd[1590]: 2025-11-08 00:09:23.655 [WARNING][5353] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" HandleID="k8s-pod-network.fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-eth0" Nov 8 00:09:23.662582 containerd[1590]: 2025-11-08 00:09:23.655 [INFO][5353] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" HandleID="k8s-pod-network.fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-eth0" Nov 8 00:09:23.662582 containerd[1590]: 2025-11-08 00:09:23.658 [INFO][5353] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:23.662582 containerd[1590]: 2025-11-08 00:09:23.659 [INFO][5345] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" Nov 8 00:09:23.662582 containerd[1590]: time="2025-11-08T00:09:23.661865992Z" level=info msg="TearDown network for sandbox \"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c\" successfully" Nov 8 00:09:23.662582 containerd[1590]: time="2025-11-08T00:09:23.661889273Z" level=info msg="StopPodSandbox for \"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c\" returns successfully" Nov 8 00:09:23.664643 containerd[1590]: time="2025-11-08T00:09:23.663583945Z" level=info msg="RemovePodSandbox for \"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c\"" Nov 8 00:09:23.664643 containerd[1590]: time="2025-11-08T00:09:23.663625465Z" level=info msg="Forcibly stopping sandbox \"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c\"" Nov 8 00:09:23.743095 containerd[1590]: 2025-11-08 00:09:23.704 [WARNING][5367] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9506755c-a0f4-47f2-b269-7090f44df783", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"95eff2e43459b8bc6c0b3c1ac0d9655effb62a36e9c41d901096b1eecb3df8fc", Pod:"coredns-668d6bf9bc-tcqxl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9d8edad833", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:23.743095 containerd[1590]: 
2025-11-08 00:09:23.704 [INFO][5367] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" Nov 8 00:09:23.743095 containerd[1590]: 2025-11-08 00:09:23.704 [INFO][5367] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" iface="eth0" netns="" Nov 8 00:09:23.743095 containerd[1590]: 2025-11-08 00:09:23.704 [INFO][5367] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" Nov 8 00:09:23.743095 containerd[1590]: 2025-11-08 00:09:23.704 [INFO][5367] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" Nov 8 00:09:23.743095 containerd[1590]: 2025-11-08 00:09:23.726 [INFO][5374] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" HandleID="k8s-pod-network.fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-eth0" Nov 8 00:09:23.743095 containerd[1590]: 2025-11-08 00:09:23.727 [INFO][5374] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:23.743095 containerd[1590]: 2025-11-08 00:09:23.727 [INFO][5374] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:23.743095 containerd[1590]: 2025-11-08 00:09:23.736 [WARNING][5374] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" HandleID="k8s-pod-network.fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-eth0" Nov 8 00:09:23.743095 containerd[1590]: 2025-11-08 00:09:23.736 [INFO][5374] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" HandleID="k8s-pod-network.fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--tcqxl-eth0" Nov 8 00:09:23.743095 containerd[1590]: 2025-11-08 00:09:23.739 [INFO][5374] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:23.743095 containerd[1590]: 2025-11-08 00:09:23.741 [INFO][5367] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c" Nov 8 00:09:23.743794 containerd[1590]: time="2025-11-08T00:09:23.743069206Z" level=info msg="TearDown network for sandbox \"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c\" successfully" Nov 8 00:09:23.761049 containerd[1590]: time="2025-11-08T00:09:23.760871463Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:09:23.761049 containerd[1590]: time="2025-11-08T00:09:23.760985425Z" level=info msg="RemovePodSandbox \"fc5a8350beb22d794e377f2ac2f63b977c6c8a425002ee6cb7b0e74775ec984c\" returns successfully" Nov 8 00:09:23.762011 containerd[1590]: time="2025-11-08T00:09:23.761804360Z" level=info msg="StopPodSandbox for \"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6\"" Nov 8 00:09:23.863446 containerd[1590]: 2025-11-08 00:09:23.801 [WARNING][5389] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-eth0", GenerateName:"calico-apiserver-78c5874598-", Namespace:"calico-apiserver", SelfLink:"", UID:"ffefce46-3638-4c95-bed3-200605f5f8d9", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78c5874598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"2e6d9c39ecc242d4b3c094aa51e4a807c3206e83f3feb47ccba4c37b4b365399", Pod:"calico-apiserver-78c5874598-mj8jx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f20e48556f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:23.863446 containerd[1590]: 2025-11-08 00:09:23.801 [INFO][5389] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" Nov 8 00:09:23.863446 containerd[1590]: 2025-11-08 00:09:23.802 [INFO][5389] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" iface="eth0" netns="" Nov 8 00:09:23.863446 containerd[1590]: 2025-11-08 00:09:23.802 [INFO][5389] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" Nov 8 00:09:23.863446 containerd[1590]: 2025-11-08 00:09:23.802 [INFO][5389] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" Nov 8 00:09:23.863446 containerd[1590]: 2025-11-08 00:09:23.831 [INFO][5396] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" HandleID="k8s-pod-network.36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-eth0" Nov 8 00:09:23.863446 containerd[1590]: 2025-11-08 00:09:23.831 [INFO][5396] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:23.863446 containerd[1590]: 2025-11-08 00:09:23.831 [INFO][5396] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:23.863446 containerd[1590]: 2025-11-08 00:09:23.847 [WARNING][5396] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" HandleID="k8s-pod-network.36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-eth0" Nov 8 00:09:23.863446 containerd[1590]: 2025-11-08 00:09:23.848 [INFO][5396] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" HandleID="k8s-pod-network.36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-eth0" Nov 8 00:09:23.863446 containerd[1590]: 2025-11-08 00:09:23.854 [INFO][5396] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:23.863446 containerd[1590]: 2025-11-08 00:09:23.858 [INFO][5389] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" Nov 8 00:09:23.863446 containerd[1590]: time="2025-11-08T00:09:23.863319559Z" level=info msg="TearDown network for sandbox \"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6\" successfully" Nov 8 00:09:23.863446 containerd[1590]: time="2025-11-08T00:09:23.863345919Z" level=info msg="StopPodSandbox for \"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6\" returns successfully" Nov 8 00:09:23.866782 containerd[1590]: time="2025-11-08T00:09:23.866307615Z" level=info msg="RemovePodSandbox for \"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6\"" Nov 8 00:09:23.866782 containerd[1590]: time="2025-11-08T00:09:23.866344096Z" level=info msg="Forcibly stopping sandbox \"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6\"" Nov 8 00:09:23.967220 containerd[1590]: 2025-11-08 00:09:23.925 [WARNING][5410] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-eth0", GenerateName:"calico-apiserver-78c5874598-", Namespace:"calico-apiserver", SelfLink:"", UID:"ffefce46-3638-4c95-bed3-200605f5f8d9", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78c5874598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"2e6d9c39ecc242d4b3c094aa51e4a807c3206e83f3feb47ccba4c37b4b365399", Pod:"calico-apiserver-78c5874598-mj8jx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f20e48556f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:23.967220 containerd[1590]: 2025-11-08 00:09:23.925 [INFO][5410] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" Nov 8 00:09:23.967220 containerd[1590]: 2025-11-08 00:09:23.925 [INFO][5410] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" iface="eth0" netns="" Nov 8 00:09:23.967220 containerd[1590]: 2025-11-08 00:09:23.925 [INFO][5410] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" Nov 8 00:09:23.967220 containerd[1590]: 2025-11-08 00:09:23.925 [INFO][5410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" Nov 8 00:09:23.967220 containerd[1590]: 2025-11-08 00:09:23.946 [INFO][5417] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" HandleID="k8s-pod-network.36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-eth0" Nov 8 00:09:23.967220 containerd[1590]: 2025-11-08 00:09:23.946 [INFO][5417] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:23.967220 containerd[1590]: 2025-11-08 00:09:23.946 [INFO][5417] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:23.967220 containerd[1590]: 2025-11-08 00:09:23.959 [WARNING][5417] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" HandleID="k8s-pod-network.36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-eth0" Nov 8 00:09:23.967220 containerd[1590]: 2025-11-08 00:09:23.959 [INFO][5417] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" HandleID="k8s-pod-network.36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--mj8jx-eth0" Nov 8 00:09:23.967220 containerd[1590]: 2025-11-08 00:09:23.961 [INFO][5417] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:23.967220 containerd[1590]: 2025-11-08 00:09:23.964 [INFO][5410] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6" Nov 8 00:09:23.967868 containerd[1590]: time="2025-11-08T00:09:23.967270683Z" level=info msg="TearDown network for sandbox \"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6\" successfully" Nov 8 00:09:23.995470 containerd[1590]: time="2025-11-08T00:09:23.994231592Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:09:23.995470 containerd[1590]: time="2025-11-08T00:09:23.994337674Z" level=info msg="RemovePodSandbox \"36a592faaec00eb6db33bc4c8bb3e0aec738c62d577d34c61b53802bf1858cb6\" returns successfully" Nov 8 00:09:23.997272 containerd[1590]: time="2025-11-08T00:09:23.996860002Z" level=info msg="StopPodSandbox for \"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7\"" Nov 8 00:09:24.110053 containerd[1590]: 2025-11-08 00:09:24.053 [WARNING][5434] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-eth0", GenerateName:"calico-apiserver-77bf6dfcdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"02c3f902-7bf4-4824-923c-48ba4e1e389c", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77bf6dfcdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"cc4e3aa6bc30c24865e7d35120c8329b762cd0c046971cef0e915795de6b22e9", Pod:"calico-apiserver-77bf6dfcdd-hptwz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43a17ef558a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:24.110053 containerd[1590]: 2025-11-08 00:09:24.053 [INFO][5434] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" Nov 8 00:09:24.110053 containerd[1590]: 2025-11-08 00:09:24.053 [INFO][5434] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" iface="eth0" netns="" Nov 8 00:09:24.110053 containerd[1590]: 2025-11-08 00:09:24.053 [INFO][5434] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" Nov 8 00:09:24.110053 containerd[1590]: 2025-11-08 00:09:24.053 [INFO][5434] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" Nov 8 00:09:24.110053 containerd[1590]: 2025-11-08 00:09:24.085 [INFO][5441] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" HandleID="k8s-pod-network.949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-eth0" Nov 8 00:09:24.110053 containerd[1590]: 2025-11-08 00:09:24.085 [INFO][5441] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:24.110053 containerd[1590]: 2025-11-08 00:09:24.085 [INFO][5441] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:24.110053 containerd[1590]: 2025-11-08 00:09:24.097 [WARNING][5441] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" HandleID="k8s-pod-network.949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-eth0" Nov 8 00:09:24.110053 containerd[1590]: 2025-11-08 00:09:24.097 [INFO][5441] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" HandleID="k8s-pod-network.949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-eth0" Nov 8 00:09:24.110053 containerd[1590]: 2025-11-08 00:09:24.099 [INFO][5441] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:24.110053 containerd[1590]: 2025-11-08 00:09:24.104 [INFO][5434] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" Nov 8 00:09:24.112508 containerd[1590]: time="2025-11-08T00:09:24.110098518Z" level=info msg="TearDown network for sandbox \"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7\" successfully" Nov 8 00:09:24.112508 containerd[1590]: time="2025-11-08T00:09:24.110132599Z" level=info msg="StopPodSandbox for \"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7\" returns successfully" Nov 8 00:09:24.112508 containerd[1590]: time="2025-11-08T00:09:24.111081137Z" level=info msg="RemovePodSandbox for \"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7\"" Nov 8 00:09:24.112508 containerd[1590]: time="2025-11-08T00:09:24.111592387Z" level=info msg="Forcibly stopping sandbox \"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7\"" Nov 8 00:09:24.213152 containerd[1590]: 2025-11-08 00:09:24.161 [WARNING][5455] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-eth0", GenerateName:"calico-apiserver-77bf6dfcdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"02c3f902-7bf4-4824-923c-48ba4e1e389c", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77bf6dfcdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"cc4e3aa6bc30c24865e7d35120c8329b762cd0c046971cef0e915795de6b22e9", Pod:"calico-apiserver-77bf6dfcdd-hptwz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43a17ef558a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:24.213152 containerd[1590]: 2025-11-08 00:09:24.161 [INFO][5455] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" Nov 8 00:09:24.213152 containerd[1590]: 2025-11-08 00:09:24.162 [INFO][5455] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" iface="eth0" netns="" Nov 8 00:09:24.213152 containerd[1590]: 2025-11-08 00:09:24.162 [INFO][5455] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" Nov 8 00:09:24.213152 containerd[1590]: 2025-11-08 00:09:24.162 [INFO][5455] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" Nov 8 00:09:24.213152 containerd[1590]: 2025-11-08 00:09:24.188 [INFO][5462] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" HandleID="k8s-pod-network.949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-eth0" Nov 8 00:09:24.213152 containerd[1590]: 2025-11-08 00:09:24.188 [INFO][5462] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:24.213152 containerd[1590]: 2025-11-08 00:09:24.188 [INFO][5462] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:24.213152 containerd[1590]: 2025-11-08 00:09:24.203 [WARNING][5462] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" HandleID="k8s-pod-network.949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-eth0" Nov 8 00:09:24.213152 containerd[1590]: 2025-11-08 00:09:24.203 [INFO][5462] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" HandleID="k8s-pod-network.949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--77bf6dfcdd--hptwz-eth0" Nov 8 00:09:24.213152 containerd[1590]: 2025-11-08 00:09:24.207 [INFO][5462] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:24.213152 containerd[1590]: 2025-11-08 00:09:24.210 [INFO][5455] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7" Nov 8 00:09:24.215337 containerd[1590]: time="2025-11-08T00:09:24.213201999Z" level=info msg="TearDown network for sandbox \"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7\" successfully" Nov 8 00:09:24.218371 containerd[1590]: time="2025-11-08T00:09:24.217927691Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:09:24.218371 containerd[1590]: time="2025-11-08T00:09:24.218205696Z" level=info msg="RemovePodSandbox \"949e4aedbcac01b24e7f7a5332a768e1ba8126743db00267b264151c7b75fee7\" returns successfully" Nov 8 00:09:24.220093 containerd[1590]: time="2025-11-08T00:09:24.220055052Z" level=info msg="StopPodSandbox for \"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b\"" Nov 8 00:09:24.302329 containerd[1590]: 2025-11-08 00:09:24.263 [WARNING][5476] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--68f9d864b5--mqg6n-eth0" Nov 8 00:09:24.302329 containerd[1590]: 2025-11-08 00:09:24.263 [INFO][5476] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" Nov 8 00:09:24.302329 containerd[1590]: 2025-11-08 00:09:24.263 [INFO][5476] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" iface="eth0" netns="" Nov 8 00:09:24.302329 containerd[1590]: 2025-11-08 00:09:24.263 [INFO][5476] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" Nov 8 00:09:24.302329 containerd[1590]: 2025-11-08 00:09:24.263 [INFO][5476] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" Nov 8 00:09:24.302329 containerd[1590]: 2025-11-08 00:09:24.284 [INFO][5484] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" HandleID="k8s-pod-network.e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--68f9d864b5--mqg6n-eth0" Nov 8 00:09:24.302329 containerd[1590]: 2025-11-08 00:09:24.284 [INFO][5484] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:24.302329 containerd[1590]: 2025-11-08 00:09:24.284 [INFO][5484] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:24.302329 containerd[1590]: 2025-11-08 00:09:24.295 [WARNING][5484] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" HandleID="k8s-pod-network.e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--68f9d864b5--mqg6n-eth0" Nov 8 00:09:24.302329 containerd[1590]: 2025-11-08 00:09:24.295 [INFO][5484] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" HandleID="k8s-pod-network.e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--68f9d864b5--mqg6n-eth0" Nov 8 00:09:24.302329 containerd[1590]: 2025-11-08 00:09:24.298 [INFO][5484] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:24.302329 containerd[1590]: 2025-11-08 00:09:24.300 [INFO][5476] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" Nov 8 00:09:24.303032 containerd[1590]: time="2025-11-08T00:09:24.302380690Z" level=info msg="TearDown network for sandbox \"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b\" successfully" Nov 8 00:09:24.303032 containerd[1590]: time="2025-11-08T00:09:24.302411931Z" level=info msg="StopPodSandbox for \"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b\" returns successfully" Nov 8 00:09:24.305071 containerd[1590]: time="2025-11-08T00:09:24.305025262Z" level=info msg="RemovePodSandbox for \"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b\"" Nov 8 00:09:24.305071 containerd[1590]: time="2025-11-08T00:09:24.305074463Z" level=info msg="Forcibly stopping sandbox \"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b\"" Nov 8 00:09:24.404039 containerd[1590]: 2025-11-08 00:09:24.356 [WARNING][5498] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" WorkloadEndpoint="ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--68f9d864b5--mqg6n-eth0" Nov 8 00:09:24.404039 containerd[1590]: 2025-11-08 00:09:24.356 [INFO][5498] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" Nov 8 00:09:24.404039 containerd[1590]: 2025-11-08 00:09:24.356 [INFO][5498] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" iface="eth0" netns="" Nov 8 00:09:24.404039 containerd[1590]: 2025-11-08 00:09:24.356 [INFO][5498] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" Nov 8 00:09:24.404039 containerd[1590]: 2025-11-08 00:09:24.356 [INFO][5498] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" Nov 8 00:09:24.404039 containerd[1590]: 2025-11-08 00:09:24.381 [INFO][5506] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" HandleID="k8s-pod-network.e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--68f9d864b5--mqg6n-eth0" Nov 8 00:09:24.404039 containerd[1590]: 2025-11-08 00:09:24.381 [INFO][5506] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:24.404039 containerd[1590]: 2025-11-08 00:09:24.381 [INFO][5506] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:24.404039 containerd[1590]: 2025-11-08 00:09:24.394 [WARNING][5506] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" HandleID="k8s-pod-network.e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--68f9d864b5--mqg6n-eth0" Nov 8 00:09:24.404039 containerd[1590]: 2025-11-08 00:09:24.394 [INFO][5506] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" HandleID="k8s-pod-network.e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-whisker--68f9d864b5--mqg6n-eth0" Nov 8 00:09:24.404039 containerd[1590]: 2025-11-08 00:09:24.400 [INFO][5506] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:24.404039 containerd[1590]: 2025-11-08 00:09:24.402 [INFO][5498] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b" Nov 8 00:09:24.405251 containerd[1590]: time="2025-11-08T00:09:24.404076585Z" level=info msg="TearDown network for sandbox \"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b\" successfully" Nov 8 00:09:24.410570 containerd[1590]: time="2025-11-08T00:09:24.410507909Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:09:24.410570 containerd[1590]: time="2025-11-08T00:09:24.410577031Z" level=info msg="RemovePodSandbox \"e4948dcf06bd8ad3d135810afd0081148af97c78a25df987ddae49bce079e45b\" returns successfully" Nov 8 00:09:24.411207 containerd[1590]: time="2025-11-08T00:09:24.411175962Z" level=info msg="StopPodSandbox for \"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04\"" Nov 8 00:09:24.507475 containerd[1590]: 2025-11-08 00:09:24.460 [WARNING][5521] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-eth0", GenerateName:"calico-kube-controllers-54b5cccc46-", Namespace:"calico-system", SelfLink:"", UID:"5c488e78-d3ba-4197-ab37-75734ccb9129", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54b5cccc46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"a4a953cca3be936efbe327756bbc753349496b123b27490eaf8abd7637738d40", Pod:"calico-kube-controllers-54b5cccc46-bcg68", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.133/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali583f5271cb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:24.507475 containerd[1590]: 2025-11-08 00:09:24.461 [INFO][5521] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" Nov 8 00:09:24.507475 containerd[1590]: 2025-11-08 00:09:24.461 [INFO][5521] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" iface="eth0" netns="" Nov 8 00:09:24.507475 containerd[1590]: 2025-11-08 00:09:24.461 [INFO][5521] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" Nov 8 00:09:24.507475 containerd[1590]: 2025-11-08 00:09:24.461 [INFO][5521] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" Nov 8 00:09:24.507475 containerd[1590]: 2025-11-08 00:09:24.484 [INFO][5528] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" HandleID="k8s-pod-network.b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-eth0" Nov 8 00:09:24.507475 containerd[1590]: 2025-11-08 00:09:24.485 [INFO][5528] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:24.507475 containerd[1590]: 2025-11-08 00:09:24.485 [INFO][5528] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:24.507475 containerd[1590]: 2025-11-08 00:09:24.498 [WARNING][5528] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" HandleID="k8s-pod-network.b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-eth0" Nov 8 00:09:24.507475 containerd[1590]: 2025-11-08 00:09:24.498 [INFO][5528] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" HandleID="k8s-pod-network.b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-eth0" Nov 8 00:09:24.507475 containerd[1590]: 2025-11-08 00:09:24.501 [INFO][5528] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:24.507475 containerd[1590]: 2025-11-08 00:09:24.504 [INFO][5521] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" Nov 8 00:09:24.507475 containerd[1590]: time="2025-11-08T00:09:24.507440911Z" level=info msg="TearDown network for sandbox \"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04\" successfully" Nov 8 00:09:24.507475 containerd[1590]: time="2025-11-08T00:09:24.507476032Z" level=info msg="StopPodSandbox for \"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04\" returns successfully" Nov 8 00:09:24.509662 containerd[1590]: time="2025-11-08T00:09:24.508171045Z" level=info msg="RemovePodSandbox for \"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04\"" Nov 8 00:09:24.509662 containerd[1590]: time="2025-11-08T00:09:24.509228146Z" level=info msg="Forcibly stopping sandbox \"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04\"" Nov 8 00:09:24.601584 containerd[1590]: 2025-11-08 00:09:24.550 [WARNING][5542] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-eth0", GenerateName:"calico-kube-controllers-54b5cccc46-", Namespace:"calico-system", SelfLink:"", UID:"5c488e78-d3ba-4197-ab37-75734ccb9129", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54b5cccc46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"a4a953cca3be936efbe327756bbc753349496b123b27490eaf8abd7637738d40", Pod:"calico-kube-controllers-54b5cccc46-bcg68", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali583f5271cb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:24.601584 containerd[1590]: 2025-11-08 00:09:24.550 [INFO][5542] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" Nov 8 00:09:24.601584 containerd[1590]: 2025-11-08 00:09:24.550 [INFO][5542] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" iface="eth0" netns="" Nov 8 00:09:24.601584 containerd[1590]: 2025-11-08 00:09:24.550 [INFO][5542] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" Nov 8 00:09:24.601584 containerd[1590]: 2025-11-08 00:09:24.550 [INFO][5542] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" Nov 8 00:09:24.601584 containerd[1590]: 2025-11-08 00:09:24.582 [INFO][5549] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" HandleID="k8s-pod-network.b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-eth0" Nov 8 00:09:24.601584 containerd[1590]: 2025-11-08 00:09:24.582 [INFO][5549] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:24.601584 containerd[1590]: 2025-11-08 00:09:24.582 [INFO][5549] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:24.601584 containerd[1590]: 2025-11-08 00:09:24.593 [WARNING][5549] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" HandleID="k8s-pod-network.b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-eth0" Nov 8 00:09:24.601584 containerd[1590]: 2025-11-08 00:09:24.593 [INFO][5549] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" HandleID="k8s-pod-network.b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--kube--controllers--54b5cccc46--bcg68-eth0" Nov 8 00:09:24.601584 containerd[1590]: 2025-11-08 00:09:24.596 [INFO][5549] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:24.601584 containerd[1590]: 2025-11-08 00:09:24.599 [INFO][5542] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04" Nov 8 00:09:24.601584 containerd[1590]: time="2025-11-08T00:09:24.601072089Z" level=info msg="TearDown network for sandbox \"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04\" successfully" Nov 8 00:09:24.610415 containerd[1590]: time="2025-11-08T00:09:24.610239227Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:09:24.610415 containerd[1590]: time="2025-11-08T00:09:24.610317428Z" level=info msg="RemovePodSandbox \"b682bac87db032dc9f48cac2e01c3118a3c1ada1ba87e5a4acedae813089cc04\" returns successfully" Nov 8 00:09:24.612106 containerd[1590]: time="2025-11-08T00:09:24.612073902Z" level=info msg="StopPodSandbox for \"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035\"" Nov 8 00:09:24.699551 containerd[1590]: 2025-11-08 00:09:24.656 [WARNING][5564] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"72f7d776-6bd7-4d33-8b73-a5febd833bf0", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"00d7b74413141725e4feb98fde663a9673437854dd1aef95e2b9059fa3ac318a", Pod:"goldmane-666569f655-l7kch", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.105.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"califfeabce95f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:24.699551 containerd[1590]: 2025-11-08 00:09:24.656 [INFO][5564] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" Nov 8 00:09:24.699551 containerd[1590]: 2025-11-08 00:09:24.658 [INFO][5564] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" iface="eth0" netns="" Nov 8 00:09:24.699551 containerd[1590]: 2025-11-08 00:09:24.658 [INFO][5564] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" Nov 8 00:09:24.699551 containerd[1590]: 2025-11-08 00:09:24.658 [INFO][5564] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" Nov 8 00:09:24.699551 containerd[1590]: 2025-11-08 00:09:24.682 [INFO][5571] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" HandleID="k8s-pod-network.68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-eth0" Nov 8 00:09:24.699551 containerd[1590]: 2025-11-08 00:09:24.682 [INFO][5571] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:24.699551 containerd[1590]: 2025-11-08 00:09:24.682 [INFO][5571] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:24.699551 containerd[1590]: 2025-11-08 00:09:24.691 [WARNING][5571] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" HandleID="k8s-pod-network.68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-eth0" Nov 8 00:09:24.699551 containerd[1590]: 2025-11-08 00:09:24.692 [INFO][5571] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" HandleID="k8s-pod-network.68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-eth0" Nov 8 00:09:24.699551 containerd[1590]: 2025-11-08 00:09:24.694 [INFO][5571] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:24.699551 containerd[1590]: 2025-11-08 00:09:24.695 [INFO][5564] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" Nov 8 00:09:24.700769 containerd[1590]: time="2025-11-08T00:09:24.699635122Z" level=info msg="TearDown network for sandbox \"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035\" successfully" Nov 8 00:09:24.700769 containerd[1590]: time="2025-11-08T00:09:24.699700403Z" level=info msg="StopPodSandbox for \"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035\" returns successfully" Nov 8 00:09:24.701272 containerd[1590]: time="2025-11-08T00:09:24.701206272Z" level=info msg="RemovePodSandbox for \"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035\"" Nov 8 00:09:24.701387 containerd[1590]: time="2025-11-08T00:09:24.701281074Z" level=info msg="Forcibly stopping sandbox \"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035\"" Nov 8 00:09:24.811110 containerd[1590]: 2025-11-08 00:09:24.765 [WARNING][5585] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"72f7d776-6bd7-4d33-8b73-a5febd833bf0", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"00d7b74413141725e4feb98fde663a9673437854dd1aef95e2b9059fa3ac318a", Pod:"goldmane-666569f655-l7kch", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.105.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califfeabce95f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:24.811110 containerd[1590]: 2025-11-08 00:09:24.766 [INFO][5585] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" Nov 8 00:09:24.811110 containerd[1590]: 2025-11-08 00:09:24.766 [INFO][5585] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" iface="eth0" netns="" Nov 8 00:09:24.811110 containerd[1590]: 2025-11-08 00:09:24.766 [INFO][5585] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" Nov 8 00:09:24.811110 containerd[1590]: 2025-11-08 00:09:24.766 [INFO][5585] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" Nov 8 00:09:24.811110 containerd[1590]: 2025-11-08 00:09:24.787 [INFO][5593] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" HandleID="k8s-pod-network.68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-eth0" Nov 8 00:09:24.811110 containerd[1590]: 2025-11-08 00:09:24.787 [INFO][5593] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:24.811110 containerd[1590]: 2025-11-08 00:09:24.787 [INFO][5593] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:24.811110 containerd[1590]: 2025-11-08 00:09:24.799 [WARNING][5593] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" HandleID="k8s-pod-network.68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-eth0" Nov 8 00:09:24.811110 containerd[1590]: 2025-11-08 00:09:24.799 [INFO][5593] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" HandleID="k8s-pod-network.68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-goldmane--666569f655--l7kch-eth0" Nov 8 00:09:24.811110 containerd[1590]: 2025-11-08 00:09:24.801 [INFO][5593] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:24.811110 containerd[1590]: 2025-11-08 00:09:24.806 [INFO][5585] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035" Nov 8 00:09:24.811536 containerd[1590]: time="2025-11-08T00:09:24.811102726Z" level=info msg="TearDown network for sandbox \"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035\" successfully" Nov 8 00:09:24.817256 containerd[1590]: time="2025-11-08T00:09:24.817208564Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:09:24.817539 containerd[1590]: time="2025-11-08T00:09:24.817278326Z" level=info msg="RemovePodSandbox \"68664efddab9c1b45fc956ca05db227cba69ee775ea8b52f0b0102fbb93cd035\" returns successfully" Nov 8 00:09:24.818000 containerd[1590]: time="2025-11-08T00:09:24.817831256Z" level=info msg="StopPodSandbox for \"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e\"" Nov 8 00:09:24.909178 containerd[1590]: 2025-11-08 00:09:24.860 [WARNING][5607] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"57f11a43-3690-45d9-8837-b8df56bb1a07", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"69bcc48cf697b7077894e44557d361baf402dbb1d8099e85ec9a04929a99a95e", Pod:"csi-node-driver-6n7x9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali41265c6d471", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:24.909178 containerd[1590]: 2025-11-08 00:09:24.861 [INFO][5607] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" Nov 8 00:09:24.909178 containerd[1590]: 2025-11-08 00:09:24.861 [INFO][5607] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" iface="eth0" netns="" Nov 8 00:09:24.909178 containerd[1590]: 2025-11-08 00:09:24.861 [INFO][5607] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" Nov 8 00:09:24.909178 containerd[1590]: 2025-11-08 00:09:24.861 [INFO][5607] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" Nov 8 00:09:24.909178 containerd[1590]: 2025-11-08 00:09:24.885 [INFO][5614] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" HandleID="k8s-pod-network.9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-eth0" Nov 8 00:09:24.909178 containerd[1590]: 2025-11-08 00:09:24.885 [INFO][5614] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:24.909178 containerd[1590]: 2025-11-08 00:09:24.885 [INFO][5614] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:24.909178 containerd[1590]: 2025-11-08 00:09:24.900 [WARNING][5614] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" HandleID="k8s-pod-network.9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-eth0" Nov 8 00:09:24.909178 containerd[1590]: 2025-11-08 00:09:24.900 [INFO][5614] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" HandleID="k8s-pod-network.9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-eth0" Nov 8 00:09:24.909178 containerd[1590]: 2025-11-08 00:09:24.903 [INFO][5614] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:24.909178 containerd[1590]: 2025-11-08 00:09:24.906 [INFO][5607] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" Nov 8 00:09:24.910770 containerd[1590]: time="2025-11-08T00:09:24.909809922Z" level=info msg="TearDown network for sandbox \"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e\" successfully" Nov 8 00:09:24.910770 containerd[1590]: time="2025-11-08T00:09:24.909935484Z" level=info msg="StopPodSandbox for \"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e\" returns successfully" Nov 8 00:09:24.911721 containerd[1590]: time="2025-11-08T00:09:24.911410193Z" level=info msg="RemovePodSandbox for \"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e\"" Nov 8 00:09:24.911721 containerd[1590]: time="2025-11-08T00:09:24.911443194Z" level=info msg="Forcibly stopping sandbox \"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e\"" Nov 8 00:09:25.000015 containerd[1590]: 2025-11-08 00:09:24.956 [WARNING][5629] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"57f11a43-3690-45d9-8837-b8df56bb1a07", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"69bcc48cf697b7077894e44557d361baf402dbb1d8099e85ec9a04929a99a95e", Pod:"csi-node-driver-6n7x9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali41265c6d471", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:25.000015 containerd[1590]: 2025-11-08 00:09:24.956 [INFO][5629] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" Nov 8 00:09:25.000015 containerd[1590]: 2025-11-08 00:09:24.956 [INFO][5629] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" iface="eth0" netns="" Nov 8 00:09:25.000015 containerd[1590]: 2025-11-08 00:09:24.957 [INFO][5629] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" Nov 8 00:09:25.000015 containerd[1590]: 2025-11-08 00:09:24.957 [INFO][5629] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" Nov 8 00:09:25.000015 containerd[1590]: 2025-11-08 00:09:24.982 [INFO][5636] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" HandleID="k8s-pod-network.9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-eth0" Nov 8 00:09:25.000015 containerd[1590]: 2025-11-08 00:09:24.982 [INFO][5636] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:25.000015 containerd[1590]: 2025-11-08 00:09:24.982 [INFO][5636] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:25.000015 containerd[1590]: 2025-11-08 00:09:24.992 [WARNING][5636] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" HandleID="k8s-pod-network.9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-eth0" Nov 8 00:09:25.000015 containerd[1590]: 2025-11-08 00:09:24.992 [INFO][5636] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" HandleID="k8s-pod-network.9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-csi--node--driver--6n7x9-eth0" Nov 8 00:09:25.000015 containerd[1590]: 2025-11-08 00:09:24.994 [INFO][5636] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:25.000015 containerd[1590]: 2025-11-08 00:09:24.997 [INFO][5629] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e" Nov 8 00:09:25.000015 containerd[1590]: time="2025-11-08T00:09:24.999742108Z" level=info msg="TearDown network for sandbox \"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e\" successfully" Nov 8 00:09:25.008093 containerd[1590]: time="2025-11-08T00:09:25.007857149Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:09:25.008093 containerd[1590]: time="2025-11-08T00:09:25.007940550Z" level=info msg="RemovePodSandbox \"9391dabe2f9e1614f2bd53f73d7770ce0506177917ffcc4c402a2c7a1c9efc4e\" returns successfully" Nov 8 00:09:25.009289 containerd[1590]: time="2025-11-08T00:09:25.008441680Z" level=info msg="StopPodSandbox for \"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed\"" Nov 8 00:09:25.121391 containerd[1590]: 2025-11-08 00:09:25.076 [WARNING][5650] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-eth0", GenerateName:"calico-apiserver-78c5874598-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ec2eb36-2470-4386-96a3-fe6dd8fc602f", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78c5874598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"4fcfe846c0a21f78a890100e943d899dd74c3ab1876ad66f51643d8a65b169f3", Pod:"calico-apiserver-78c5874598-gtq72", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe0a6c27343", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:25.121391 containerd[1590]: 2025-11-08 00:09:25.077 [INFO][5650] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" Nov 8 00:09:25.121391 containerd[1590]: 2025-11-08 00:09:25.077 [INFO][5650] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" iface="eth0" netns="" Nov 8 00:09:25.121391 containerd[1590]: 2025-11-08 00:09:25.077 [INFO][5650] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" Nov 8 00:09:25.121391 containerd[1590]: 2025-11-08 00:09:25.077 [INFO][5650] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" Nov 8 00:09:25.121391 containerd[1590]: 2025-11-08 00:09:25.104 [INFO][5657] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" HandleID="k8s-pod-network.4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-eth0" Nov 8 00:09:25.121391 containerd[1590]: 2025-11-08 00:09:25.104 [INFO][5657] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:25.121391 containerd[1590]: 2025-11-08 00:09:25.104 [INFO][5657] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:25.121391 containerd[1590]: 2025-11-08 00:09:25.115 [WARNING][5657] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" HandleID="k8s-pod-network.4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-eth0" Nov 8 00:09:25.121391 containerd[1590]: 2025-11-08 00:09:25.115 [INFO][5657] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" HandleID="k8s-pod-network.4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-eth0" Nov 8 00:09:25.121391 containerd[1590]: 2025-11-08 00:09:25.117 [INFO][5657] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:25.121391 containerd[1590]: 2025-11-08 00:09:25.119 [INFO][5650] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" Nov 8 00:09:25.123483 containerd[1590]: time="2025-11-08T00:09:25.121393849Z" level=info msg="TearDown network for sandbox \"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed\" successfully" Nov 8 00:09:25.123483 containerd[1590]: time="2025-11-08T00:09:25.121430410Z" level=info msg="StopPodSandbox for \"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed\" returns successfully" Nov 8 00:09:25.123483 containerd[1590]: time="2025-11-08T00:09:25.122853478Z" level=info msg="RemovePodSandbox for \"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed\"" Nov 8 00:09:25.123625 containerd[1590]: time="2025-11-08T00:09:25.123310648Z" level=info msg="Forcibly stopping sandbox \"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed\"" Nov 8 00:09:25.209899 containerd[1590]: 2025-11-08 00:09:25.169 [WARNING][5672] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-eth0", GenerateName:"calico-apiserver-78c5874598-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ec2eb36-2470-4386-96a3-fe6dd8fc602f", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78c5874598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"4fcfe846c0a21f78a890100e943d899dd74c3ab1876ad66f51643d8a65b169f3", Pod:"calico-apiserver-78c5874598-gtq72", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe0a6c27343", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:25.209899 containerd[1590]: 2025-11-08 00:09:25.169 [INFO][5672] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" Nov 8 00:09:25.209899 containerd[1590]: 2025-11-08 00:09:25.169 [INFO][5672] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" iface="eth0" netns="" Nov 8 00:09:25.209899 containerd[1590]: 2025-11-08 00:09:25.169 [INFO][5672] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" Nov 8 00:09:25.209899 containerd[1590]: 2025-11-08 00:09:25.169 [INFO][5672] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" Nov 8 00:09:25.209899 containerd[1590]: 2025-11-08 00:09:25.189 [INFO][5679] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" HandleID="k8s-pod-network.4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-eth0" Nov 8 00:09:25.209899 containerd[1590]: 2025-11-08 00:09:25.189 [INFO][5679] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:25.209899 containerd[1590]: 2025-11-08 00:09:25.189 [INFO][5679] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:25.209899 containerd[1590]: 2025-11-08 00:09:25.201 [WARNING][5679] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" HandleID="k8s-pod-network.4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-eth0" Nov 8 00:09:25.209899 containerd[1590]: 2025-11-08 00:09:25.202 [INFO][5679] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" HandleID="k8s-pod-network.4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-calico--apiserver--78c5874598--gtq72-eth0" Nov 8 00:09:25.209899 containerd[1590]: 2025-11-08 00:09:25.205 [INFO][5679] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:25.209899 containerd[1590]: 2025-11-08 00:09:25.207 [INFO][5672] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed" Nov 8 00:09:25.210474 containerd[1590]: time="2025-11-08T00:09:25.209968533Z" level=info msg="TearDown network for sandbox \"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed\" successfully" Nov 8 00:09:25.214841 containerd[1590]: time="2025-11-08T00:09:25.214744028Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:09:25.214841 containerd[1590]: time="2025-11-08T00:09:25.214814470Z" level=info msg="RemovePodSandbox \"4243a8ef083fd976a6989b8bd3b8302d06cfd8e89bf1598e6e7cd038d1a143ed\" returns successfully" Nov 8 00:09:25.216234 containerd[1590]: time="2025-11-08T00:09:25.216196097Z" level=info msg="StopPodSandbox for \"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b\"" Nov 8 00:09:25.328115 containerd[1590]: 2025-11-08 00:09:25.281 [WARNING][5693] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e1e70c0c-dd7e-4d32-9c6b-cbad678c0ea8", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0", Pod:"coredns-668d6bf9bc-fsp2l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliada1cac9cdc", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:25.328115 containerd[1590]: 2025-11-08 00:09:25.282 [INFO][5693] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" Nov 8 00:09:25.328115 containerd[1590]: 2025-11-08 00:09:25.282 [INFO][5693] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" iface="eth0" netns="" Nov 8 00:09:25.328115 containerd[1590]: 2025-11-08 00:09:25.282 [INFO][5693] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" Nov 8 00:09:25.328115 containerd[1590]: 2025-11-08 00:09:25.282 [INFO][5693] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" Nov 8 00:09:25.328115 containerd[1590]: 2025-11-08 00:09:25.303 [INFO][5701] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" HandleID="k8s-pod-network.3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-eth0" Nov 8 00:09:25.328115 containerd[1590]: 2025-11-08 00:09:25.303 [INFO][5701] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:09:25.328115 containerd[1590]: 2025-11-08 00:09:25.303 [INFO][5701] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:25.328115 containerd[1590]: 2025-11-08 00:09:25.313 [WARNING][5701] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" HandleID="k8s-pod-network.3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-eth0" Nov 8 00:09:25.328115 containerd[1590]: 2025-11-08 00:09:25.313 [INFO][5701] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" HandleID="k8s-pod-network.3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-eth0" Nov 8 00:09:25.328115 containerd[1590]: 2025-11-08 00:09:25.322 [INFO][5701] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:25.328115 containerd[1590]: 2025-11-08 00:09:25.325 [INFO][5693] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" Nov 8 00:09:25.329899 containerd[1590]: time="2025-11-08T00:09:25.328142886Z" level=info msg="TearDown network for sandbox \"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b\" successfully" Nov 8 00:09:25.329899 containerd[1590]: time="2025-11-08T00:09:25.328188647Z" level=info msg="StopPodSandbox for \"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b\" returns successfully" Nov 8 00:09:25.329899 containerd[1590]: time="2025-11-08T00:09:25.328867061Z" level=info msg="RemovePodSandbox for \"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b\"" Nov 8 00:09:25.329899 containerd[1590]: time="2025-11-08T00:09:25.328942782Z" level=info msg="Forcibly stopping sandbox \"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b\"" Nov 8 00:09:25.414900 containerd[1590]: 2025-11-08 00:09:25.371 [WARNING][5715] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e1e70c0c-dd7e-4d32-9c6b-cbad678c0ea8", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3f5a11d2fe", ContainerID:"67de42a4f205fac69203197375bc12ad049ee832a213800691c6db1ac5bfb8f0", Pod:"coredns-668d6bf9bc-fsp2l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliada1cac9cdc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:25.414900 containerd[1590]: 
2025-11-08 00:09:25.372 [INFO][5715] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" Nov 8 00:09:25.414900 containerd[1590]: 2025-11-08 00:09:25.372 [INFO][5715] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" iface="eth0" netns="" Nov 8 00:09:25.414900 containerd[1590]: 2025-11-08 00:09:25.372 [INFO][5715] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" Nov 8 00:09:25.414900 containerd[1590]: 2025-11-08 00:09:25.372 [INFO][5715] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" Nov 8 00:09:25.414900 containerd[1590]: 2025-11-08 00:09:25.393 [INFO][5722] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" HandleID="k8s-pod-network.3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-eth0" Nov 8 00:09:25.414900 containerd[1590]: 2025-11-08 00:09:25.393 [INFO][5722] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:25.414900 containerd[1590]: 2025-11-08 00:09:25.393 [INFO][5722] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:25.414900 containerd[1590]: 2025-11-08 00:09:25.407 [WARNING][5722] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" HandleID="k8s-pod-network.3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-eth0" Nov 8 00:09:25.414900 containerd[1590]: 2025-11-08 00:09:25.407 [INFO][5722] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" HandleID="k8s-pod-network.3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" Workload="ci--4081--3--6--n--3f5a11d2fe-k8s-coredns--668d6bf9bc--fsp2l-eth0" Nov 8 00:09:25.414900 containerd[1590]: 2025-11-08 00:09:25.409 [INFO][5722] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:25.414900 containerd[1590]: 2025-11-08 00:09:25.412 [INFO][5715] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b" Nov 8 00:09:25.414900 containerd[1590]: time="2025-11-08T00:09:25.414881414Z" level=info msg="TearDown network for sandbox \"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b\" successfully" Nov 8 00:09:25.418483 containerd[1590]: time="2025-11-08T00:09:25.418430524Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:09:25.418602 containerd[1590]: time="2025-11-08T00:09:25.418546526Z" level=info msg="RemovePodSandbox \"3f7d2e49c59770edab866315714f904ff7ee8a1dff0c57583003ea9272025e8b\" returns successfully" Nov 8 00:09:25.572737 containerd[1590]: time="2025-11-08T00:09:25.572664315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:09:25.918360 containerd[1590]: time="2025-11-08T00:09:25.918107274Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:25.920347 containerd[1590]: time="2025-11-08T00:09:25.920051833Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:09:25.920347 containerd[1590]: time="2025-11-08T00:09:25.920258477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:09:25.920609 kubelet[2774]: E1108 00:09:25.920518 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:09:25.920609 kubelet[2774]: E1108 00:09:25.920595 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:09:25.921208 kubelet[2774]: E1108 00:09:25.920737 2774 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8b49b9874aa24b60b25840c8ea795204,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d5shx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-579fd6c6df-bcwhq_calico-system(3005ab6c-6899-4c7c-9cae-4c79f44757c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:25.924460 containerd[1590]: time="2025-11-08T00:09:25.924364399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 
00:09:26.269721 containerd[1590]: time="2025-11-08T00:09:26.269655845Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:26.272622 containerd[1590]: time="2025-11-08T00:09:26.272457902Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:09:26.272622 containerd[1590]: time="2025-11-08T00:09:26.272546824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:09:26.272942 kubelet[2774]: E1108 00:09:26.272799 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:09:26.272942 kubelet[2774]: E1108 00:09:26.272875 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:09:26.273476 kubelet[2774]: E1108 00:09:26.273054 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5shx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-579fd6c6df-bcwhq_calico-system(3005ab6c-6899-4c7c-9cae-4c79f44757c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:26.274385 kubelet[2774]: E1108 00:09:26.274235 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-579fd6c6df-bcwhq" podUID="3005ab6c-6899-4c7c-9cae-4c79f44757c6" Nov 8 00:09:29.573421 containerd[1590]: time="2025-11-08T00:09:29.572931600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:09:29.917139 containerd[1590]: time="2025-11-08T00:09:29.916909166Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:29.918758 containerd[1590]: time="2025-11-08T00:09:29.918662285Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:09:29.918895 containerd[1590]: time="2025-11-08T00:09:29.918814248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 
00:09:29.919089 kubelet[2774]: E1108 00:09:29.919034 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:29.919668 kubelet[2774]: E1108 00:09:29.919106 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:29.919668 kubelet[2774]: E1108 00:09:29.919307 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29lz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78c5874598-gtq72_calico-apiserver(7ec2eb36-2470-4386-96a3-fe6dd8fc602f): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:29.921114 kubelet[2774]: E1108 00:09:29.921033 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-gtq72" podUID="7ec2eb36-2470-4386-96a3-fe6dd8fc602f" Nov 8 00:09:30.572064 containerd[1590]: time="2025-11-08T00:09:30.571411134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:09:30.900744 containerd[1590]: time="2025-11-08T00:09:30.900533958Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:30.904208 containerd[1590]: time="2025-11-08T00:09:30.904129998Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:09:30.905009 containerd[1590]: time="2025-11-08T00:09:30.904253201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:09:30.905168 kubelet[2774]: E1108 00:09:30.904433 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:09:30.905168 kubelet[2774]: E1108 00:09:30.904482 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:09:30.905168 kubelet[2774]: E1108 00:09:30.904627 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fwxxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubP
ath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-l7kch_calico-system(72f7d776-6bd7-4d33-8b73-a5febd833bf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:30.906629 kubelet[2774]: E1108 00:09:30.906572 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l7kch" podUID="72f7d776-6bd7-4d33-8b73-a5febd833bf0" Nov 8 00:09:31.572521 containerd[1590]: time="2025-11-08T00:09:31.571118075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:09:31.949124 containerd[1590]: time="2025-11-08T00:09:31.949037298Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:31.950864 containerd[1590]: time="2025-11-08T00:09:31.950798177Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:09:31.951008 containerd[1590]: time="2025-11-08T00:09:31.950974381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:09:31.951275 kubelet[2774]: E1108 00:09:31.951213 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:31.952949 kubelet[2774]: E1108 00:09:31.951307 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:31.952949 kubelet[2774]: E1108 00:09:31.951536 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lhdmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78c5874598-mj8jx_calico-apiserver(ffefce46-3638-4c95-bed3-200605f5f8d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:31.953083 kubelet[2774]: E1108 00:09:31.953027 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-mj8jx" podUID="ffefce46-3638-4c95-bed3-200605f5f8d9" Nov 8 00:09:32.579585 containerd[1590]: time="2025-11-08T00:09:32.579315536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:09:32.921023 containerd[1590]: 
time="2025-11-08T00:09:32.920604588Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:32.922730 containerd[1590]: time="2025-11-08T00:09:32.922661195Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:09:32.923004 containerd[1590]: time="2025-11-08T00:09:32.922737397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:09:32.923325 kubelet[2774]: E1108 00:09:32.923283 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:32.923440 kubelet[2774]: E1108 00:09:32.923336 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:32.923510 kubelet[2774]: E1108 00:09:32.923462 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-94xpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77bf6dfcdd-hptwz_calico-apiserver(02c3f902-7bf4-4824-923c-48ba4e1e389c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:32.925338 kubelet[2774]: E1108 00:09:32.924990 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf6dfcdd-hptwz" podUID="02c3f902-7bf4-4824-923c-48ba4e1e389c" Nov 8 00:09:33.573342 containerd[1590]: time="2025-11-08T00:09:33.571722589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:09:33.934555 containerd[1590]: time="2025-11-08T00:09:33.934340552Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:33.938717 containerd[1590]: time="2025-11-08T00:09:33.936125993Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:09:33.938717 containerd[1590]: time="2025-11-08T00:09:33.936362799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:09:33.938994 kubelet[2774]: E1108 00:09:33.936607 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:09:33.938994 kubelet[2774]: E1108 00:09:33.936657 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:09:33.938994 kubelet[2774]: E1108 00:09:33.936858 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pghwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},
LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-54b5cccc46-bcg68_calico-system(5c488e78-d3ba-4197-ab37-75734ccb9129): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:33.939819 containerd[1590]: time="2025-11-08T00:09:33.939567474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:09:33.946184 kubelet[2774]: E1108 00:09:33.944482 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54b5cccc46-bcg68" podUID="5c488e78-d3ba-4197-ab37-75734ccb9129" Nov 8 00:09:34.312546 containerd[1590]: time="2025-11-08T00:09:34.312475475Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:34.314072 containerd[1590]: time="2025-11-08T00:09:34.313926629Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:09:34.314072 containerd[1590]: time="2025-11-08T00:09:34.314016032Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:09:34.315767 kubelet[2774]: E1108 00:09:34.314250 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:09:34.315767 kubelet[2774]: E1108 00:09:34.314309 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:09:34.315767 kubelet[2774]: E1108 00:09:34.314459 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rttwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6n7x9_calico-system(57f11a43-3690-45d9-8837-b8df56bb1a07): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:34.317506 containerd[1590]: time="2025-11-08T00:09:34.317276269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:09:34.692220 containerd[1590]: time="2025-11-08T00:09:34.692058298Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:34.693844 containerd[1590]: time="2025-11-08T00:09:34.693691777Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:09:34.693844 containerd[1590]: time="2025-11-08T00:09:34.693790699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:09:34.694039 kubelet[2774]: E1108 00:09:34.693938 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:09:34.694039 kubelet[2774]: E1108 00:09:34.694018 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:09:34.694395 kubelet[2774]: E1108 00:09:34.694147 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rttwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Contai
nerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6n7x9_calico-system(57f11a43-3690-45d9-8837-b8df56bb1a07): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:34.695570 kubelet[2774]: E1108 00:09:34.695461 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6n7x9" podUID="57f11a43-3690-45d9-8837-b8df56bb1a07" Nov 8 00:09:41.572435 kubelet[2774]: E1108 00:09:41.572363 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-579fd6c6df-bcwhq" podUID="3005ab6c-6899-4c7c-9cae-4c79f44757c6" Nov 8 00:09:42.572042 kubelet[2774]: E1108 00:09:42.571935 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l7kch" podUID="72f7d776-6bd7-4d33-8b73-a5febd833bf0" Nov 8 00:09:43.573631 kubelet[2774]: E1108 00:09:43.573583 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-mj8jx" podUID="ffefce46-3638-4c95-bed3-200605f5f8d9" Nov 8 00:09:44.572333 kubelet[2774]: E1108 00:09:44.571500 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf6dfcdd-hptwz" podUID="02c3f902-7bf4-4824-923c-48ba4e1e389c" Nov 8 00:09:45.575996 kubelet[2774]: E1108 00:09:45.574490 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-gtq72" podUID="7ec2eb36-2470-4386-96a3-fe6dd8fc602f" Nov 8 00:09:47.580042 kubelet[2774]: E1108 00:09:47.579934 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6n7x9" 
podUID="57f11a43-3690-45d9-8837-b8df56bb1a07" Nov 8 00:09:49.571058 kubelet[2774]: E1108 00:09:49.571003 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54b5cccc46-bcg68" podUID="5c488e78-d3ba-4197-ab37-75734ccb9129" Nov 8 00:09:53.575699 containerd[1590]: time="2025-11-08T00:09:53.575644191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:09:53.927765 containerd[1590]: time="2025-11-08T00:09:53.927414411Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:53.930240 containerd[1590]: time="2025-11-08T00:09:53.930177091Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:09:53.930348 containerd[1590]: time="2025-11-08T00:09:53.930307575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:09:53.931454 kubelet[2774]: E1108 00:09:53.930540 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:09:53.931454 kubelet[2774]: E1108 00:09:53.930600 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:09:53.931454 kubelet[2774]: E1108 00:09:53.930720 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fwxxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveRead
Only:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-l7kch_calico-system(72f7d776-6bd7-4d33-8b73-a5febd833bf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:53.932298 kubelet[2774]: E1108 00:09:53.932241 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-l7kch" podUID="72f7d776-6bd7-4d33-8b73-a5febd833bf0" Nov 8 00:09:54.580508 containerd[1590]: time="2025-11-08T00:09:54.579999486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:09:54.932377 containerd[1590]: time="2025-11-08T00:09:54.932244710Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:54.934100 containerd[1590]: time="2025-11-08T00:09:54.934044603Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:09:54.935012 containerd[1590]: time="2025-11-08T00:09:54.934170366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:09:54.935124 kubelet[2774]: E1108 00:09:54.934336 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:54.935124 kubelet[2774]: E1108 00:09:54.934388 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:54.935124 kubelet[2774]: E1108 00:09:54.934527 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lhdmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78c5874598-mj8jx_calico-apiserver(ffefce46-3638-4c95-bed3-200605f5f8d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:54.935901 kubelet[2774]: E1108 00:09:54.935769 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-mj8jx" podUID="ffefce46-3638-4c95-bed3-200605f5f8d9" Nov 8 00:09:55.574446 containerd[1590]: time="2025-11-08T00:09:55.574407166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:09:55.913358 containerd[1590]: 
time="2025-11-08T00:09:55.912885533Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:55.922828 containerd[1590]: time="2025-11-08T00:09:55.922720342Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:09:55.922828 containerd[1590]: time="2025-11-08T00:09:55.922790224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:09:55.923476 kubelet[2774]: E1108 00:09:55.923061 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:09:55.923476 kubelet[2774]: E1108 00:09:55.923133 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:09:55.923476 kubelet[2774]: E1108 00:09:55.923271 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8b49b9874aa24b60b25840c8ea795204,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d5shx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-579fd6c6df-bcwhq_calico-system(3005ab6c-6899-4c7c-9cae-4c79f44757c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:55.925743 containerd[1590]: time="2025-11-08T00:09:55.925679589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 
00:09:56.273010 containerd[1590]: time="2025-11-08T00:09:56.272386969Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:56.274408 containerd[1590]: time="2025-11-08T00:09:56.274246224Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:09:56.274408 containerd[1590]: time="2025-11-08T00:09:56.274363668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:09:56.274830 kubelet[2774]: E1108 00:09:56.274783 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:09:56.274830 kubelet[2774]: E1108 00:09:56.274836 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:09:56.275320 kubelet[2774]: E1108 00:09:56.274949 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5shx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-579fd6c6df-bcwhq_calico-system(3005ab6c-6899-4c7c-9cae-4c79f44757c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:56.277016 kubelet[2774]: E1108 00:09:56.276943 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-579fd6c6df-bcwhq" podUID="3005ab6c-6899-4c7c-9cae-4c79f44757c6" Nov 8 00:09:58.573194 containerd[1590]: time="2025-11-08T00:09:58.573142853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:09:58.939158 containerd[1590]: time="2025-11-08T00:09:58.938738417Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:58.940974 containerd[1590]: time="2025-11-08T00:09:58.940817160Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:09:58.940974 containerd[1590]: time="2025-11-08T00:09:58.940891882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 
00:09:58.943309 kubelet[2774]: E1108 00:09:58.941296 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:58.943309 kubelet[2774]: E1108 00:09:58.941351 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:58.943309 kubelet[2774]: E1108 00:09:58.941497 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29lz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78c5874598-gtq72_calico-apiserver(7ec2eb36-2470-4386-96a3-fe6dd8fc602f): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:58.944140 kubelet[2774]: E1108 00:09:58.943973 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-gtq72" podUID="7ec2eb36-2470-4386-96a3-fe6dd8fc602f" Nov 8 00:09:59.574291 containerd[1590]: time="2025-11-08T00:09:59.574004206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:09:59.918129 containerd[1590]: time="2025-11-08T00:09:59.917822136Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:59.920680 containerd[1590]: time="2025-11-08T00:09:59.920398094Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:09:59.920680 containerd[1590]: time="2025-11-08T00:09:59.920462855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:09:59.922136 kubelet[2774]: E1108 00:09:59.921444 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:59.922136 kubelet[2774]: E1108 00:09:59.921514 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:59.922136 kubelet[2774]: E1108 00:09:59.921645 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-94xpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77bf6dfcdd-hptwz_calico-apiserver(02c3f902-7bf4-4824-923c-48ba4e1e389c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:59.923496 kubelet[2774]: E1108 00:09:59.923461 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf6dfcdd-hptwz" podUID="02c3f902-7bf4-4824-923c-48ba4e1e389c" Nov 8 00:10:00.570914 containerd[1590]: time="2025-11-08T00:10:00.570808844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:10:00.917928 containerd[1590]: 
time="2025-11-08T00:10:00.917782846Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:00.921047 containerd[1590]: time="2025-11-08T00:10:00.920218280Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:10:00.921047 containerd[1590]: time="2025-11-08T00:10:00.920438126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:10:00.921454 kubelet[2774]: E1108 00:10:00.921179 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:10:00.921454 kubelet[2774]: E1108 00:10:00.921237 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:10:00.921454 kubelet[2774]: E1108 00:10:00.921364 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pghwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-54b5cccc46-bcg68_calico-system(5c488e78-d3ba-4197-ab37-75734ccb9129): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:00.923204 kubelet[2774]: E1108 00:10:00.922608 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54b5cccc46-bcg68" podUID="5c488e78-d3ba-4197-ab37-75734ccb9129" Nov 8 00:10:02.572843 containerd[1590]: time="2025-11-08T00:10:02.572774172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:10:02.959054 containerd[1590]: 
time="2025-11-08T00:10:02.958729238Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:02.960917 containerd[1590]: time="2025-11-08T00:10:02.960484532Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:10:02.960917 containerd[1590]: time="2025-11-08T00:10:02.960627376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:10:02.961648 kubelet[2774]: E1108 00:10:02.961459 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:10:02.961648 kubelet[2774]: E1108 00:10:02.961545 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:10:02.965674 kubelet[2774]: E1108 00:10:02.961677 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rttwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6n7x9_calico-system(57f11a43-3690-45d9-8837-b8df56bb1a07): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:02.968775 containerd[1590]: time="2025-11-08T00:10:02.968373933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:10:03.347305 containerd[1590]: time="2025-11-08T00:10:03.346249403Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:03.349665 containerd[1590]: time="2025-11-08T00:10:03.349376619Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:10:03.349665 containerd[1590]: time="2025-11-08T00:10:03.349416700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:10:03.350142 kubelet[2774]: E1108 00:10:03.349942 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:10:03.350142 kubelet[2774]: E1108 00:10:03.350117 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:10:03.350481 kubelet[2774]: E1108 
00:10:03.350433 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rttwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-6n7x9_calico-system(57f11a43-3690-45d9-8837-b8df56bb1a07): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:03.351978 kubelet[2774]: E1108 00:10:03.351882 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6n7x9" podUID="57f11a43-3690-45d9-8837-b8df56bb1a07" Nov 8 00:10:08.572438 kubelet[2774]: E1108 00:10:08.571310 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l7kch" podUID="72f7d776-6bd7-4d33-8b73-a5febd833bf0" Nov 8 00:10:08.572438 kubelet[2774]: E1108 00:10:08.571917 2774 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-mj8jx" podUID="ffefce46-3638-4c95-bed3-200605f5f8d9" Nov 8 00:10:09.577111 kubelet[2774]: E1108 00:10:09.576976 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-579fd6c6df-bcwhq" podUID="3005ab6c-6899-4c7c-9cae-4c79f44757c6" Nov 8 00:10:11.573982 kubelet[2774]: E1108 00:10:11.571524 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-gtq72" podUID="7ec2eb36-2470-4386-96a3-fe6dd8fc602f" Nov 8 00:10:12.571792 kubelet[2774]: E1108 00:10:12.571459 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf6dfcdd-hptwz" podUID="02c3f902-7bf4-4824-923c-48ba4e1e389c" Nov 8 00:10:14.571323 kubelet[2774]: E1108 00:10:14.571165 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54b5cccc46-bcg68" podUID="5c488e78-d3ba-4197-ab37-75734ccb9129" Nov 8 00:10:16.590298 kubelet[2774]: E1108 00:10:16.590144 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6n7x9" podUID="57f11a43-3690-45d9-8837-b8df56bb1a07" Nov 8 00:10:20.572080 kubelet[2774]: E1108 00:10:20.571948 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-mj8jx" podUID="ffefce46-3638-4c95-bed3-200605f5f8d9" Nov 8 00:10:21.573610 kubelet[2774]: E1108 00:10:21.573534 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l7kch" podUID="72f7d776-6bd7-4d33-8b73-a5febd833bf0" Nov 8 00:10:22.571980 kubelet[2774]: E1108 00:10:22.570852 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-gtq72" podUID="7ec2eb36-2470-4386-96a3-fe6dd8fc602f" Nov 8 00:10:22.573215 kubelet[2774]: E1108 00:10:22.573166 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-579fd6c6df-bcwhq" podUID="3005ab6c-6899-4c7c-9cae-4c79f44757c6" Nov 8 00:10:23.575252 kubelet[2774]: E1108 00:10:23.573907 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf6dfcdd-hptwz" podUID="02c3f902-7bf4-4824-923c-48ba4e1e389c" Nov 8 00:10:26.573396 kubelet[2774]: E1108 00:10:26.572895 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54b5cccc46-bcg68" podUID="5c488e78-d3ba-4197-ab37-75734ccb9129" Nov 8 00:10:28.572694 kubelet[2774]: E1108 00:10:28.572518 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6n7x9" podUID="57f11a43-3690-45d9-8837-b8df56bb1a07" Nov 8 00:10:32.570731 kubelet[2774]: E1108 
00:10:32.570469 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l7kch" podUID="72f7d776-6bd7-4d33-8b73-a5febd833bf0" Nov 8 00:10:33.571990 kubelet[2774]: E1108 00:10:33.571920 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-mj8jx" podUID="ffefce46-3638-4c95-bed3-200605f5f8d9" Nov 8 00:10:33.573537 kubelet[2774]: E1108 00:10:33.572847 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-579fd6c6df-bcwhq" podUID="3005ab6c-6899-4c7c-9cae-4c79f44757c6" Nov 8 00:10:34.572073 kubelet[2774]: E1108 00:10:34.571016 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf6dfcdd-hptwz" podUID="02c3f902-7bf4-4824-923c-48ba4e1e389c" Nov 8 00:10:36.572621 kubelet[2774]: E1108 00:10:36.572567 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-gtq72" podUID="7ec2eb36-2470-4386-96a3-fe6dd8fc602f" Nov 8 00:10:40.571372 kubelet[2774]: E1108 00:10:40.570936 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed 
to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54b5cccc46-bcg68" podUID="5c488e78-d3ba-4197-ab37-75734ccb9129" Nov 8 00:10:41.572717 kubelet[2774]: E1108 00:10:41.572599 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6n7x9" podUID="57f11a43-3690-45d9-8837-b8df56bb1a07" Nov 8 00:10:44.573034 containerd[1590]: time="2025-11-08T00:10:44.572975025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:10:44.926678 containerd[1590]: time="2025-11-08T00:10:44.926521226Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:44.930397 containerd[1590]: time="2025-11-08T00:10:44.930334556Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:10:44.931327 containerd[1590]: time="2025-11-08T00:10:44.930462401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:10:44.931406 kubelet[2774]: E1108 00:10:44.930633 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:10:44.931406 kubelet[2774]: E1108 00:10:44.930709 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:10:44.931406 kubelet[2774]: E1108 00:10:44.930840 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8b49b9874aa24b60b25840c8ea795204,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d5shx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-579fd6c6df-bcwhq_calico-system(3005ab6c-6899-4c7c-9cae-4c79f44757c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:44.936032 containerd[1590]: time="2025-11-08T00:10:44.935017796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 
00:10:45.267173 containerd[1590]: time="2025-11-08T00:10:45.267104034Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:45.269089 containerd[1590]: time="2025-11-08T00:10:45.268936937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:10:45.269089 containerd[1590]: time="2025-11-08T00:10:45.268990579Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:10:45.269839 kubelet[2774]: E1108 00:10:45.269591 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:10:45.269839 kubelet[2774]: E1108 00:10:45.269645 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:10:45.270191 kubelet[2774]: E1108 00:10:45.269803 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5shx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-579fd6c6df-bcwhq_calico-system(3005ab6c-6899-4c7c-9cae-4c79f44757c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:45.271775 kubelet[2774]: E1108 00:10:45.271724 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-579fd6c6df-bcwhq" podUID="3005ab6c-6899-4c7c-9cae-4c79f44757c6" Nov 8 00:10:45.578763 containerd[1590]: time="2025-11-08T00:10:45.575522465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:10:45.920502 containerd[1590]: time="2025-11-08T00:10:45.919974649Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:45.922485 containerd[1590]: time="2025-11-08T00:10:45.922169164Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:10:45.922485 containerd[1590]: time="2025-11-08T00:10:45.922408012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 
00:10:45.922722 kubelet[2774]: E1108 00:10:45.922652 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:10:45.922794 kubelet[2774]: E1108 00:10:45.922734 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:10:45.924332 kubelet[2774]: E1108 00:10:45.924234 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lhdmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78c5874598-mj8jx_calico-apiserver(ffefce46-3638-4c95-bed3-200605f5f8d9): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:45.925526 kubelet[2774]: E1108 00:10:45.925469 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-mj8jx" podUID="ffefce46-3638-4c95-bed3-200605f5f8d9" Nov 8 00:10:46.572304 containerd[1590]: time="2025-11-08T00:10:46.571799689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:10:46.903886 containerd[1590]: time="2025-11-08T00:10:46.902933190Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:46.907160 containerd[1590]: time="2025-11-08T00:10:46.906199382Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:10:46.907160 containerd[1590]: time="2025-11-08T00:10:46.906379028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:10:46.907325 kubelet[2774]: E1108 00:10:46.906829 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:10:46.907325 kubelet[2774]: E1108 00:10:46.906902 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:10:46.907325 kubelet[2774]: E1108 00:10:46.907225 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fwxxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubP
ath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-l7kch_calico-system(72f7d776-6bd7-4d33-8b73-a5febd833bf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:46.910308 kubelet[2774]: E1108 00:10:46.910235 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l7kch" podUID="72f7d776-6bd7-4d33-8b73-a5febd833bf0" Nov 8 00:10:47.574167 containerd[1590]: time="2025-11-08T00:10:47.574088437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:10:47.946366 containerd[1590]: time="2025-11-08T00:10:47.945715258Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:47.949888 containerd[1590]: time="2025-11-08T00:10:47.948998450Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:10:47.949888 containerd[1590]: time="2025-11-08T00:10:47.949064013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:10:47.950093 kubelet[2774]: E1108 00:10:47.949250 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:10:47.950093 kubelet[2774]: E1108 00:10:47.949304 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:10:47.950093 kubelet[2774]: E1108 00:10:47.949424 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-94xpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77bf6dfcdd-hptwz_calico-apiserver(02c3f902-7bf4-4824-923c-48ba4e1e389c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:47.951107 kubelet[2774]: E1108 00:10:47.950992 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf6dfcdd-hptwz" podUID="02c3f902-7bf4-4824-923c-48ba4e1e389c" Nov 8 00:10:48.571648 containerd[1590]: time="2025-11-08T00:10:48.571440171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:10:48.934029 containerd[1590]: 
time="2025-11-08T00:10:48.933598481Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:48.935208 containerd[1590]: time="2025-11-08T00:10:48.935162974Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:10:48.935208 containerd[1590]: time="2025-11-08T00:10:48.935301059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:10:48.935540 kubelet[2774]: E1108 00:10:48.935370 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:10:48.935540 kubelet[2774]: E1108 00:10:48.935418 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:10:48.935693 kubelet[2774]: E1108 00:10:48.935531 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29lz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78c5874598-gtq72_calico-apiserver(7ec2eb36-2470-4386-96a3-fe6dd8fc602f): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:48.937003 kubelet[2774]: E1108 00:10:48.936965 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-gtq72" podUID="7ec2eb36-2470-4386-96a3-fe6dd8fc602f" Nov 8 00:10:52.574865 containerd[1590]: time="2025-11-08T00:10:52.573140633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:10:52.926575 containerd[1590]: time="2025-11-08T00:10:52.926440685Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:52.928899 containerd[1590]: time="2025-11-08T00:10:52.928834607Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:10:52.928899 containerd[1590]: time="2025-11-08T00:10:52.928936371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:10:52.930166 kubelet[2774]: E1108 00:10:52.930097 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:10:52.930166 kubelet[2774]: E1108 00:10:52.930146 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:10:52.930869 kubelet[2774]: E1108 00:10:52.930276 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rttwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeE
scalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6n7x9_calico-system(57f11a43-3690-45d9-8837-b8df56bb1a07): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:52.934065 containerd[1590]: time="2025-11-08T00:10:52.933188717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:10:53.293924 containerd[1590]: time="2025-11-08T00:10:53.293551378Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:53.295492 containerd[1590]: time="2025-11-08T00:10:53.295263747Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:10:53.295771 kubelet[2774]: E1108 00:10:53.295721 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:10:53.295850 kubelet[2774]: E1108 
00:10:53.295777 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:10:53.295931 kubelet[2774]: E1108 00:10:53.295892 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rttwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFil
esystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6n7x9_calico-system(57f11a43-3690-45d9-8837-b8df56bb1a07): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:53.296987 containerd[1590]: time="2025-11-08T00:10:53.295410271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:10:53.297172 kubelet[2774]: E1108 00:10:53.297120 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6n7x9" podUID="57f11a43-3690-45d9-8837-b8df56bb1a07" Nov 8 00:10:54.573301 containerd[1590]: time="2025-11-08T00:10:54.573256451Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:10:54.900148 containerd[1590]: time="2025-11-08T00:10:54.899748220Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:54.904155 containerd[1590]: time="2025-11-08T00:10:54.904034408Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:10:54.904155 containerd[1590]: time="2025-11-08T00:10:54.904103127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:10:54.904659 kubelet[2774]: E1108 00:10:54.904613 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:10:54.905041 kubelet[2774]: E1108 00:10:54.904672 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:10:54.905041 kubelet[2774]: E1108 00:10:54.904804 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pghwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-54b5cccc46-bcg68_calico-system(5c488e78-d3ba-4197-ab37-75734ccb9129): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:54.906997 kubelet[2774]: E1108 00:10:54.906939 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54b5cccc46-bcg68" podUID="5c488e78-d3ba-4197-ab37-75734ccb9129" Nov 8 00:10:56.575017 kubelet[2774]: E1108 00:10:56.574122 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-579fd6c6df-bcwhq" podUID="3005ab6c-6899-4c7c-9cae-4c79f44757c6" Nov 8 00:10:58.577970 kubelet[2774]: E1108 00:10:58.576703 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-mj8jx" podUID="ffefce46-3638-4c95-bed3-200605f5f8d9" Nov 8 00:11:01.853337 systemd[1]: Started sshd@8-138.199.234.199:22-139.178.68.195:51562.service - OpenSSH per-connection server daemon (139.178.68.195:51562). 
Nov 8 00:11:02.573995 kubelet[2774]: E1108 00:11:02.572380 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf6dfcdd-hptwz" podUID="02c3f902-7bf4-4824-923c-48ba4e1e389c" Nov 8 00:11:02.573995 kubelet[2774]: E1108 00:11:02.572758 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l7kch" podUID="72f7d776-6bd7-4d33-8b73-a5febd833bf0" Nov 8 00:11:02.573995 kubelet[2774]: E1108 00:11:02.573134 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-gtq72" podUID="7ec2eb36-2470-4386-96a3-fe6dd8fc602f" Nov 8 00:11:02.813393 sshd[5863]: Accepted publickey for core from 139.178.68.195 port 51562 ssh2: RSA 
SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:11:02.815680 sshd[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:11:02.822591 systemd-logind[1565]: New session 8 of user core. Nov 8 00:11:02.826304 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:11:03.620664 sshd[5863]: pam_unix(sshd:session): session closed for user core Nov 8 00:11:03.625573 systemd-logind[1565]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:11:03.625738 systemd[1]: sshd@8-138.199.234.199:22-139.178.68.195:51562.service: Deactivated successfully. Nov 8 00:11:03.631717 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:11:03.636352 systemd-logind[1565]: Removed session 8. Nov 8 00:11:05.578202 kubelet[2774]: E1108 00:11:05.577913 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6n7x9" podUID="57f11a43-3690-45d9-8837-b8df56bb1a07" Nov 8 00:11:07.572706 kubelet[2774]: E1108 00:11:07.572530 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54b5cccc46-bcg68" podUID="5c488e78-d3ba-4197-ab37-75734ccb9129" Nov 8 00:11:08.869429 systemd[1]: Started sshd@9-138.199.234.199:22-139.178.68.195:55366.service - OpenSSH per-connection server daemon (139.178.68.195:55366). Nov 8 00:11:09.824931 sshd[5878]: Accepted publickey for core from 139.178.68.195 port 55366 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:11:09.827394 sshd[5878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:11:09.833508 systemd-logind[1565]: New session 9 of user core. Nov 8 00:11:09.842350 systemd[1]: Started session-9.scope - Session 9 of User core. 
Nov 8 00:11:10.572711 sshd[5878]: pam_unix(sshd:session): session closed for user core Nov 8 00:11:10.578570 kubelet[2774]: E1108 00:11:10.578517 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-579fd6c6df-bcwhq" podUID="3005ab6c-6899-4c7c-9cae-4c79f44757c6" Nov 8 00:11:10.580142 systemd-logind[1565]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:11:10.580271 systemd[1]: sshd@9-138.199.234.199:22-139.178.68.195:55366.service: Deactivated successfully. Nov 8 00:11:10.587063 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:11:10.590692 systemd-logind[1565]: Removed session 9. 
Nov 8 00:11:12.571301 kubelet[2774]: E1108 00:11:12.571238 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-mj8jx" podUID="ffefce46-3638-4c95-bed3-200605f5f8d9" Nov 8 00:11:14.571717 kubelet[2774]: E1108 00:11:14.571641 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf6dfcdd-hptwz" podUID="02c3f902-7bf4-4824-923c-48ba4e1e389c" Nov 8 00:11:15.575569 kubelet[2774]: E1108 00:11:15.575505 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-gtq72" podUID="7ec2eb36-2470-4386-96a3-fe6dd8fc602f" Nov 8 00:11:15.576035 kubelet[2774]: E1108 00:11:15.575997 2774 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l7kch" podUID="72f7d776-6bd7-4d33-8b73-a5febd833bf0" Nov 8 00:11:15.729421 systemd[1]: Started sshd@10-138.199.234.199:22-139.178.68.195:33064.service - OpenSSH per-connection server daemon (139.178.68.195:33064). Nov 8 00:11:16.576992 kubelet[2774]: E1108 00:11:16.574755 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6n7x9" podUID="57f11a43-3690-45d9-8837-b8df56bb1a07" Nov 8 00:11:16.668583 sshd[5914]: Accepted publickey for core from 139.178.68.195 port 33064 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:11:16.671547 sshd[5914]: pam_unix(sshd:session): session opened for user 
core(uid=500) by core(uid=0) Nov 8 00:11:16.677272 systemd-logind[1565]: New session 10 of user core. Nov 8 00:11:16.684372 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:11:17.434655 sshd[5914]: pam_unix(sshd:session): session closed for user core Nov 8 00:11:17.441633 systemd-logind[1565]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:11:17.444384 systemd[1]: sshd@10-138.199.234.199:22-139.178.68.195:33064.service: Deactivated successfully. Nov 8 00:11:17.456540 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:11:17.459203 systemd-logind[1565]: Removed session 10. Nov 8 00:11:17.597402 systemd[1]: Started sshd@11-138.199.234.199:22-139.178.68.195:33076.service - OpenSSH per-connection server daemon (139.178.68.195:33076). Nov 8 00:11:18.525583 sshd[5929]: Accepted publickey for core from 139.178.68.195 port 33076 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:11:18.527500 sshd[5929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:11:18.536794 systemd-logind[1565]: New session 11 of user core. Nov 8 00:11:18.543730 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:11:19.379636 sshd[5929]: pam_unix(sshd:session): session closed for user core Nov 8 00:11:19.387160 systemd[1]: sshd@11-138.199.234.199:22-139.178.68.195:33076.service: Deactivated successfully. Nov 8 00:11:19.395326 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:11:19.397485 systemd-logind[1565]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:11:19.401793 systemd-logind[1565]: Removed session 11. Nov 8 00:11:19.537523 systemd[1]: Started sshd@12-138.199.234.199:22-139.178.68.195:33078.service - OpenSSH per-connection server daemon (139.178.68.195:33078). 
Nov 8 00:11:20.487979 sshd[5944]: Accepted publickey for core from 139.178.68.195 port 33078 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:11:20.492127 sshd[5944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:11:20.502572 systemd-logind[1565]: New session 12 of user core. Nov 8 00:11:20.508401 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:11:21.306182 sshd[5944]: pam_unix(sshd:session): session closed for user core Nov 8 00:11:21.315456 systemd[1]: sshd@12-138.199.234.199:22-139.178.68.195:33078.service: Deactivated successfully. Nov 8 00:11:21.324373 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:11:21.326830 systemd-logind[1565]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:11:21.328086 systemd-logind[1565]: Removed session 12. Nov 8 00:11:22.581137 kubelet[2774]: E1108 00:11:22.580930 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54b5cccc46-bcg68" podUID="5c488e78-d3ba-4197-ab37-75734ccb9129" Nov 8 00:11:23.576364 kubelet[2774]: E1108 00:11:23.576288 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-579fd6c6df-bcwhq" podUID="3005ab6c-6899-4c7c-9cae-4c79f44757c6" Nov 8 00:11:26.467267 systemd[1]: Started sshd@13-138.199.234.199:22-139.178.68.195:37146.service - OpenSSH per-connection server daemon (139.178.68.195:37146). Nov 8 00:11:27.407424 sshd[5961]: Accepted publickey for core from 139.178.68.195 port 37146 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:11:27.409779 sshd[5961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:11:27.417108 systemd-logind[1565]: New session 13 of user core. Nov 8 00:11:27.426769 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 8 00:11:27.573981 kubelet[2774]: E1108 00:11:27.573839 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-mj8jx" podUID="ffefce46-3638-4c95-bed3-200605f5f8d9" Nov 8 00:11:27.576973 kubelet[2774]: E1108 00:11:27.576274 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6n7x9" podUID="57f11a43-3690-45d9-8837-b8df56bb1a07" Nov 8 00:11:28.174939 sshd[5961]: pam_unix(sshd:session): session closed for user core Nov 8 00:11:28.181176 systemd[1]: sshd@13-138.199.234.199:22-139.178.68.195:37146.service: Deactivated successfully. Nov 8 00:11:28.185776 systemd[1]: session-13.scope: Deactivated successfully. 
Nov 8 00:11:28.188075 systemd-logind[1565]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:11:28.190697 systemd-logind[1565]: Removed session 13. Nov 8 00:11:28.344446 systemd[1]: Started sshd@14-138.199.234.199:22-139.178.68.195:37152.service - OpenSSH per-connection server daemon (139.178.68.195:37152). Nov 8 00:11:28.571151 kubelet[2774]: E1108 00:11:28.571086 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf6dfcdd-hptwz" podUID="02c3f902-7bf4-4824-923c-48ba4e1e389c" Nov 8 00:11:29.316374 sshd[5975]: Accepted publickey for core from 139.178.68.195 port 37152 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:11:29.318017 sshd[5975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:11:29.326745 systemd-logind[1565]: New session 14 of user core. Nov 8 00:11:29.332237 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 8 00:11:29.575381 kubelet[2774]: E1108 00:11:29.574877 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l7kch" podUID="72f7d776-6bd7-4d33-8b73-a5febd833bf0" Nov 8 00:11:30.228931 sshd[5975]: pam_unix(sshd:session): session closed for user core Nov 8 00:11:30.232987 systemd[1]: sshd@14-138.199.234.199:22-139.178.68.195:37152.service: Deactivated successfully. Nov 8 00:11:30.239501 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:11:30.241733 systemd-logind[1565]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:11:30.243005 systemd-logind[1565]: Removed session 14. Nov 8 00:11:30.389125 systemd[1]: Started sshd@15-138.199.234.199:22-139.178.68.195:37168.service - OpenSSH per-connection server daemon (139.178.68.195:37168). 
Nov 8 00:11:30.571250 kubelet[2774]: E1108 00:11:30.570919 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-gtq72" podUID="7ec2eb36-2470-4386-96a3-fe6dd8fc602f" Nov 8 00:11:31.333000 sshd[5987]: Accepted publickey for core from 139.178.68.195 port 37168 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:11:31.335641 sshd[5987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:11:31.343509 systemd-logind[1565]: New session 15 of user core. Nov 8 00:11:31.353578 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 00:11:32.852172 sshd[5987]: pam_unix(sshd:session): session closed for user core Nov 8 00:11:32.860325 systemd-logind[1565]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:11:32.861672 systemd[1]: sshd@15-138.199.234.199:22-139.178.68.195:37168.service: Deactivated successfully. Nov 8 00:11:32.871630 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:11:32.876583 systemd-logind[1565]: Removed session 15. Nov 8 00:11:33.009278 systemd[1]: Started sshd@16-138.199.234.199:22-139.178.68.195:37180.service - OpenSSH per-connection server daemon (139.178.68.195:37180). 
Nov 8 00:11:33.954301 sshd[6008]: Accepted publickey for core from 139.178.68.195 port 37180 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:11:33.959026 sshd[6008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:11:33.968661 systemd-logind[1565]: New session 16 of user core. Nov 8 00:11:33.975290 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 00:11:34.576018 kubelet[2774]: E1108 00:11:34.575877 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-579fd6c6df-bcwhq" podUID="3005ab6c-6899-4c7c-9cae-4c79f44757c6" Nov 8 00:11:34.939015 sshd[6008]: pam_unix(sshd:session): session closed for user core Nov 8 00:11:34.948822 systemd-logind[1565]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:11:34.949555 systemd[1]: sshd@16-138.199.234.199:22-139.178.68.195:37180.service: Deactivated successfully. Nov 8 00:11:34.955292 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:11:34.957571 systemd-logind[1565]: Removed session 16. 
Nov 8 00:11:35.105033 systemd[1]: Started sshd@17-138.199.234.199:22-139.178.68.195:46596.service - OpenSSH per-connection server daemon (139.178.68.195:46596). Nov 8 00:11:35.576836 kubelet[2774]: E1108 00:11:35.574883 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54b5cccc46-bcg68" podUID="5c488e78-d3ba-4197-ab37-75734ccb9129" Nov 8 00:11:36.061025 sshd[6020]: Accepted publickey for core from 139.178.68.195 port 46596 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:11:36.063227 sshd[6020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:11:36.070356 systemd-logind[1565]: New session 17 of user core. Nov 8 00:11:36.078285 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:11:36.831062 sshd[6020]: pam_unix(sshd:session): session closed for user core Nov 8 00:11:36.841333 systemd[1]: sshd@17-138.199.234.199:22-139.178.68.195:46596.service: Deactivated successfully. Nov 8 00:11:36.850049 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:11:36.854042 systemd-logind[1565]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:11:36.859390 systemd-logind[1565]: Removed session 17. 
Nov 8 00:11:41.572814 kubelet[2774]: E1108 00:11:41.572753 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6n7x9" podUID="57f11a43-3690-45d9-8837-b8df56bb1a07" Nov 8 00:11:41.995529 systemd[1]: Started sshd@18-138.199.234.199:22-139.178.68.195:46602.service - OpenSSH per-connection server daemon (139.178.68.195:46602). 
Nov 8 00:11:42.572129 kubelet[2774]: E1108 00:11:42.571560 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-gtq72" podUID="7ec2eb36-2470-4386-96a3-fe6dd8fc602f" Nov 8 00:11:42.572985 kubelet[2774]: E1108 00:11:42.572578 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-mj8jx" podUID="ffefce46-3638-4c95-bed3-200605f5f8d9" Nov 8 00:11:42.572985 kubelet[2774]: E1108 00:11:42.572884 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf6dfcdd-hptwz" podUID="02c3f902-7bf4-4824-923c-48ba4e1e389c" Nov 8 00:11:42.948852 sshd[6057]: Accepted publickey for core from 139.178.68.195 
port 46602 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:11:42.951805 sshd[6057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:11:42.959368 systemd-logind[1565]: New session 18 of user core. Nov 8 00:11:42.967293 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:11:43.690144 sshd[6057]: pam_unix(sshd:session): session closed for user core Nov 8 00:11:43.699341 systemd[1]: sshd@18-138.199.234.199:22-139.178.68.195:46602.service: Deactivated successfully. Nov 8 00:11:43.705727 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:11:43.706261 systemd-logind[1565]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:11:43.709560 systemd-logind[1565]: Removed session 18. Nov 8 00:11:44.570543 kubelet[2774]: E1108 00:11:44.570474 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l7kch" podUID="72f7d776-6bd7-4d33-8b73-a5febd833bf0" Nov 8 00:11:48.571967 kubelet[2774]: E1108 00:11:48.571591 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-54b5cccc46-bcg68" podUID="5c488e78-d3ba-4197-ab37-75734ccb9129" Nov 8 00:11:48.855224 systemd[1]: Started sshd@19-138.199.234.199:22-139.178.68.195:52380.service - OpenSSH per-connection server daemon (139.178.68.195:52380). Nov 8 00:11:49.577856 kubelet[2774]: E1108 00:11:49.575740 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-579fd6c6df-bcwhq" podUID="3005ab6c-6899-4c7c-9cae-4c79f44757c6" Nov 8 00:11:49.808370 sshd[6073]: Accepted publickey for core from 139.178.68.195 port 52380 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:11:49.811676 sshd[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:11:49.821283 systemd-logind[1565]: New session 19 of user core. Nov 8 00:11:49.831321 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:11:50.636268 sshd[6073]: pam_unix(sshd:session): session closed for user core Nov 8 00:11:50.642402 systemd[1]: sshd@19-138.199.234.199:22-139.178.68.195:52380.service: Deactivated successfully. 
Nov 8 00:11:50.646873 systemd-logind[1565]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:11:50.647805 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:11:50.650666 systemd-logind[1565]: Removed session 19. Nov 8 00:11:54.573133 kubelet[2774]: E1108 00:11:54.572636 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-gtq72" podUID="7ec2eb36-2470-4386-96a3-fe6dd8fc602f" Nov 8 00:11:54.574886 kubelet[2774]: E1108 00:11:54.574627 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-mj8jx" podUID="ffefce46-3638-4c95-bed3-200605f5f8d9" Nov 8 00:11:54.574886 kubelet[2774]: E1108 00:11:54.574840 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6n7x9" podUID="57f11a43-3690-45d9-8837-b8df56bb1a07" Nov 8 00:11:55.571668 kubelet[2774]: E1108 00:11:55.570843 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l7kch" podUID="72f7d776-6bd7-4d33-8b73-a5febd833bf0" Nov 8 00:11:57.571735 kubelet[2774]: E1108 00:11:57.571626 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf6dfcdd-hptwz" podUID="02c3f902-7bf4-4824-923c-48ba4e1e389c" Nov 8 00:11:59.570361 kubelet[2774]: E1108 00:11:59.570200 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54b5cccc46-bcg68" podUID="5c488e78-d3ba-4197-ab37-75734ccb9129" Nov 8 00:12:00.572143 kubelet[2774]: E1108 00:12:00.572078 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-579fd6c6df-bcwhq" podUID="3005ab6c-6899-4c7c-9cae-4c79f44757c6" Nov 8 00:12:05.470127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22d499deb6b320e6e4cbc7fe11e59d60b381331629a2db14ab5b77c4d8b8ae2b-rootfs.mount: Deactivated successfully. 
Nov 8 00:12:05.479018 containerd[1590]: time="2025-11-08T00:12:05.478885604Z" level=info msg="shim disconnected" id=22d499deb6b320e6e4cbc7fe11e59d60b381331629a2db14ab5b77c4d8b8ae2b namespace=k8s.io Nov 8 00:12:05.479018 containerd[1590]: time="2025-11-08T00:12:05.478948125Z" level=warning msg="cleaning up after shim disconnected" id=22d499deb6b320e6e4cbc7fe11e59d60b381331629a2db14ab5b77c4d8b8ae2b namespace=k8s.io Nov 8 00:12:05.479018 containerd[1590]: time="2025-11-08T00:12:05.478988485Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:12:05.601636 kubelet[2774]: I1108 00:12:05.601325 2774 scope.go:117] "RemoveContainer" containerID="22d499deb6b320e6e4cbc7fe11e59d60b381331629a2db14ab5b77c4d8b8ae2b" Nov 8 00:12:05.604986 containerd[1590]: time="2025-11-08T00:12:05.604669749Z" level=info msg="CreateContainer within sandbox \"9ad405189f461cb48538076de94ec7d30c1c6ce10464ec8950d94880f15c939b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 8 00:12:05.622421 containerd[1590]: time="2025-11-08T00:12:05.622356612Z" level=info msg="CreateContainer within sandbox \"9ad405189f461cb48538076de94ec7d30c1c6ce10464ec8950d94880f15c939b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d90646427e097c644de8764c0ee3ed9e52ed0613cd36631a0d19cf407dff8730\"" Nov 8 00:12:05.622992 containerd[1590]: time="2025-11-08T00:12:05.622937899Z" level=info msg="StartContainer for \"d90646427e097c644de8764c0ee3ed9e52ed0613cd36631a0d19cf407dff8730\"" Nov 8 00:12:05.692298 containerd[1590]: time="2025-11-08T00:12:05.692219692Z" level=info msg="StartContainer for \"d90646427e097c644de8764c0ee3ed9e52ed0613cd36631a0d19cf407dff8730\" returns successfully" Nov 8 00:12:05.760918 kubelet[2774]: I1108 00:12:05.760795 2774 status_manager.go:890] "Failed to get status for pod" podUID="d8e2914269ad590a0ec87be731362e28" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3f5a11d2fe" err="rpc error: code = Unavailable desc = 
error reading from server: read tcp 10.0.0.3:45996->10.0.0.2:2379: read: connection timed out" Nov 8 00:12:05.768065 kubelet[2774]: E1108 00:12:05.760475 2774 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:45872->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{goldmane-666569f655-l7kch.1875df7d6d443d92 calico-system 1800 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-l7kch,UID:72f7d776-6bd7-4d33-8b73-a5febd833bf0,APIVersion:v1,ResourceVersion:808,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-3f5a11d2fe,},FirstTimestamp:2025-11-08 00:09:18 +0000 UTC,LastTimestamp:2025-11-08 00:11:55.570791678 +0000 UTC m=+212.131733862,Count:11,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-3f5a11d2fe,}" Nov 8 00:12:05.889997 kubelet[2774]: E1108 00:12:05.889932 2774 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:46088->10.0.0.2:2379: read: connection timed out" Nov 8 00:12:06.473875 systemd[1]: run-containerd-runc-k8s.io-d90646427e097c644de8764c0ee3ed9e52ed0613cd36631a0d19cf407dff8730-runc.zP8rrc.mount: Deactivated successfully. Nov 8 00:12:06.630939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65603dedac08728cfc8b6a0b132fb226db5977d4d463fc7df247816940ed1c0f-rootfs.mount: Deactivated successfully. 
Nov 8 00:12:06.637109 containerd[1590]: time="2025-11-08T00:12:06.637018162Z" level=info msg="shim disconnected" id=65603dedac08728cfc8b6a0b132fb226db5977d4d463fc7df247816940ed1c0f namespace=k8s.io Nov 8 00:12:06.637109 containerd[1590]: time="2025-11-08T00:12:06.637083682Z" level=warning msg="cleaning up after shim disconnected" id=65603dedac08728cfc8b6a0b132fb226db5977d4d463fc7df247816940ed1c0f namespace=k8s.io Nov 8 00:12:06.637109 containerd[1590]: time="2025-11-08T00:12:06.637094123Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:12:06.651696 containerd[1590]: time="2025-11-08T00:12:06.651637510Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:12:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 8 00:12:07.571812 containerd[1590]: time="2025-11-08T00:12:07.571550043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:12:07.572397 kubelet[2774]: E1108 00:12:07.572300 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-6n7x9" podUID="57f11a43-3690-45d9-8837-b8df56bb1a07" Nov 8 00:12:07.614874 kubelet[2774]: I1108 00:12:07.614245 2774 scope.go:117] "RemoveContainer" containerID="65603dedac08728cfc8b6a0b132fb226db5977d4d463fc7df247816940ed1c0f" Nov 8 00:12:07.616461 containerd[1590]: time="2025-11-08T00:12:07.616408952Z" level=info msg="CreateContainer within sandbox \"69c8518138f8cbb681bfc963efe0cce0cb54711789fb185de7e04cd0e83922c5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 8 00:12:07.632334 containerd[1590]: time="2025-11-08T00:12:07.632203079Z" level=info msg="CreateContainer within sandbox \"69c8518138f8cbb681bfc963efe0cce0cb54711789fb185de7e04cd0e83922c5\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"33d5246ee0fc121f95a31440cb6cd45f1ae9cfbfce4b7c5d72dc6e32ee563786\"" Nov 8 00:12:07.632748 containerd[1590]: time="2025-11-08T00:12:07.632722566Z" level=info msg="StartContainer for \"33d5246ee0fc121f95a31440cb6cd45f1ae9cfbfce4b7c5d72dc6e32ee563786\"" Nov 8 00:12:07.663725 systemd[1]: run-containerd-runc-k8s.io-33d5246ee0fc121f95a31440cb6cd45f1ae9cfbfce4b7c5d72dc6e32ee563786-runc.73vtGj.mount: Deactivated successfully. 
Nov 8 00:12:07.707485 containerd[1590]: time="2025-11-08T00:12:07.707429025Z" level=info msg="StartContainer for \"33d5246ee0fc121f95a31440cb6cd45f1ae9cfbfce4b7c5d72dc6e32ee563786\" returns successfully" Nov 8 00:12:07.917397 containerd[1590]: time="2025-11-08T00:12:07.917134335Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:12:07.919405 containerd[1590]: time="2025-11-08T00:12:07.919315844Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:12:07.919687 containerd[1590]: time="2025-11-08T00:12:07.919512966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:12:07.919812 kubelet[2774]: E1108 00:12:07.919716 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:12:07.919812 kubelet[2774]: E1108 00:12:07.919790 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:12:07.920107 kubelet[2774]: E1108 00:12:07.920010 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fwxxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-l7kch_calico-system(72f7d776-6bd7-4d33-8b73-a5febd833bf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:12:07.921374 kubelet[2774]: E1108 00:12:07.921228 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l7kch" podUID="72f7d776-6bd7-4d33-8b73-a5febd833bf0" Nov 8 00:12:08.571681 containerd[1590]: time="2025-11-08T00:12:08.571194254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:12:08.920451 containerd[1590]: time="2025-11-08T00:12:08.920287799Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 8 00:12:08.921738 containerd[1590]: time="2025-11-08T00:12:08.921691458Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:12:08.921828 containerd[1590]: time="2025-11-08T00:12:08.921797499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:12:08.921986 kubelet[2774]: E1108 00:12:08.921932 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:12:08.922565 kubelet[2774]: E1108 00:12:08.921999 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:12:08.922565 kubelet[2774]: E1108 00:12:08.922120 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lhdmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78c5874598-mj8jx_calico-apiserver(ffefce46-3638-4c95-bed3-200605f5f8d9): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:12:08.923395 kubelet[2774]: E1108 00:12:08.923351 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-mj8jx" podUID="ffefce46-3638-4c95-bed3-200605f5f8d9" Nov 8 00:12:09.574211 containerd[1590]: time="2025-11-08T00:12:09.573947075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:12:09.916176 containerd[1590]: time="2025-11-08T00:12:09.915944049Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:12:09.917744 containerd[1590]: time="2025-11-08T00:12:09.917674633Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:12:09.917867 containerd[1590]: time="2025-11-08T00:12:09.917833195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:12:09.918169 kubelet[2774]: E1108 00:12:09.918114 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:12:09.918246 kubelet[2774]: E1108 00:12:09.918191 2774 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:12:09.918432 kubelet[2774]: E1108 00:12:09.918361 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29lz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78c5874598-gtq72_calico-apiserver(7ec2eb36-2470-4386-96a3-fe6dd8fc602f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:12:09.919623 kubelet[2774]: E1108 00:12:09.919569 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5874598-gtq72" podUID="7ec2eb36-2470-4386-96a3-fe6dd8fc602f" Nov 8 00:12:11.305411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cba45c54733e1cf323416c4deb02dd8b04efe37afaf6face9ff22479aee5b06b-rootfs.mount: Deactivated successfully. 
Nov 8 00:12:11.315934 containerd[1590]: time="2025-11-08T00:12:11.315843374Z" level=info msg="shim disconnected" id=cba45c54733e1cf323416c4deb02dd8b04efe37afaf6face9ff22479aee5b06b namespace=k8s.io Nov 8 00:12:11.315934 containerd[1590]: time="2025-11-08T00:12:11.315932375Z" level=warning msg="cleaning up after shim disconnected" id=cba45c54733e1cf323416c4deb02dd8b04efe37afaf6face9ff22479aee5b06b namespace=k8s.io Nov 8 00:12:11.315934 containerd[1590]: time="2025-11-08T00:12:11.315949655Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:12:11.571268 containerd[1590]: time="2025-11-08T00:12:11.570523802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:12:11.629340 kubelet[2774]: I1108 00:12:11.629312 2774 scope.go:117] "RemoveContainer" containerID="cba45c54733e1cf323416c4deb02dd8b04efe37afaf6face9ff22479aee5b06b" Nov 8 00:12:11.631594 containerd[1590]: time="2025-11-08T00:12:11.631555502Z" level=info msg="CreateContainer within sandbox \"4a930222650e9866d7fe9ba427151a20a5f8eb13bec9878a4c12bbfb732701df\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Nov 8 00:12:11.649617 containerd[1590]: time="2025-11-08T00:12:11.649554436Z" level=info msg="CreateContainer within sandbox \"4a930222650e9866d7fe9ba427151a20a5f8eb13bec9878a4c12bbfb732701df\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"bfc809b8936f6d97ead34732cbdbebae6ff19a85a6dd73a1096d2f27dc49169b\"" Nov 8 00:12:11.650209 containerd[1590]: time="2025-11-08T00:12:11.650176045Z" level=info msg="StartContainer for \"bfc809b8936f6d97ead34732cbdbebae6ff19a85a6dd73a1096d2f27dc49169b\"" Nov 8 00:12:11.715075 containerd[1590]: time="2025-11-08T00:12:11.713807701Z" level=info msg="StartContainer for \"bfc809b8936f6d97ead34732cbdbebae6ff19a85a6dd73a1096d2f27dc49169b\" returns successfully" Nov 8 00:12:11.934426 containerd[1590]: time="2025-11-08T00:12:11.934019084Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io
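
Editor's note on the recurring errors above: every `pod_workers.go:1301` "Error syncing pod" line traces to the same root cause — the `v3.30.4` tags under `ghcr.io/flatcar/calico/*` resolve to `NotFound` at ghcr.io, so each affected container (csi, node-driver-registrar, apiserver, goldmane, kube-controllers, whisker, whisker-backend) cycles through ImagePullBackOff. A minimal sketch (a hypothetical helper, not part of the log or any tool it mentions) that extracts the distinct failing image references from one such kubelet line:

```python
import re

# A shortened kubelet "Error syncing pod" line, as it appears in the journal
# above; quotes inside err="..." arrive backslash-escaped.
line = (
    'err="[failed to \\"StartContainer\\" for \\"calico-csi\\" with '
    'ImagePullBackOff: Back-off pulling image '
    'ghcr.io/flatcar/calico/csi:v3.30.4 ... '
    'ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found]"'
)

# Match any ghcr.io/flatcar/calico image reference with a version tag.
IMAGE_RE = re.compile(r'ghcr\.io/flatcar/calico/[A-Za-z0-9._-]+:v[0-9.]+')

def failing_images(log_line: str) -> list[str]:
    """Return the sorted, de-duplicated image refs found in a log line."""
    return sorted(set(IMAGE_RE.findall(log_line)))

print(failing_images(line))
# → ['ghcr.io/flatcar/calico/csi:v3.30.4',
#    'ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4']
```

Run over the whole journal, this kind of extraction confirms that all seven failing workloads reference the same missing tag, pointing at a registry publishing gap rather than per-pod misconfiguration.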