Nov 8 00:01:47.897022 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Nov 8 00:01:47.897057 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Nov 7 22:41:39 -00 2025
Nov 8 00:01:47.897071 kernel: KASLR enabled
Nov 8 00:01:47.897077 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Nov 8 00:01:47.897083 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Nov 8 00:01:47.897088 kernel: random: crng init done
Nov 8 00:01:47.897095 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:01:47.897101 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Nov 8 00:01:47.897108 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Nov 8 00:01:47.897118 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:01:47.897125 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:01:47.897131 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:01:47.897137 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:01:47.897143 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:01:47.897151 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:01:47.897159 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:01:47.897165 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:01:47.897172 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:01:47.897178 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 8 00:01:47.897185 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Nov 8 00:01:47.897191 kernel: NUMA: Failed to initialise from firmware
Nov 8 00:01:47.897197 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Nov 8 00:01:47.897204 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Nov 8 00:01:47.897210 kernel: Zone ranges:
Nov 8 00:01:47.897216 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Nov 8 00:01:47.897224 kernel: DMA32 empty
Nov 8 00:01:47.897231 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Nov 8 00:01:47.897237 kernel: Movable zone start for each node
Nov 8 00:01:47.897244 kernel: Early memory node ranges
Nov 8 00:01:47.897250 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Nov 8 00:01:47.897257 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Nov 8 00:01:47.897263 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Nov 8 00:01:47.897270 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Nov 8 00:01:47.897277 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Nov 8 00:01:47.897283 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Nov 8 00:01:47.897289 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Nov 8 00:01:47.897296 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Nov 8 00:01:47.897304 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Nov 8 00:01:47.897311 kernel: psci: probing for conduit method from ACPI.
Nov 8 00:01:47.897317 kernel: psci: PSCIv1.1 detected in firmware.
Nov 8 00:01:47.897327 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 8 00:01:47.897334 kernel: psci: Trusted OS migration not required
Nov 8 00:01:47.897340 kernel: psci: SMC Calling Convention v1.1
Nov 8 00:01:47.897349 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Nov 8 00:01:47.897356 kernel: percpu: Embedded 31 pages/cpu s86120 r8192 d32664 u126976
Nov 8 00:01:47.897363 kernel: pcpu-alloc: s86120 r8192 d32664 u126976 alloc=31*4096
Nov 8 00:01:47.897370 kernel: pcpu-alloc: [0] 0 [0] 1
Nov 8 00:01:47.897377 kernel: Detected PIPT I-cache on CPU0
Nov 8 00:01:47.897384 kernel: CPU features: detected: GIC system register CPU interface
Nov 8 00:01:47.897391 kernel: CPU features: detected: Hardware dirty bit management
Nov 8 00:01:47.897397 kernel: CPU features: detected: Spectre-v4
Nov 8 00:01:47.897404 kernel: CPU features: detected: Spectre-BHB
Nov 8 00:01:47.897411 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 8 00:01:47.897420 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 8 00:01:47.897427 kernel: CPU features: detected: ARM erratum 1418040
Nov 8 00:01:47.897434 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 8 00:01:47.897440 kernel: alternatives: applying boot alternatives
Nov 8 00:01:47.897448 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=653fdcb8a67e255793a721f32d76976d3ed6223b235b7c618cf75e5edffbdb68
Nov 8 00:01:47.897456 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 8 00:01:47.897463 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 00:01:47.897469 kernel: Fallback order for Node 0: 0
Nov 8 00:01:47.897476 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Nov 8 00:01:47.897483 kernel: Policy zone: Normal
Nov 8 00:01:47.897490 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:01:47.897498 kernel: software IO TLB: area num 2.
Nov 8 00:01:47.897505 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Nov 8 00:01:47.897513 kernel: Memory: 3882808K/4096000K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 213192K reserved, 0K cma-reserved)
Nov 8 00:01:47.897519 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 8 00:01:47.897526 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:01:47.897534 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:01:47.897541 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 8 00:01:47.897548 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:01:47.897555 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:01:47.897562 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:01:47.897569 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 8 00:01:47.897575 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 8 00:01:47.897584 kernel: GICv3: 256 SPIs implemented
Nov 8 00:01:47.897591 kernel: GICv3: 0 Extended SPIs implemented
Nov 8 00:01:47.897598 kernel: Root IRQ handler: gic_handle_irq
Nov 8 00:01:47.897604 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Nov 8 00:01:47.897611 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Nov 8 00:01:47.897628 kernel: ITS [mem 0x08080000-0x0809ffff]
Nov 8 00:01:47.897635 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Nov 8 00:01:47.897642 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Nov 8 00:01:47.897649 kernel: GICv3: using LPI property table @0x00000001000e0000
Nov 8 00:01:47.897656 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Nov 8 00:01:47.897663 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:01:47.897673 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 8 00:01:47.897680 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Nov 8 00:01:47.897687 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Nov 8 00:01:47.897695 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Nov 8 00:01:47.897702 kernel: Console: colour dummy device 80x25
Nov 8 00:01:47.897709 kernel: ACPI: Core revision 20230628
Nov 8 00:01:47.897717 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Nov 8 00:01:47.897724 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:01:47.897731 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:01:47.897738 kernel: landlock: Up and running.
Nov 8 00:01:47.897746 kernel: SELinux: Initializing.
Nov 8 00:01:47.897753 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:01:47.897761 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:01:47.897768 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:01:47.897775 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:01:47.897782 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:01:47.897790 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:01:47.897797 kernel: Platform MSI: ITS@0x8080000 domain created
Nov 8 00:01:47.897804 kernel: PCI/MSI: ITS@0x8080000 domain created
Nov 8 00:01:47.897813 kernel: Remapping and enabling EFI services.
Nov 8 00:01:47.897820 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:01:47.897827 kernel: Detected PIPT I-cache on CPU1
Nov 8 00:01:47.897834 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Nov 8 00:01:47.897842 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Nov 8 00:01:47.897849 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 8 00:01:47.897856 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Nov 8 00:01:47.897863 kernel: smp: Brought up 1 node, 2 CPUs
Nov 8 00:01:47.897870 kernel: SMP: Total of 2 processors activated.
Nov 8 00:01:47.897878 kernel: CPU features: detected: 32-bit EL0 Support
Nov 8 00:01:47.897886 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 8 00:01:47.897894 kernel: CPU features: detected: Common not Private translations
Nov 8 00:01:47.897907 kernel: CPU features: detected: CRC32 instructions
Nov 8 00:01:47.897916 kernel: CPU features: detected: Enhanced Virtualization Traps
Nov 8 00:01:47.897924 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 8 00:01:47.897931 kernel: CPU features: detected: LSE atomic instructions
Nov 8 00:01:47.897949 kernel: CPU features: detected: Privileged Access Never
Nov 8 00:01:47.897964 kernel: CPU features: detected: RAS Extension Support
Nov 8 00:01:47.897977 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Nov 8 00:01:47.897985 kernel: CPU: All CPU(s) started at EL1
Nov 8 00:01:47.897993 kernel: alternatives: applying system-wide alternatives
Nov 8 00:01:47.898000 kernel: devtmpfs: initialized
Nov 8 00:01:47.898008 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:01:47.898015 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 8 00:01:47.898023 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:01:47.898030 kernel: SMBIOS 3.0.0 present.
Nov 8 00:01:47.898047 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Nov 8 00:01:47.898054 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:01:47.898062 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 8 00:01:47.898070 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 8 00:01:47.898077 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 8 00:01:47.898085 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:01:47.898092 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1
Nov 8 00:01:47.898099 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:01:47.898107 kernel: cpuidle: using governor menu
Nov 8 00:01:47.898116 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 8 00:01:47.898124 kernel: ASID allocator initialised with 32768 entries
Nov 8 00:01:47.898131 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:01:47.898138 kernel: Serial: AMBA PL011 UART driver
Nov 8 00:01:47.898146 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Nov 8 00:01:47.898155 kernel: Modules: 0 pages in range for non-PLT usage
Nov 8 00:01:47.898164 kernel: Modules: 509008 pages in range for PLT usage
Nov 8 00:01:47.898171 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:01:47.898180 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:01:47.898190 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 8 00:01:47.898197 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 8 00:01:47.898207 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:01:47.898215 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:01:47.898224 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 8 00:01:47.898232 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 8 00:01:47.898240 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:01:47.898247 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:01:47.898257 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:01:47.898267 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 00:01:47.898276 kernel: ACPI: Interpreter enabled
Nov 8 00:01:47.898284 kernel: ACPI: Using GIC for interrupt routing
Nov 8 00:01:47.898291 kernel: ACPI: MCFG table detected, 1 entries
Nov 8 00:01:47.898299 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Nov 8 00:01:47.898308 kernel: printk: console [ttyAMA0] enabled
Nov 8 00:01:47.898317 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 8 00:01:47.898518 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 00:01:47.898602 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 8 00:01:47.898681 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 8 00:01:47.898750 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Nov 8 00:01:47.898814 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Nov 8 00:01:47.898824 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Nov 8 00:01:47.898831 kernel: PCI host bridge to bus 0000:00
Nov 8 00:01:47.898902 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Nov 8 00:01:47.901926 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 8 00:01:47.902068 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Nov 8 00:01:47.902130 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 8 00:01:47.902220 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Nov 8 00:01:47.902301 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Nov 8 00:01:47.902370 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Nov 8 00:01:47.902437 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Nov 8 00:01:47.902518 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Nov 8 00:01:47.902601 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Nov 8 00:01:47.902714 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Nov 8 00:01:47.902789 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Nov 8 00:01:47.902863 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Nov 8 00:01:47.902995 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Nov 8 00:01:47.903079 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Nov 8 00:01:47.903144 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Nov 8 00:01:47.903216 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Nov 8 00:01:47.903280 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Nov 8 00:01:47.903351 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Nov 8 00:01:47.903417 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Nov 8 00:01:47.903492 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Nov 8 00:01:47.903564 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Nov 8 00:01:47.903706 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Nov 8 00:01:47.903787 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Nov 8 00:01:47.903862 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Nov 8 00:01:47.903928 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Nov 8 00:01:47.905120 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Nov 8 00:01:47.905193 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Nov 8 00:01:47.906023 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Nov 8 00:01:47.906130 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Nov 8 00:01:47.906199 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 8 00:01:47.906267 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Nov 8 00:01:47.906343 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Nov 8 00:01:47.906416 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Nov 8 00:01:47.906491 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Nov 8 00:01:47.906560 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Nov 8 00:01:47.906683 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Nov 8 00:01:47.906771 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Nov 8 00:01:47.906843 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Nov 8 00:01:47.906925 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Nov 8 00:01:47.907033 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Nov 8 00:01:47.907105 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Nov 8 00:01:47.907182 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Nov 8 00:01:47.907262 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Nov 8 00:01:47.907332 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Nov 8 00:01:47.907429 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Nov 8 00:01:47.907504 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Nov 8 00:01:47.907581 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Nov 8 00:01:47.907674 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Nov 8 00:01:47.907762 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Nov 8 00:01:47.907834 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Nov 8 00:01:47.907903 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Nov 8 00:01:47.908564 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Nov 8 00:01:47.908720 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Nov 8 00:01:47.908799 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Nov 8 00:01:47.908883 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Nov 8 00:01:47.909058 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Nov 8 00:01:47.909153 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Nov 8 00:01:47.909227 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Nov 8 00:01:47.909300 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Nov 8 00:01:47.909378 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Nov 8 00:01:47.909461 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Nov 8 00:01:47.909527 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Nov 8 00:01:47.909603 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Nov 8 00:01:47.909698 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Nov 8 00:01:47.909785 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Nov 8 00:01:47.909853 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Nov 8 00:01:47.909989 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Nov 8 00:01:47.910082 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Nov 8 00:01:47.910163 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Nov 8 00:01:47.910252 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Nov 8 00:01:47.910328 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Nov 8 00:01:47.910436 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Nov 8 00:01:47.910515 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Nov 8 00:01:47.910581 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Nov 8 00:01:47.910688 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Nov 8 00:01:47.910759 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Nov 8 00:01:47.910835 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Nov 8 00:01:47.910910 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Nov 8 00:01:47.911047 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Nov 8 00:01:47.911125 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Nov 8 00:01:47.911200 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Nov 8 00:01:47.911280 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Nov 8 00:01:47.911365 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Nov 8 00:01:47.911440 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Nov 8 00:01:47.911504 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Nov 8 00:01:47.911590 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Nov 8 00:01:47.911723 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Nov 8 00:01:47.911809 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Nov 8 00:01:47.911888 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Nov 8 00:01:47.913431 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Nov 8 00:01:47.913524 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Nov 8 00:01:47.913601 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Nov 8 00:01:47.913730 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Nov 8 00:01:47.913815 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Nov 8 00:01:47.913889 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Nov 8 00:01:47.914021 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Nov 8 00:01:47.914104 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Nov 8 00:01:47.914170 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Nov 8 00:01:47.914244 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Nov 8 00:01:47.914329 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Nov 8 00:01:47.914412 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Nov 8 00:01:47.914479 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Nov 8 00:01:47.914558 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Nov 8 00:01:47.916136 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Nov 8 00:01:47.916217 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Nov 8 00:01:47.916297 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Nov 8 00:01:47.916369 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Nov 8 00:01:47.916454 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Nov 8 00:01:47.916521 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Nov 8 00:01:47.916597 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Nov 8 00:01:47.916712 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Nov 8 00:01:47.916791 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Nov 8 00:01:47.916868 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Nov 8 00:01:47.917018 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Nov 8 00:01:47.917106 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Nov 8 00:01:47.917189 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 8 00:01:47.917266 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Nov 8 00:01:47.917341 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Nov 8 00:01:47.917417 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Nov 8 00:01:47.917496 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Nov 8 00:01:47.917562 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Nov 8 00:01:47.917659 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Nov 8 00:01:47.917752 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Nov 8 00:01:47.917820 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Nov 8 00:01:47.917897 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Nov 8 00:01:47.919090 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Nov 8 00:01:47.919195 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Nov 8 00:01:47.919276 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Nov 8 00:01:47.919360 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Nov 8 00:01:47.919432 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Nov 8 00:01:47.919510 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Nov 8 00:01:47.919590 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Nov 8 00:01:47.919694 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Nov 8 00:01:47.919772 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Nov 8 00:01:47.919857 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Nov 8 00:01:47.919925 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Nov 8 00:01:47.921105 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Nov 8 00:01:47.921207 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Nov 8 00:01:47.921291 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Nov 8 00:01:47.921359 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Nov 8 00:01:47.921439 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Nov 8 00:01:47.921512 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Nov 8 00:01:47.921586 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Nov 8 00:01:47.921734 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Nov 8 00:01:47.921814 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Nov 8 00:01:47.921883 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Nov 8 00:01:47.922959 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Nov 8 00:01:47.923059 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Nov 8 00:01:47.923136 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Nov 8 00:01:47.923220 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Nov 8 00:01:47.923302 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Nov 8 00:01:47.923376 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Nov 8 00:01:47.923448 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Nov 8 00:01:47.923529 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Nov 8 00:01:47.923602 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Nov 8 00:01:47.923725 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Nov 8 00:01:47.923829 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Nov 8 00:01:47.923906 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Nov 8 00:01:47.925065 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Nov 8 00:01:47.925145 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Nov 8 00:01:47.925215 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Nov 8 00:01:47.925281 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Nov 8 00:01:47.925350 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Nov 8 00:01:47.925415 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Nov 8 00:01:47.925481 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Nov 8 00:01:47.925539 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 8 00:01:47.925597 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Nov 8 00:01:47.925701 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Nov 8 00:01:47.925767 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Nov 8 00:01:47.925833 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Nov 8 00:01:47.925901 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Nov 8 00:01:47.926349 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Nov 8 00:01:47.926425 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Nov 8 00:01:47.926498 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Nov 8 00:01:47.926560 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Nov 8 00:01:47.926680 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Nov 8 00:01:47.926763 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Nov 8 00:01:47.926828 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Nov 8 00:01:47.926920 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Nov 8 00:01:47.928109 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Nov 8 00:01:47.928182 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Nov 8 00:01:47.928243 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Nov 8 00:01:47.928318 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Nov 8 00:01:47.928378 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Nov 8 00:01:47.928441 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Nov 8 00:01:47.928512 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Nov 8 00:01:47.928576 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Nov 8 00:01:47.928693 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Nov 8 00:01:47.928768 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Nov 8 00:01:47.928830 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Nov 8 00:01:47.928891 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Nov 8 00:01:47.929086 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Nov 8 00:01:47.929152 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Nov 8 00:01:47.929217 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Nov 8 00:01:47.929227 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 8 00:01:47.929235 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 8 00:01:47.929243 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 8 00:01:47.929251 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 8 00:01:47.929259 kernel: iommu: Default domain type: Translated
Nov 8 00:01:47.929267 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 8 00:01:47.929275 kernel: efivars: Registered efivars operations
Nov 8 00:01:47.929283 kernel: vgaarb: loaded
Nov 8 00:01:47.929293 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 8 00:01:47.929302 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:01:47.929310 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:01:47.929318 kernel: pnp: PnP ACPI init
Nov 8 00:01:47.929397 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Nov 8 00:01:47.929409 kernel: pnp: PnP ACPI: found 1 devices
Nov 8 00:01:47.929417 kernel: NET: Registered PF_INET protocol family
Nov 8 00:01:47.929425 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 00:01:47.929435 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 8 00:01:47.929444 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:01:47.929452 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 8 00:01:47.929460 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 8 00:01:47.929468 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 8 00:01:47.929476 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:01:47.929484 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:01:47.929492 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:01:47.929567 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Nov 8 00:01:47.929581 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:01:47.929589 kernel: kvm [1]: HYP mode not available
Nov 8 00:01:47.929597 kernel: Initialise system trusted keyrings
Nov 8 00:01:47.929605 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 8 00:01:47.929613 kernel: Key type asymmetric registered
Nov 8 00:01:47.929636 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:01:47.929644 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 8 00:01:47.929653 kernel: io scheduler mq-deadline registered
Nov 8 00:01:47.929660 kernel: io scheduler kyber registered
Nov 8 00:01:47.929671 kernel: io scheduler bfq registered
Nov 8 00:01:47.929679 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Nov 8 00:01:47.929757 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Nov 8 00:01:47.929825 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Nov 8 00:01:47.929892 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:01:47.931067 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Nov 8 00:01:47.931154 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Nov 8 00:01:47.931229 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:01:47.931301 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Nov 8 00:01:47.931369 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Nov 8 00:01:47.931435 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:01:47.931504 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Nov 8 00:01:47.931571 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Nov 8 00:01:47.931665 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:01:47.931741 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Nov 8 00:01:47.931809 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Nov 8 00:01:47.931876 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:01:47.933072 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Nov 8 00:01:47.933177 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Nov 8 00:01:47.933252 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:01:47.933322 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Nov 8 00:01:47.933388 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Nov 8 00:01:47.933453 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:01:47.933523 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Nov 8 00:01:47.933593 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Nov 8 00:01:47.933705 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:01:47.933719 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Nov 8 00:01:47.933793 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Nov 8 00:01:47.933861 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Nov 8 00:01:47.933926 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:01:47.933995 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 8 00:01:47.934006 kernel: ACPI: button: Power Button [PWRB]
Nov 8 00:01:47.934015 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Nov 8 00:01:47.934104 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Nov 8 00:01:47.934179 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Nov 8 00:01:47.934192 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:01:47.934200 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Nov 8 00:01:47.934269 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Nov 8 00:01:47.934279 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Nov 8 00:01:47.934288 kernel: thunder_xcv, ver 1.0
Nov 8 00:01:47.934299 kernel: thunder_bgx, ver 1.0
Nov 8 00:01:47.934306 kernel: nicpf, ver 1.0
Nov 8 00:01:47.934315 kernel: nicvf, ver 1.0
Nov 8 00:01:47.934394 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 8 00:01:47.934470 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-08T00:01:47 UTC (1762560107)
Nov 8 00:01:47.934482 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 8 00:01:47.934490 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Nov 8 00:01:47.934499 kernel: watchdog: Delayed init of the lockup detector failed: -19
Nov 8 00:01:47.934509 kernel: watchdog: Hard watchdog permanently disabled
Nov 8 00:01:47.934517 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:01:47.934525 kernel: Segment Routing with IPv6
Nov 8 00:01:47.934533 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:01:47.934541 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:01:47.934549 kernel: Key type dns_resolver registered
Nov 8 00:01:47.934557 kernel: registered taskstats version 1
Nov 8 00:01:47.934565 kernel: Loading compiled-in X.509 certificates
Nov 8 00:01:47.934573 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: e35af6a719ba4c60f9d6788b11f5e5836ebf73b5'
Nov 8 00:01:47.934583 kernel: Key type .fscrypt registered
Nov 8 00:01:47.934590 kernel: Key type fscrypt-provisioning registered
Nov 8 00:01:47.934598 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:01:47.934606 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:01:47.934614 kernel: ima: No architecture policies found
Nov 8 00:01:47.934632 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 8 00:01:47.934641 kernel: clk: Disabling unused clocks
Nov 8 00:01:47.934648 kernel: Freeing unused kernel memory: 39424K
Nov 8 00:01:47.934656 kernel: Run /init as init process
Nov 8 00:01:47.934666 kernel: with arguments:
Nov 8 00:01:47.934675 kernel: /init
Nov 8 00:01:47.934682 kernel: with environment:
Nov 8 00:01:47.934690 kernel: HOME=/
Nov 8 00:01:47.934698 kernel: TERM=linux
Nov 8 00:01:47.934708 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:01:47.934719 systemd[1]: Detected virtualization kvm.
Nov 8 00:01:47.934727 systemd[1]: Detected architecture arm64.
Nov 8 00:01:47.934737 systemd[1]: Running in initrd.
Nov 8 00:01:47.934745 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:01:47.934753 systemd[1]: Hostname set to .
Nov 8 00:01:47.934762 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:01:47.934770 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:01:47.934779 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:01:47.934788 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:01:47.934797 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:01:47.934807 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:01:47.934815 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:01:47.934824 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:01:47.934834 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:01:47.934843 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:01:47.934851 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:01:47.934860 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:01:47.934871 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:01:47.934880 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:01:47.934888 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:01:47.934897 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:01:47.934905 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:01:47.934913 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:01:47.934922 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:01:47.934930 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:01:47.936008 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:01:47.936018 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:01:47.936027 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:01:47.936046 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:01:47.936056 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:01:47.936064 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:01:47.936073 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:01:47.936081 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:01:47.936090 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:01:47.936102 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:01:47.936111 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:01:47.936119 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:01:47.936128 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:01:47.936136 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:01:47.936177 systemd-journald[236]: Collecting audit messages is disabled.
Nov 8 00:01:47.936203 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:01:47.936213 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:01:47.936223 kernel: Bridge firewalling registered
Nov 8 00:01:47.936232 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:01:47.936241 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:01:47.936250 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:01:47.936259 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:01:47.936267 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:01:47.936277 systemd-journald[236]: Journal started
Nov 8 00:01:47.936298 systemd-journald[236]: Runtime Journal (/run/log/journal/6ae0e36f67a54c0a9a7ec30e1bc8e4be) is 8.0M, max 76.6M, 68.6M free.
Nov 8 00:01:47.889686 systemd-modules-load[237]: Inserted module 'overlay'
Nov 8 00:01:47.904755 systemd-modules-load[237]: Inserted module 'br_netfilter'
Nov 8 00:01:47.940224 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:01:47.940249 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:01:47.952219 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:01:47.956057 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:01:47.956929 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:01:47.959288 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:01:47.982257 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:01:47.984605 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:01:47.996145 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:01:47.998740 dracut-cmdline[272]: dracut-dracut-053
Nov 8 00:01:47.999956 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=653fdcb8a67e255793a721f32d76976d3ed6223b235b7c618cf75e5edffbdb68
Nov 8 00:01:48.032424 systemd-resolved[276]: Positive Trust Anchors:
Nov 8 00:01:48.032447 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:01:48.032488 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:01:48.038450 systemd-resolved[276]: Defaulting to hostname 'linux'.
Nov 8 00:01:48.041077 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:01:48.042322 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:01:48.106994 kernel: SCSI subsystem initialized
Nov 8 00:01:48.111975 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:01:48.119997 kernel: iscsi: registered transport (tcp)
Nov 8 00:01:48.133240 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:01:48.133297 kernel: QLogic iSCSI HBA Driver
Nov 8 00:01:48.182563 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:01:48.190199 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:01:48.209089 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:01:48.209737 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:01:48.209752 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:01:48.259017 kernel: raid6: neonx8 gen() 15673 MB/s
Nov 8 00:01:48.276016 kernel: raid6: neonx4 gen() 15555 MB/s
Nov 8 00:01:48.293031 kernel: raid6: neonx2 gen() 13171 MB/s
Nov 8 00:01:48.309996 kernel: raid6: neonx1 gen() 10437 MB/s
Nov 8 00:01:48.326985 kernel: raid6: int64x8 gen() 6919 MB/s
Nov 8 00:01:48.344009 kernel: raid6: int64x4 gen() 7322 MB/s
Nov 8 00:01:48.360993 kernel: raid6: int64x2 gen() 6104 MB/s
Nov 8 00:01:48.378022 kernel: raid6: int64x1 gen() 5036 MB/s
Nov 8 00:01:48.378112 kernel: raid6: using algorithm neonx8 gen() 15673 MB/s
Nov 8 00:01:48.394993 kernel: raid6: .... xor() 11939 MB/s, rmw enabled
Nov 8 00:01:48.395069 kernel: raid6: using neon recovery algorithm
Nov 8 00:01:48.400204 kernel: xor: measuring software checksum speed
Nov 8 00:01:48.400275 kernel: 8regs : 19778 MB/sec
Nov 8 00:01:48.400295 kernel: 32regs : 19688 MB/sec
Nov 8 00:01:48.401001 kernel: arm64_neon : 27016 MB/sec
Nov 8 00:01:48.401033 kernel: xor: using function: arm64_neon (27016 MB/sec)
Nov 8 00:01:48.451018 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:01:48.467608 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:01:48.474206 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:01:48.487874 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Nov 8 00:01:48.491429 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:01:48.499156 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:01:48.515012 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Nov 8 00:01:48.551099 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:01:48.557228 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:01:48.612202 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:01:48.619141 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:01:48.647539 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:01:48.648682 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:01:48.649949 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:01:48.650526 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:01:48.658522 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:01:48.673536 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:01:48.743967 kernel: scsi host0: Virtio SCSI HBA
Nov 8 00:01:48.748386 kernel: ACPI: bus type USB registered
Nov 8 00:01:48.748452 kernel: usbcore: registered new interface driver usbfs
Nov 8 00:01:48.748463 kernel: usbcore: registered new interface driver hub
Nov 8 00:01:48.748473 kernel: usbcore: registered new device driver usb
Nov 8 00:01:48.750595 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 8 00:01:48.750665 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Nov 8 00:01:48.754188 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:01:48.754322 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:01:48.758193 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:01:48.758753 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:01:48.759501 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:01:48.762439 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:01:48.777341 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:01:48.784965 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Nov 8 00:01:48.785175 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Nov 8 00:01:48.786052 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Nov 8 00:01:48.791284 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Nov 8 00:01:48.791487 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Nov 8 00:01:48.793067 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Nov 8 00:01:48.794957 kernel: hub 1-0:1.0: USB hub found
Nov 8 00:01:48.795145 kernel: hub 1-0:1.0: 4 ports detected
Nov 8 00:01:48.794991 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:01:48.797150 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Nov 8 00:01:48.797311 kernel: hub 2-0:1.0: USB hub found
Nov 8 00:01:48.797991 kernel: hub 2-0:1.0: 4 ports detected
Nov 8 00:01:48.803145 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:01:48.805498 kernel: sr 0:0:0:0: Power-on or device reset occurred
Nov 8 00:01:48.809701 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Nov 8 00:01:48.809884 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 8 00:01:48.811996 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 8 00:01:48.820974 kernel: sd 0:0:0:1: Power-on or device reset occurred
Nov 8 00:01:48.822002 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Nov 8 00:01:48.822102 kernel: sd 0:0:0:1: [sda] Write Protect is off
Nov 8 00:01:48.822187 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Nov 8 00:01:48.823682 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 8 00:01:48.828307 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 8 00:01:48.828362 kernel: GPT:17805311 != 80003071
Nov 8 00:01:48.828372 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 8 00:01:48.831336 kernel: GPT:17805311 != 80003071
Nov 8 00:01:48.831383 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 00:01:48.831395 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:01:48.831406 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Nov 8 00:01:48.843115 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:01:48.875968 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (515)
Nov 8 00:01:48.881689 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Nov 8 00:01:48.886967 kernel: BTRFS: device fsid 55a292e1-3824-4229-a9ae-952140d2698c devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (511)
Nov 8 00:01:48.893378 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Nov 8 00:01:48.902966 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 8 00:01:48.912243 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Nov 8 00:01:48.912927 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Nov 8 00:01:48.929172 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:01:48.936113 disk-uuid[574]: Primary Header is updated. Nov 8 00:01:48.936113 disk-uuid[574]: Secondary Entries is updated. Nov 8 00:01:48.936113 disk-uuid[574]: Secondary Header is updated. Nov 8 00:01:48.942967 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:01:48.946992 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:01:48.952014 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:01:49.032959 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Nov 8 00:01:49.168721 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Nov 8 00:01:49.168798 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Nov 8 00:01:49.169074 kernel: usbcore: registered new interface driver usbhid Nov 8 00:01:49.169962 kernel: usbhid: USB HID core driver Nov 8 00:01:49.274987 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Nov 8 00:01:49.406149 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Nov 8 00:01:49.459991 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Nov 8 00:01:49.955011 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:01:49.955561 disk-uuid[575]: The operation has completed successfully. Nov 8 00:01:50.012188 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:01:50.012289 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:01:50.022146 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 00:01:50.028045 sh[592]: Success Nov 8 00:01:50.043323 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Nov 8 00:01:50.100667 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 8 00:01:50.105338 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 8 00:01:50.107707 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 8 00:01:50.125258 kernel: BTRFS info (device dm-0): first mount of filesystem 55a292e1-3824-4229-a9ae-952140d2698c Nov 8 00:01:50.125324 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 8 00:01:50.125341 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:01:50.125366 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:01:50.125382 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:01:50.132992 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 8 00:01:50.134707 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:01:50.137567 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
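verity-setup.service above opens /dev/mapper/usr as a dm-verity device: every block read from the /usr partition is hashed with SHA-256 (hardware-accelerated, per the "sha256-ce" line) and checked against a Merkle tree whose root hash was supplied on the kernel command line. A minimal sketch of the leaf-level check, assuming dm-verity's usual 4096-byte block size; the real target verifies the whole tree up to the trusted root:

    import hashlib

    BLOCK_SIZE = 4096  # dm-verity's common default; an assumption here

    def verify_block(data: bytes, expected_digest: bytes) -> None:
        # Raise on mismatch, as dm-verity fails the read with an I/O error.
        if hashlib.sha256(data).digest() != expected_digest:
            raise IOError("verity: block digest mismatch")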
Nov 8 00:01:50.147230 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:01:50.150130 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 8 00:01:50.163769 kernel: BTRFS info (device sda6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9 Nov 8 00:01:50.163844 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Nov 8 00:01:50.163859 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:01:50.168989 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:01:50.169049 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:01:50.180166 kernel: BTRFS info (device sda6): last unmount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9 Nov 8 00:01:50.180466 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 8 00:01:50.187971 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:01:50.196258 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 00:01:50.269247 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:01:50.279319 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:01:50.292530 ignition[696]: Ignition 2.19.0 Nov 8 00:01:50.292538 ignition[696]: Stage: fetch-offline Nov 8 00:01:50.292575 ignition[696]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:01:50.292584 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 8 00:01:50.292749 ignition[696]: parsed url from cmdline: "" Nov 8 00:01:50.292752 ignition[696]: no config URL provided Nov 8 00:01:50.292757 ignition[696]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:01:50.292764 ignition[696]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:01:50.297256 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:01:50.292769 ignition[696]: failed to fetch config: resource requires networking Nov 8 00:01:50.292957 ignition[696]: Ignition finished successfully Nov 8 00:01:50.305231 systemd-networkd[778]: lo: Link UP Nov 8 00:01:50.305246 systemd-networkd[778]: lo: Gained carrier Nov 8 00:01:50.307352 systemd-networkd[778]: Enumeration completed Nov 8 00:01:50.307630 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:01:50.308681 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:01:50.308684 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:01:50.309158 systemd[1]: Reached target network.target - Network. Nov 8 00:01:50.310849 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:01:50.310852 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:01:50.312240 systemd-networkd[778]: eth0: Link UP Nov 8 00:01:50.312244 systemd-networkd[778]: eth0: Gained carrier Nov 8 00:01:50.312252 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:01:50.318415 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 8 00:01:50.320654 systemd-networkd[778]: eth1: Link UP Nov 8 00:01:50.320661 systemd-networkd[778]: eth1: Gained carrier Nov 8 00:01:50.320677 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:01:50.336543 ignition[781]: Ignition 2.19.0 Nov 8 00:01:50.336555 ignition[781]: Stage: fetch Nov 8 00:01:50.336748 ignition[781]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:01:50.336758 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 8 00:01:50.336863 ignition[781]: parsed url from cmdline: "" Nov 8 00:01:50.336866 ignition[781]: no config URL provided Nov 8 00:01:50.336879 ignition[781]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:01:50.336888 ignition[781]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:01:50.336908 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Nov 8 00:01:50.337544 ignition[781]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Nov 8 00:01:50.356024 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Nov 8 00:01:50.390056 systemd-networkd[778]: eth0: DHCPv4 address 46.224.42.7/32, gateway 172.31.1.1 acquired from 172.31.1.1 Nov 8 00:01:50.538499 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Nov 8 00:01:50.548006 ignition[781]: GET result: OK Nov 8 00:01:50.548174 ignition[781]: parsing config with SHA512: ef2ae3b191739257ee598bbad20cc7c940f191d21ac0e6d7ac9fa12fbb686cd1a1c46a6b79d43bf777de6c13dadafab271d75769158356b70dc1ec4316f008ad Nov 8 00:01:50.553440 unknown[781]: fetched base config from "system" Nov 8 00:01:50.553450 unknown[781]: fetched base config from "system" Nov 8 00:01:50.554308 ignition[781]: fetch: fetch complete Nov 8 00:01:50.553457 unknown[781]: fetched user config from "hetzner" Nov 8 00:01:50.554314 ignition[781]: fetch: fetch passed Nov 8 00:01:50.556259 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 8 00:01:50.554365 ignition[781]: Ignition finished successfully Nov 8 00:01:50.568239 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 8 00:01:50.585772 ignition[789]: Ignition 2.19.0 Nov 8 00:01:50.585795 ignition[789]: Stage: kargs Nov 8 00:01:50.586259 ignition[789]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:01:50.586331 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 8 00:01:50.588900 ignition[789]: kargs: kargs passed Nov 8 00:01:50.589050 ignition[789]: Ignition finished successfully Nov 8 00:01:50.590896 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:01:50.595129 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:01:50.610982 ignition[795]: Ignition 2.19.0 Nov 8 00:01:50.611591 ignition[795]: Stage: disks Nov 8 00:01:50.611804 ignition[795]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:01:50.611815 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 8 00:01:50.612865 ignition[795]: disks: disks passed Nov 8 00:01:50.612920 ignition[795]: Ignition finished successfully Nov 8 00:01:50.617570 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:01:50.618286 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:01:50.619864 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
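The fetch stage above polls the Hetzner metadata service: attempt #1 fails because DHCP has not yet completed, and attempt #2 succeeds once eth0 holds 46.224.42.7; Ignition then records a SHA512 digest of the retrieved config. Roughly what that retry loop amounts to (URL taken from the log; timing and error handling are simplified assumptions):

    import time
    import urllib.request

    URL = "http://169.254.169.254/hetzner/v1/userdata"

    def fetch_userdata(retries: int = 5, delay: float = 1.0) -> bytes:
        for attempt in range(1, retries + 1):
            try:
                with urllib.request.urlopen(URL, timeout=10) as resp:
                    return resp.read()
            except OSError:  # covers "network is unreachable"
                if attempt == retries:
                    raise
                time.sleep(delay)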
Nov 8 00:01:50.621938 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:01:50.623511 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:01:50.624734 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:01:50.631189 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:01:50.646563 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Nov 8 00:01:50.653391 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:01:50.660063 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 8 00:01:50.701020 kernel: EXT4-fs (sda9): mounted filesystem ba97f76e-2e9b-450a-8320-3c4b94a19632 r/w with ordered data mode. Quota mode: none. Nov 8 00:01:50.701069 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:01:50.702645 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 8 00:01:50.714170 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:01:50.718101 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 8 00:01:50.721265 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 8 00:01:50.723061 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:01:50.723094 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:01:50.733643 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (811) Nov 8 00:01:50.733691 kernel: BTRFS info (device sda6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9 Nov 8 00:01:50.734295 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Nov 8 00:01:50.734944 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:01:50.737624 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 8 00:01:50.743179 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 8 00:01:50.753508 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:01:50.753589 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:01:50.760136 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:01:50.788982 coreos-metadata[813]: Nov 08 00:01:50.788 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Nov 8 00:01:50.790810 coreos-metadata[813]: Nov 08 00:01:50.790 INFO Fetch successful Nov 8 00:01:50.792279 coreos-metadata[813]: Nov 08 00:01:50.792 INFO wrote hostname ci-4081-3-6-n-8957f209ae to /sysroot/etc/hostname Nov 8 00:01:50.795512 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:01:50.798809 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:01:50.804127 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:01:50.809078 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:01:50.813750 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:01:50.911368 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:01:50.918066 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:01:50.921820 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
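flatcar-metadata-hostname above fetches the machine's hostname from the metadata service and writes it into the still-unbooted root at /sysroot/etc/hostname; the following "cut: ... No such file or directory" lines look harmless here, since initrd-setup-root is seeding /sysroot/etc with first-boot defaults and those files do not exist yet. An approximation of the hostname step (endpoint and path from the log, everything else simplified):

    import urllib.request

    def write_hostname(sysroot: str = "/sysroot") -> str:
        url = "http://169.254.169.254/hetzner/v1/metadata/hostname"
        with urllib.request.urlopen(url, timeout=10) as resp:
            hostname = resp.read().decode().strip()
        with open(f"{sysroot}/etc/hostname", "w") as f:
            f.write(hostname + "\n")
        return hostname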
Nov 8 00:01:50.932020 kernel: BTRFS info (device sda6): last unmount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9 Nov 8 00:01:50.956668 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:01:50.964965 ignition[928]: INFO : Ignition 2.19.0 Nov 8 00:01:50.964965 ignition[928]: INFO : Stage: mount Nov 8 00:01:50.964965 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:01:50.964965 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 8 00:01:50.969201 ignition[928]: INFO : mount: mount passed Nov 8 00:01:50.969201 ignition[928]: INFO : Ignition finished successfully Nov 8 00:01:50.969315 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:01:50.979132 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:01:51.125198 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 8 00:01:51.140294 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:01:51.151983 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (940) Nov 8 00:01:51.153566 kernel: BTRFS info (device sda6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9 Nov 8 00:01:51.153636 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Nov 8 00:01:51.153676 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:01:51.157959 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:01:51.158034 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:01:51.160971 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:01:51.184377 ignition[957]: INFO : Ignition 2.19.0 Nov 8 00:01:51.184377 ignition[957]: INFO : Stage: files Nov 8 00:01:51.187856 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:01:51.187856 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 8 00:01:51.187856 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:01:51.189994 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:01:51.189994 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:01:51.194533 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:01:51.195890 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:01:51.197592 unknown[957]: wrote ssh authorized keys file for user: core Nov 8 00:01:51.199006 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:01:51.200499 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 8 00:01:51.201776 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Nov 8 00:01:51.282019 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 8 00:01:51.356120 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 8 00:01:51.357222 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:01:51.357222 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] 
writing file "/sysroot/home/core/install.sh" Nov 8 00:01:51.357222 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:01:51.357222 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:01:51.357222 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:01:51.357222 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:01:51.357222 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:01:51.357222 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:01:51.357222 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:01:51.357222 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:01:51.367573 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 8 00:01:51.367573 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 8 00:01:51.367573 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 8 00:01:51.367573 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Nov 8 00:01:51.566155 systemd-networkd[778]: eth1: Gained IPv6LL Nov 8 00:01:51.630544 systemd-networkd[778]: eth0: Gained IPv6LL Nov 8 00:01:51.652395 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 8 00:01:52.228179 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 8 00:01:52.228179 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 8 00:01:52.234118 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:01:52.234118 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:01:52.234118 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 8 00:01:52.234118 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 8 00:01:52.234118 ignition[957]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 8 00:01:52.234118 ignition[957]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at 
"/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 8 00:01:52.234118 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 8 00:01:52.234118 ignition[957]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:01:52.234118 ignition[957]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:01:52.234118 ignition[957]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:01:52.234118 ignition[957]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:01:52.234118 ignition[957]: INFO : files: files passed Nov 8 00:01:52.234118 ignition[957]: INFO : Ignition finished successfully Nov 8 00:01:52.234931 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:01:52.242197 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:01:52.246678 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:01:52.252214 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:01:52.252323 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:01:52.262126 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:01:52.262126 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:01:52.265529 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:01:52.268380 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:01:52.269562 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:01:52.276170 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:01:52.308557 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:01:52.308760 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:01:52.310647 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:01:52.311954 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:01:52.312569 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:01:52.318110 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:01:52.330214 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:01:52.336165 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:01:52.351910 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:01:52.353088 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:01:52.354651 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:01:52.355691 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:01:52.355816 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:01:52.357330 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Nov 8 00:01:52.358635 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:01:52.359650 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:01:52.360564 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:01:52.361677 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:01:52.362749 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:01:52.363798 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:01:52.364882 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:01:52.366078 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:01:52.367110 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:01:52.367972 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:01:52.368095 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:01:52.369377 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:01:52.370051 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:01:52.371164 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:01:52.371678 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:01:52.372413 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:01:52.372529 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:01:52.374146 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:01:52.374281 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:01:52.375794 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:01:52.375892 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:01:52.377321 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 8 00:01:52.377414 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:01:52.388072 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:01:52.390776 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:01:52.390964 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:01:52.400306 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:01:52.402096 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:01:52.403074 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:01:52.406664 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:01:52.407384 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:01:52.411044 ignition[1010]: INFO : Ignition 2.19.0 Nov 8 00:01:52.411044 ignition[1010]: INFO : Stage: umount Nov 8 00:01:52.411044 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:01:52.411044 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 8 00:01:52.415820 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:01:52.416386 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:01:52.416498 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Nov 8 00:01:52.419522 ignition[1010]: INFO : umount: umount passed Nov 8 00:01:52.419522 ignition[1010]: INFO : Ignition finished successfully Nov 8 00:01:52.420218 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:01:52.420330 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:01:52.421283 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:01:52.421327 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:01:52.422343 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:01:52.422385 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:01:52.423210 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 8 00:01:52.423244 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 8 00:01:52.424187 systemd[1]: Stopped target network.target - Network. Nov 8 00:01:52.424928 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:01:52.424985 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:01:52.426236 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:01:52.427008 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:01:52.431036 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:01:52.433352 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:01:52.434539 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:01:52.435296 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:01:52.435353 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:01:52.436481 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:01:52.436526 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:01:52.437606 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:01:52.437667 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:01:52.438694 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:01:52.438741 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:01:52.439531 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:01:52.440440 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:01:52.442018 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:01:52.442121 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:01:52.443054 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:01:52.443127 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:01:52.446044 systemd-networkd[778]: eth0: DHCPv6 lease lost Nov 8 00:01:52.450010 systemd-networkd[778]: eth1: DHCPv6 lease lost Nov 8 00:01:52.453539 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:01:52.454004 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:01:52.456027 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:01:52.456123 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:01:52.459447 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:01:52.459519 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:01:52.465069 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Nov 8 00:01:52.465662 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:01:52.465729 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:01:52.468819 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:01:52.468876 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:01:52.470121 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:01:52.470191 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:01:52.471361 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:01:52.471416 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:01:52.472584 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:01:52.482673 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:01:52.482822 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:01:52.498065 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:01:52.498331 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:01:52.501158 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:01:52.501207 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:01:52.502257 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:01:52.502291 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:01:52.503190 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:01:52.503232 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:01:52.504665 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:01:52.504707 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:01:52.506429 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:01:52.506478 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:01:52.513132 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:01:52.515000 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:01:52.515814 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:01:52.518551 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:01:52.518620 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:01:52.519697 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:01:52.519793 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:01:52.520916 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:01:52.523680 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:01:52.536349 systemd[1]: Switching root. Nov 8 00:01:52.580097 systemd-journald[236]: Journal stopped Nov 8 00:01:53.543568 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). 
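"Switching root." above is the hand-off from the initrd to the real system: /sysroot is moved over /, PID 1 re-executes itself inside it, and the initrd journal stops (hence the SIGTERM line). A bare conceptual sketch, not the actual mechanism, which uses MS_MOVE mounts and keeps file descriptors alive across the exec:

    import os

    def switch_root(new_root: str = "/sysroot") -> None:
        os.chdir(new_root)
        # A mount --move of new_root onto / (MS_MOVE) belongs here; the
        # Python stdlib has no mount(2) wrapper, so it is elided.
        os.chroot(".")
        os.chdir("/")
        os.execv("/usr/lib/systemd/systemd", ["systemd"])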
Nov 8 00:01:53.543643 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:01:53.543658 kernel: SELinux: policy capability open_perms=1 Nov 8 00:01:53.543675 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:01:53.543684 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:01:53.543694 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:01:53.543704 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:01:53.543714 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:01:53.543723 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:01:53.543734 systemd[1]: Successfully loaded SELinux policy in 35.450ms. Nov 8 00:01:53.543760 kernel: audit: type=1403 audit(1762560112.749:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:01:53.543773 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.360ms. Nov 8 00:01:53.543785 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:01:53.543796 systemd[1]: Detected virtualization kvm. Nov 8 00:01:53.543807 systemd[1]: Detected architecture arm64. Nov 8 00:01:53.543821 systemd[1]: Detected first boot. Nov 8 00:01:53.543831 systemd[1]: Hostname set to . Nov 8 00:01:53.543841 systemd[1]: Initializing machine ID from VM UUID. Nov 8 00:01:53.543852 zram_generator::config[1053]: No configuration found. Nov 8 00:01:53.543868 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:01:53.543878 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 8 00:01:53.543889 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 8 00:01:53.543899 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 8 00:01:53.543910 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:01:53.543921 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:01:53.545997 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:01:53.546030 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:01:53.546043 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:01:53.546059 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:01:53.546070 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:01:53.546080 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:01:53.546091 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:01:53.546101 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:01:53.546113 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:01:53.546124 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:01:53.546134 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
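"Initializing machine ID from VM UUID" above means systemd did not roll a random /etc/machine-id: on a KVM guest it can derive the ID from the SMBIOS product UUID, keeping the ID stable across reinstalls of the same VM. Roughly (the sysfs path is the standard location; systemd's exact transformation may differ in detail):

    def machine_id_from_vm_uuid() -> str:
        # 32 lowercase hex digits, i.e. the UUID with dashes removed.
        with open("/sys/class/dmi/id/product_uuid") as f:
            return f.read().strip().lower().replace("-", "")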
Nov 8 00:01:53.546146 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:01:53.546157 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 8 00:01:53.546167 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:01:53.546177 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 8 00:01:53.546188 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 8 00:01:53.546199 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 8 00:01:53.546209 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:01:53.546221 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:01:53.546232 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:01:53.546246 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:01:53.546257 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:01:53.546268 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:01:53.546278 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:01:53.546288 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:01:53.546299 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:01:53.546313 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:01:53.546325 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:01:53.546336 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:01:53.546347 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:01:53.546357 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:01:53.546368 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:01:53.546378 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:01:53.546388 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:01:53.546400 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:01:53.546410 systemd[1]: Reached target machines.target - Containers. Nov 8 00:01:53.546423 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:01:53.546433 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:01:53.546444 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:01:53.546457 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:01:53.546470 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:01:53.546484 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:01:53.546494 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:01:53.546505 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 8 00:01:53.546515 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Nov 8 00:01:53.546526 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:01:53.546537 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 8 00:01:53.546548 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 8 00:01:53.546558 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 8 00:01:53.546570 systemd[1]: Stopped systemd-fsck-usr.service. Nov 8 00:01:53.546581 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:01:53.546603 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:01:53.546616 kernel: fuse: init (API version 7.39) Nov 8 00:01:53.546628 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 8 00:01:53.546638 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:01:53.546649 kernel: loop: module loaded Nov 8 00:01:53.546659 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:01:53.546669 systemd[1]: verity-setup.service: Deactivated successfully. Nov 8 00:01:53.546680 systemd[1]: Stopped verity-setup.service. Nov 8 00:01:53.546693 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:01:53.546704 kernel: ACPI: bus type drm_connector registered Nov 8 00:01:53.546714 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:01:53.546725 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:01:53.546737 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:01:53.546748 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:01:53.546759 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:01:53.546770 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:01:53.546780 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:01:53.546791 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:01:53.546801 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:01:53.546812 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:01:53.546823 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:01:53.546835 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:01:53.546846 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:01:53.546857 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:01:53.546894 systemd-journald[1120]: Collecting audit messages is disabled. Nov 8 00:01:53.546919 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:01:53.546930 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:01:53.548980 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:01:53.548999 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:01:53.549012 systemd-journald[1120]: Journal started Nov 8 00:01:53.549038 systemd-journald[1120]: Runtime Journal (/run/log/journal/6ae0e36f67a54c0a9a7ec30e1bc8e4be) is 8.0M, max 76.6M, 68.6M free. Nov 8 00:01:53.281675 systemd[1]: Queued start job for default target multi-user.target. 
Nov 8 00:01:53.308477 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 8 00:01:53.309118 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 8 00:01:53.556422 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 8 00:01:53.560134 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:01:53.563960 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:01:53.570956 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:01:53.572660 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:01:53.575725 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:01:53.577875 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:01:53.580307 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:01:53.581232 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:01:53.597315 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 8 00:01:53.598051 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:01:53.598083 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:01:53.599819 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:01:53.606209 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:01:53.609175 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:01:53.611822 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:01:53.615718 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 8 00:01:53.619130 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:01:53.621100 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:01:53.628138 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 8 00:01:53.633255 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:01:53.637125 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:01:53.639883 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:01:53.643063 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:01:53.645300 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:01:53.651150 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:01:53.652462 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:01:53.662679 systemd-journald[1120]: Time spent on flushing to /var/log/journal/6ae0e36f67a54c0a9a7ec30e1bc8e4be is 91.349ms for 1126 entries. Nov 8 00:01:53.662679 systemd-journald[1120]: System Journal (/var/log/journal/6ae0e36f67a54c0a9a7ec30e1bc8e4be) is 8.0M, max 584.8M, 576.8M free. 
Nov 8 00:01:53.771779 systemd-journald[1120]: Received client request to flush runtime journal. Nov 8 00:01:53.771834 kernel: loop0: detected capacity change from 0 to 114328 Nov 8 00:01:53.771860 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:01:53.771875 kernel: loop1: detected capacity change from 0 to 8 Nov 8 00:01:53.663460 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 8 00:01:53.668170 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 8 00:01:53.671606 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:01:53.722990 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 8 00:01:53.734754 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:01:53.738297 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:01:53.741375 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 8 00:01:53.758653 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:01:53.772145 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:01:53.776725 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:01:53.799991 kernel: loop2: detected capacity change from 0 to 211168 Nov 8 00:01:53.815716 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Nov 8 00:01:53.815741 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Nov 8 00:01:53.821276 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:01:53.855985 kernel: loop3: detected capacity change from 0 to 114432 Nov 8 00:01:53.901976 kernel: loop4: detected capacity change from 0 to 114328 Nov 8 00:01:53.914954 kernel: loop5: detected capacity change from 0 to 8 Nov 8 00:01:53.916953 kernel: loop6: detected capacity change from 0 to 211168 Nov 8 00:01:53.941983 kernel: loop7: detected capacity change from 0 to 114432 Nov 8 00:01:53.952199 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Nov 8 00:01:53.953033 (sd-merge)[1193]: Merged extensions into '/usr'. Nov 8 00:01:53.963583 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:01:53.963647 systemd[1]: Reloading... Nov 8 00:01:54.086795 zram_generator::config[1219]: No configuration found. Nov 8 00:01:54.177101 ldconfig[1163]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:01:54.233438 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:01:54.281542 systemd[1]: Reloading finished in 317 ms. Nov 8 00:01:54.306908 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 8 00:01:54.311467 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:01:54.324308 systemd[1]: Starting ensure-sysext.service... Nov 8 00:01:54.327174 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
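The loop0..loop7 capacity lines and the (sd-merge) messages above are systemd-sysext at work: each extension image (containerd-flatcar, docker-flatcar, kubernetes, oem-hetzner) is attached as a loop device and overlaid onto /usr. An image is only merged if it ships a matching extension-release file; a simplified sketch of that compatibility gate (systemd additionally checks SYSEXT_LEVEL, VERSION_ID and architecture):

    def parse_release(path: str) -> dict[str, str]:
        # Parse an os-release-style KEY=VALUE file.
        fields = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, value = line.split("=", 1)
                    fields[key] = value.strip('"')
        return fields

    def sysext_compatible(host_osrel: str, ext_release: str) -> bool:
        host = parse_release(host_osrel)     # e.g. /etc/os-release
        ext = parse_release(ext_release)     # extension-release.<name>
        return ext.get("ID") in ("_any", host.get("ID"))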
Nov 8 00:01:54.329530 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:01:54.343247 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:01:54.348929 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:01:54.349363 systemd[1]: Reloading requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:01:54.349379 systemd[1]: Reloading... Nov 8 00:01:54.349430 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:01:54.350502 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:01:54.350971 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Nov 8 00:01:54.351044 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Nov 8 00:01:54.354785 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:01:54.354796 systemd-tmpfiles[1257]: Skipping /boot Nov 8 00:01:54.363774 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:01:54.363898 systemd-tmpfiles[1257]: Skipping /boot Nov 8 00:01:54.388547 systemd-udevd[1260]: Using default interface naming scheme 'v255'. Nov 8 00:01:54.439957 zram_generator::config[1286]: No configuration found. Nov 8 00:01:54.619015 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:01:54.632961 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:01:54.672378 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Nov 8 00:01:54.672892 systemd[1]: Reloading finished in 323 ms. Nov 8 00:01:54.699523 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:01:54.704008 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:01:54.725542 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Nov 8 00:01:54.741964 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Nov 8 00:01:54.742033 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 8 00:01:54.742046 kernel: [drm] features: -context_init Nov 8 00:01:54.742978 kernel: [drm] number of scanouts: 1 Nov 8 00:01:54.743047 kernel: [drm] number of cap sets: 0 Nov 8 00:01:54.745142 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Nov 8 00:01:54.751927 kernel: Console: switching to colour frame buffer device 160x50 Nov 8 00:01:54.762004 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 8 00:01:54.764240 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:01:54.780326 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:01:54.784947 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Nov 8 00:01:54.791203 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1302) Nov 8 00:01:54.797224 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:01:54.801840 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:01:54.806497 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:01:54.808246 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:01:54.814166 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:01:54.828293 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:01:54.834339 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:01:54.840391 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:01:54.842551 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:01:54.843696 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:01:54.844890 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:01:54.845752 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:01:54.871225 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:01:54.871618 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:01:54.885891 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:01:54.900323 systemd[1]: Finished ensure-sysext.service. Nov 8 00:01:54.901434 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:01:54.906166 augenrules[1394]: No rules Nov 8 00:01:54.907065 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:01:54.908419 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:01:54.921838 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 8 00:01:54.923132 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:01:54.925163 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:01:54.931202 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:01:54.935131 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:01:54.942171 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:01:54.945280 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:01:54.947952 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:01:54.950841 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:01:54.951636 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:01:54.954134 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:01:54.971038 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 8 00:01:54.974985 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Nov 8 00:01:54.979457 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:01:54.984947 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:01:54.985526 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:01:54.986443 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:01:54.988486 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:01:54.988699 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:01:54.991171 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:01:54.991324 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:01:54.992403 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:01:54.992545 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:01:54.994342 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:01:54.994754 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:01:54.999018 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:01:55.005764 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:01:55.010175 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:01:55.011646 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:01:55.011721 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:01:55.012153 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:01:55.036122 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:01:55.045681 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:01:55.060842 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 8 00:01:55.104997 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:01:55.121190 systemd-resolved[1380]: Positive Trust Anchors: Nov 8 00:01:55.121497 systemd-resolved[1380]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:01:55.121630 systemd-resolved[1380]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:01:55.126875 systemd-resolved[1380]: Using system hostname 'ci-4081-3-6-n-8957f209ae'. Nov 8 00:01:55.128556 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:01:55.130190 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
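The systemd-resolved lines above list its DNSSEC trust anchors: one positive anchor (the root zone's DS record) and a set of negative anchors, i.e. zones such as home.arpa and the RFC 1918 reverse zones for which resolved will not demand DNSSEC proofs. A sketch of the suffix match that decides whether a name falls under a negative anchor (an assumed simplification of resolved's actual lookup):

```python
NEGATIVE_ANCHORS = {
    "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa",
    "ipv4only.arpa", "resolver.arpa", "corp", "home", "internal",
    "intranet", "lan", "local", "private", "test",
}  # subset of the list logged above

def under_negative_anchor(name: str) -> bool:
    # A name is covered if it equals an anchor or is a subdomain of one.
    labels = name.rstrip(".").lower().split(".")
    return any(".".join(labels[i:]) in NEGATIVE_ANCHORS
               for i in range(len(labels)))

print(under_negative_anchor("printer.lan"))   # True  -> no DNSSEC required
print(under_negative_anchor("example.com"))   # False -> normal validation
```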
Nov 8 00:01:55.148228 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 8 00:01:55.149161 systemd-networkd[1378]: lo: Link UP Nov 8 00:01:55.149174 systemd-networkd[1378]: lo: Gained carrier Nov 8 00:01:55.150209 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:01:55.151003 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:01:55.152038 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:01:55.152869 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:01:55.152986 systemd-networkd[1378]: Enumeration completed Nov 8 00:01:55.153975 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:01:55.154012 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:01:55.154198 systemd-timesyncd[1410]: No network connectivity, watching for changes. Nov 8 00:01:55.154555 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:01:55.154728 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:01:55.154786 systemd-networkd[1378]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:01:55.155332 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:01:55.156233 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:01:55.156467 systemd-networkd[1378]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:01:55.156526 systemd-networkd[1378]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:01:55.156964 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:01:55.157206 systemd-networkd[1378]: eth0: Link UP Nov 8 00:01:55.157210 systemd-networkd[1378]: eth0: Gained carrier Nov 8 00:01:55.157223 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:01:55.158621 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:01:55.160879 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:01:55.164174 systemd-networkd[1378]: eth1: Link UP Nov 8 00:01:55.164387 systemd-networkd[1378]: eth1: Gained carrier Nov 8 00:01:55.164460 systemd-networkd[1378]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:01:55.169488 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:01:55.171510 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:01:55.172382 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:01:55.173118 systemd[1]: Reached target network.target - Network. Nov 8 00:01:55.173612 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:01:55.174156 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:01:55.174681 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
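Both NICs above matched /usr/lib/systemd/network/zz-default.network: networkd considers .network files in lexical filename order (with /etc overriding same-named files in /usr) and applies the first one whose [Match] section fits the link, which is why the catch-all carries a "zz-" prefix so any more specifically named file wins. A simplified sketch of that selection, assuming plain Name= glob patterns in [Match]:

```python
# Sketch of systemd-networkd's file selection: lexical order by filename,
# /etc overrides /usr on name collisions, first [Match] hit wins.
import configparser, fnmatch, glob, os

def pick_network_file(ifname, dirs=("/usr/lib/systemd/network",
                                    "/etc/systemd/network")):
    candidates = {}
    for d in dirs:  # later dirs override earlier ones on the same basename
        for path in glob.glob(os.path.join(d, "*.network")):
            candidates[os.path.basename(path)] = path
    for name in sorted(candidates):
        cp = configparser.ConfigParser(strict=False, interpolation=None)
        cp.read(candidates[name])
        patterns = cp.get("Match", "Name", fallback="").split()
        # An empty [Match] matches every link, like the zz-default catch-all.
        if not patterns or any(fnmatch.fnmatch(ifname, p) for p in patterns):
            return candidates[name]
    return None

print(pick_network_file("eth0"))  # -> .../zz-default.network on this box
```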
Nov 8 00:01:55.174719 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:01:55.181172 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:01:55.185243 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 8 00:01:55.189442 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:01:55.193190 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:01:55.195028 systemd-networkd[1378]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Nov 8 00:01:55.200240 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:01:55.202339 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection. Nov 8 00:01:55.203231 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:01:55.208776 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:01:55.212001 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:01:55.217155 systemd-networkd[1378]: eth0: DHCPv4 address 46.224.42.7/32, gateway 172.31.1.1 acquired from 172.31.1.1 Nov 8 00:01:55.222171 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Nov 8 00:01:55.226086 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:01:55.229089 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:01:55.237299 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:01:55.247178 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:01:55.251289 jq[1444]: false Nov 8 00:01:55.251928 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 8 00:01:55.252493 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:01:55.252539 dbus-daemon[1443]: [system] SELinux support is enabled Nov 8 00:01:55.262365 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:01:55.270100 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:01:55.272521 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:01:55.278744 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:01:55.278948 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:01:55.288499 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:01:55.288552 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:01:55.289766 extend-filesystems[1445]: Found loop4 Nov 8 00:01:55.292348 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:01:55.292378 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
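Note the shape of the leases above: eth1 got 10.0.0.3/32 and eth0 got 46.224.42.7/32 with gateway 172.31.1.1, a prefix that does not contain the gateway. That layout (common on Hetzner Cloud) only routes because networkd installs an explicit on-link host route to the gateway before the default route. A quick check with the stdlib ipaddress module:

```python
import ipaddress

iface = ipaddress.ip_interface("46.224.42.7/32")   # eth0 lease from the log
gateway = ipaddress.ip_address("172.31.1.1")       # gateway from the log

# The gateway lies outside the interface's /32 prefix, so it is unreachable
# until an explicit on-link host route to it exists.
print(gateway in iface.network)   # False
```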
Nov 8 00:01:55.293516 extend-filesystems[1445]: Found loop5 Nov 8 00:01:55.293516 extend-filesystems[1445]: Found loop6 Nov 8 00:01:55.293516 extend-filesystems[1445]: Found loop7 Nov 8 00:01:55.293516 extend-filesystems[1445]: Found sda Nov 8 00:01:55.293516 extend-filesystems[1445]: Found sda1 Nov 8 00:01:55.293516 extend-filesystems[1445]: Found sda2 Nov 8 00:01:55.293516 extend-filesystems[1445]: Found sda3 Nov 8 00:01:55.293516 extend-filesystems[1445]: Found usr Nov 8 00:01:55.293516 extend-filesystems[1445]: Found sda4 Nov 8 00:01:55.293516 extend-filesystems[1445]: Found sda6 Nov 8 00:01:55.293516 extend-filesystems[1445]: Found sda7 Nov 8 00:01:55.293516 extend-filesystems[1445]: Found sda9 Nov 8 00:01:55.293516 extend-filesystems[1445]: Checking size of /dev/sda9 Nov 8 00:01:55.321873 coreos-metadata[1442]: Nov 08 00:01:55.311 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Nov 8 00:01:55.321873 coreos-metadata[1442]: Nov 08 00:01:55.312 INFO Fetch successful Nov 8 00:01:55.321873 coreos-metadata[1442]: Nov 08 00:01:55.312 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Nov 8 00:01:55.321873 coreos-metadata[1442]: Nov 08 00:01:55.312 INFO Fetch successful Nov 8 00:01:55.328905 extend-filesystems[1445]: Resized partition /dev/sda9 Nov 8 00:01:55.334370 extend-filesystems[1480]: resize2fs 1.47.1 (20-May-2024) Nov 8 00:01:55.333228 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:01:55.333823 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 8 00:01:55.340169 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Nov 8 00:01:55.341228 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:01:55.341488 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:01:55.344070 systemd-timesyncd[1410]: Contacted time server 130.162.222.153:123 (0.flatcar.pool.ntp.org). Nov 8 00:01:55.344132 systemd-timesyncd[1410]: Initial clock synchronization to Sat 2025-11-08 00:01:55.626432 UTC. Nov 8 00:01:55.350470 jq[1458]: true Nov 8 00:01:55.366438 (ntainerd)[1483]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:01:55.376308 tar[1461]: linux-arm64/LICENSE Nov 8 00:01:55.376308 tar[1461]: linux-arm64/helm Nov 8 00:01:55.410922 jq[1484]: true Nov 8 00:01:55.426062 update_engine[1456]: I20251108 00:01:55.425695 1456 main.cc:92] Flatcar Update Engine starting Nov 8 00:01:55.436102 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:01:55.436576 update_engine[1456]: I20251108 00:01:55.436092 1456 update_check_scheduler.cc:74] Next update check in 11m17s Nov 8 00:01:55.445140 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:01:55.456189 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 8 00:01:55.457114 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
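coreos-metadata above reads instance data from the link-local endpoint http://169.254.169.254/hetzner/v1/metadata. A dependency-free sketch of the same fetches; it can only succeed from inside a Hetzner instance, where that address answers:

```python
# Sketch: fetch the same Hetzner metadata endpoints coreos-metadata reads
# above. 169.254.169.254 is link-local, so this only works on the instance.
import urllib.request

BASE = "http://169.254.169.254/hetzner/v1/metadata"
for path in ("", "/private-networks", "/public-keys"):
    with urllib.request.urlopen(BASE + path, timeout=5) as resp:
        print(f"--- {BASE + path}")
        print(resp.read().decode())
```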
Nov 8 00:01:55.484957 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1297) Nov 8 00:01:55.500259 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Nov 8 00:01:55.501827 extend-filesystems[1480]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 8 00:01:55.501827 extend-filesystems[1480]: old_desc_blocks = 1, new_desc_blocks = 5 Nov 8 00:01:55.501827 extend-filesystems[1480]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Nov 8 00:01:55.513705 extend-filesystems[1445]: Resized filesystem in /dev/sda9 Nov 8 00:01:55.513705 extend-filesystems[1445]: Found sr0 Nov 8 00:01:55.510294 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:01:55.519524 bash[1512]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:01:55.510484 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:01:55.520036 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:01:55.540683 systemd-logind[1454]: New seat seat0. Nov 8 00:01:55.546095 systemd[1]: Starting sshkeys.service... Nov 8 00:01:55.554850 systemd-logind[1454]: Watching system buttons on /dev/input/event0 (Power Button) Nov 8 00:01:55.554875 systemd-logind[1454]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Nov 8 00:01:55.555091 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:01:55.578892 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 8 00:01:55.592437 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 8 00:01:55.635413 coreos-metadata[1521]: Nov 08 00:01:55.635 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Nov 8 00:01:55.639753 coreos-metadata[1521]: Nov 08 00:01:55.638 INFO Fetch successful Nov 8 00:01:55.641596 unknown[1521]: wrote ssh authorized keys file for user: core Nov 8 00:01:55.684166 update-ssh-keys[1526]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:01:55.687988 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 8 00:01:55.694986 systemd[1]: Finished sshkeys.service. Nov 8 00:01:55.773671 containerd[1483]: time="2025-11-08T00:01:55.773554440Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:01:55.823565 locksmithd[1497]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:01:55.834944 containerd[1483]: time="2025-11-08T00:01:55.832542920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:01:55.841007 containerd[1483]: time="2025-11-08T00:01:55.840144680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:01:55.841007 containerd[1483]: time="2025-11-08T00:01:55.840186640Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:01:55.841007 containerd[1483]: time="2025-11-08T00:01:55.840204480Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Nov 8 00:01:55.841007 containerd[1483]: time="2025-11-08T00:01:55.840371720Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:01:55.841007 containerd[1483]: time="2025-11-08T00:01:55.840388280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:01:55.841007 containerd[1483]: time="2025-11-08T00:01:55.840439800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:01:55.841007 containerd[1483]: time="2025-11-08T00:01:55.840453280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:01:55.841007 containerd[1483]: time="2025-11-08T00:01:55.840625240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:01:55.841007 containerd[1483]: time="2025-11-08T00:01:55.840641960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:01:55.841007 containerd[1483]: time="2025-11-08T00:01:55.840654920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:01:55.841007 containerd[1483]: time="2025-11-08T00:01:55.840672160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:01:55.841309 containerd[1483]: time="2025-11-08T00:01:55.840745760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:01:55.841357 containerd[1483]: time="2025-11-08T00:01:55.840927000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:01:55.841517 containerd[1483]: time="2025-11-08T00:01:55.841497040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:01:55.844256 containerd[1483]: time="2025-11-08T00:01:55.843948560Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:01:55.844256 containerd[1483]: time="2025-11-08T00:01:55.844064600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:01:55.844256 containerd[1483]: time="2025-11-08T00:01:55.844112920Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:01:55.848858 containerd[1483]: time="2025-11-08T00:01:55.848830160Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:01:55.848986 containerd[1483]: time="2025-11-08T00:01:55.848970000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:01:55.849045 containerd[1483]: time="2025-11-08T00:01:55.849032600Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Nov 8 00:01:55.849097 containerd[1483]: time="2025-11-08T00:01:55.849085160Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:01:55.849177 containerd[1483]: time="2025-11-08T00:01:55.849161360Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:01:55.849370 containerd[1483]: time="2025-11-08T00:01:55.849350920Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:01:55.850286 containerd[1483]: time="2025-11-08T00:01:55.850264120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:01:55.852960 containerd[1483]: time="2025-11-08T00:01:55.852092520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:01:55.852960 containerd[1483]: time="2025-11-08T00:01:55.852117200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:01:55.852960 containerd[1483]: time="2025-11-08T00:01:55.852134040Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:01:55.852960 containerd[1483]: time="2025-11-08T00:01:55.852147880Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:01:55.852960 containerd[1483]: time="2025-11-08T00:01:55.852160720Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:01:55.852960 containerd[1483]: time="2025-11-08T00:01:55.852173040Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:01:55.852960 containerd[1483]: time="2025-11-08T00:01:55.852187840Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:01:55.852960 containerd[1483]: time="2025-11-08T00:01:55.852204000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:01:55.852960 containerd[1483]: time="2025-11-08T00:01:55.852217040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:01:55.852960 containerd[1483]: time="2025-11-08T00:01:55.852229920Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:01:55.852960 containerd[1483]: time="2025-11-08T00:01:55.852241960Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:01:55.852960 containerd[1483]: time="2025-11-08T00:01:55.852261120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:01:55.852960 containerd[1483]: time="2025-11-08T00:01:55.852275680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:01:55.852960 containerd[1483]: time="2025-11-08T00:01:55.852288680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:01:55.853226 containerd[1483]: time="2025-11-08T00:01:55.852301320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Nov 8 00:01:55.853226 containerd[1483]: time="2025-11-08T00:01:55.852313560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:01:55.853226 containerd[1483]: time="2025-11-08T00:01:55.852326880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:01:55.853226 containerd[1483]: time="2025-11-08T00:01:55.852339400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:01:55.853226 containerd[1483]: time="2025-11-08T00:01:55.852355040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:01:55.853226 containerd[1483]: time="2025-11-08T00:01:55.852368320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:01:55.853226 containerd[1483]: time="2025-11-08T00:01:55.852382320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:01:55.853226 containerd[1483]: time="2025-11-08T00:01:55.852394360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:01:55.853226 containerd[1483]: time="2025-11-08T00:01:55.852407520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:01:55.853226 containerd[1483]: time="2025-11-08T00:01:55.852419800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:01:55.853226 containerd[1483]: time="2025-11-08T00:01:55.852435640Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:01:55.853226 containerd[1483]: time="2025-11-08T00:01:55.852456600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:01:55.853226 containerd[1483]: time="2025-11-08T00:01:55.852468200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:01:55.853226 containerd[1483]: time="2025-11-08T00:01:55.852478880Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:01:55.853449 containerd[1483]: time="2025-11-08T00:01:55.852629720Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:01:55.853449 containerd[1483]: time="2025-11-08T00:01:55.852652120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:01:55.853449 containerd[1483]: time="2025-11-08T00:01:55.852662880Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:01:55.853449 containerd[1483]: time="2025-11-08T00:01:55.852675240Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:01:55.853449 containerd[1483]: time="2025-11-08T00:01:55.852684720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:01:55.853449 containerd[1483]: time="2025-11-08T00:01:55.852697160Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Nov 8 00:01:55.853449 containerd[1483]: time="2025-11-08T00:01:55.852706920Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:01:55.853449 containerd[1483]: time="2025-11-08T00:01:55.852727880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 8 00:01:55.857629 containerd[1483]: time="2025-11-08T00:01:55.856101880Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:01:55.857629 containerd[1483]: time="2025-11-08T00:01:55.856173360Z" level=info msg="Connect containerd service" Nov 8 00:01:55.857629 containerd[1483]: time="2025-11-08T00:01:55.856210560Z" level=info msg="using legacy CRI server" Nov 8 00:01:55.857629 containerd[1483]: time="2025-11-08T00:01:55.856217440Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:01:55.857629 containerd[1483]: time="2025-11-08T00:01:55.856316400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:01:55.857629 containerd[1483]: 
time="2025-11-08T00:01:55.857050720Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:01:55.857629 containerd[1483]: time="2025-11-08T00:01:55.857355120Z" level=info msg="Start subscribing containerd event" Nov 8 00:01:55.857629 containerd[1483]: time="2025-11-08T00:01:55.857418880Z" level=info msg="Start recovering state" Nov 8 00:01:55.857629 containerd[1483]: time="2025-11-08T00:01:55.857528720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:01:55.859945 containerd[1483]: time="2025-11-08T00:01:55.858136000Z" level=info msg="Start event monitor" Nov 8 00:01:55.859945 containerd[1483]: time="2025-11-08T00:01:55.858157000Z" level=info msg="Start snapshots syncer" Nov 8 00:01:55.859945 containerd[1483]: time="2025-11-08T00:01:55.858169720Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:01:55.859945 containerd[1483]: time="2025-11-08T00:01:55.858186520Z" level=info msg="Start streaming server" Nov 8 00:01:55.860207 containerd[1483]: time="2025-11-08T00:01:55.860165960Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:01:55.861105 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:01:55.861804 containerd[1483]: time="2025-11-08T00:01:55.861767200Z" level=info msg="containerd successfully booted in 0.089689s" Nov 8 00:01:56.109989 tar[1461]: linux-arm64/README.md Nov 8 00:01:56.125382 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:01:56.302177 systemd-networkd[1378]: eth1: Gained IPv6LL Nov 8 00:01:56.307141 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:01:56.308611 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:01:56.318210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:01:56.327302 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:01:56.360550 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:01:56.367556 systemd-networkd[1378]: eth0: Gained IPv6LL Nov 8 00:01:56.547964 sshd_keygen[1468]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:01:56.577865 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:01:56.586325 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:01:56.596252 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:01:56.596446 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:01:56.609290 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:01:56.621198 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:01:56.631997 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:01:56.640375 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 8 00:01:56.641395 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:01:57.181243 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:01:57.181486 (kubelet)[1573]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:01:57.183131 systemd[1]: Reached target multi-user.target - Multi-User System. 
Nov 8 00:01:57.189110 systemd[1]: Startup finished in 790ms (kernel) + 5.062s (initrd) + 4.474s (userspace) = 10.327s. Nov 8 00:01:57.708946 kubelet[1573]: E1108 00:01:57.708860 1573 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:01:57.712683 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:01:57.712847 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:02:07.964457 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:02:07.980427 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:02:08.100043 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:02:08.114591 (kubelet)[1592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:02:08.170598 kubelet[1592]: E1108 00:02:08.170510 1592 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:02:08.174372 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:02:08.174582 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:02:18.425497 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 8 00:02:18.436383 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:02:18.556266 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:02:18.567651 (kubelet)[1607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:02:18.618327 kubelet[1607]: E1108 00:02:18.618256 1607 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:02:18.621289 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:02:18.621599 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:02:28.190553 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:02:28.192204 systemd[1]: Started sshd@0-46.224.42.7:22-139.178.68.195:43814.service - OpenSSH per-connection server daemon (139.178.68.195:43814). Nov 8 00:02:28.872191 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 8 00:02:28.878335 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:02:29.001026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
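The kubelet failure loop that starts above is the expected state of a node that has not joined a cluster yet: kubelet.service is enabled, but /var/lib/kubelet/config.yaml is only written by `kubeadm init` or `kubeadm join`, so the unit keeps exiting and systemd keeps rescheduling it. Purely for illustration, a sketch that creates the missing file with the envelope every KubeletConfiguration carries; this silences the open() error but is not a working cluster config:

```python
# Illustrative only: write the file whose absence causes the error above.
# The real config is generated by `kubeadm init`/`kubeadm join`; these two
# fields are just the mandatory envelope. Needs root to write the path.
import pathlib

MINIMAL = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(MINIMAL)
```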
Nov 8 00:02:29.014428 (kubelet)[1625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:02:29.061558 kubelet[1625]: E1108 00:02:29.061489 1625 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:02:29.064002 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:02:29.064190 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:02:29.143341 sshd[1615]: Accepted publickey for core from 139.178.68.195 port 43814 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:02:29.146341 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:02:29.159999 systemd-logind[1454]: New session 1 of user core. Nov 8 00:02:29.161190 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:02:29.168365 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:02:29.181728 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:02:29.188442 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:02:29.202760 (systemd)[1634]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:02:29.318437 systemd[1634]: Queued start job for default target default.target. Nov 8 00:02:29.325798 systemd[1634]: Created slice app.slice - User Application Slice. Nov 8 00:02:29.325923 systemd[1634]: Reached target paths.target - Paths. Nov 8 00:02:29.326171 systemd[1634]: Reached target timers.target - Timers. Nov 8 00:02:29.327980 systemd[1634]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:02:29.341974 systemd[1634]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:02:29.342157 systemd[1634]: Reached target sockets.target - Sockets. Nov 8 00:02:29.342176 systemd[1634]: Reached target basic.target - Basic System. Nov 8 00:02:29.342228 systemd[1634]: Reached target default.target - Main User Target. Nov 8 00:02:29.342264 systemd[1634]: Startup finished in 131ms. Nov 8 00:02:29.342728 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:02:29.354298 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:02:30.015999 systemd[1]: Started sshd@1-46.224.42.7:22-139.178.68.195:43824.service - OpenSSH per-connection server daemon (139.178.68.195:43824). Nov 8 00:02:30.976243 sshd[1645]: Accepted publickey for core from 139.178.68.195 port 43824 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:02:30.977298 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:02:30.983167 systemd-logind[1454]: New session 2 of user core. Nov 8 00:02:30.997280 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:02:31.635337 sshd[1645]: pam_unix(sshd:session): session closed for user core Nov 8 00:02:31.640019 systemd[1]: sshd@1-46.224.42.7:22-139.178.68.195:43824.service: Deactivated successfully. Nov 8 00:02:31.642144 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:02:31.643534 systemd-logind[1454]: Session 2 logged out. Waiting for processes to exit. 
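The SHA256:X94Jdbmw... value in the "Accepted publickey" entries above is OpenSSH's key fingerprint: the SHA-256 digest of the raw public-key blob, base64-encoded with the trailing "=" padding stripped. A sketch that computes it from an authorized_keys-style line:

```python
# Sketch: reproduce OpenSSH's "SHA256:..." fingerprint (as seen in the
# "Accepted publickey" entries above) from an authorized_keys line.
import base64, hashlib

def ssh_fingerprint(pubkey_line: str) -> str:
    blob = base64.b64decode(pubkey_line.split()[1])  # field 2 is the key blob
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# ssh_fingerprint("ssh-ed25519 AAAAC3... core@host") -> "SHA256:..."
```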
Nov 8 00:02:31.644902 systemd-logind[1454]: Removed session 2. Nov 8 00:02:31.800465 systemd[1]: Started sshd@2-46.224.42.7:22-139.178.68.195:43830.service - OpenSSH per-connection server daemon (139.178.68.195:43830). Nov 8 00:02:32.725347 sshd[1652]: Accepted publickey for core from 139.178.68.195 port 43830 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:02:32.727509 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:02:32.732209 systemd-logind[1454]: New session 3 of user core. Nov 8 00:02:32.752551 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:02:33.370841 sshd[1652]: pam_unix(sshd:session): session closed for user core Nov 8 00:02:33.375838 systemd-logind[1454]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:02:33.376865 systemd[1]: sshd@2-46.224.42.7:22-139.178.68.195:43830.service: Deactivated successfully. Nov 8 00:02:33.379428 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:02:33.383767 systemd-logind[1454]: Removed session 3. Nov 8 00:02:33.532065 systemd[1]: Started sshd@3-46.224.42.7:22-139.178.68.195:49970.service - OpenSSH per-connection server daemon (139.178.68.195:49970). Nov 8 00:02:34.468432 sshd[1659]: Accepted publickey for core from 139.178.68.195 port 49970 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:02:34.470869 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:02:34.476188 systemd-logind[1454]: New session 4 of user core. Nov 8 00:02:34.485280 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:02:35.118199 sshd[1659]: pam_unix(sshd:session): session closed for user core Nov 8 00:02:35.122601 systemd[1]: sshd@3-46.224.42.7:22-139.178.68.195:49970.service: Deactivated successfully. Nov 8 00:02:35.124685 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:02:35.125807 systemd-logind[1454]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:02:35.127038 systemd-logind[1454]: Removed session 4. Nov 8 00:02:35.295324 systemd[1]: Started sshd@4-46.224.42.7:22-139.178.68.195:49972.service - OpenSSH per-connection server daemon (139.178.68.195:49972). Nov 8 00:02:36.250898 sshd[1666]: Accepted publickey for core from 139.178.68.195 port 49972 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:02:36.254386 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:02:36.258876 systemd-logind[1454]: New session 5 of user core. Nov 8 00:02:36.268314 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:02:36.774212 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:02:36.774532 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:02:36.793290 sudo[1669]: pam_unix(sudo:session): session closed for user root Nov 8 00:02:36.949669 sshd[1666]: pam_unix(sshd:session): session closed for user core Nov 8 00:02:36.957785 systemd-logind[1454]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:02:36.958043 systemd[1]: sshd@4-46.224.42.7:22-139.178.68.195:49972.service: Deactivated successfully. Nov 8 00:02:36.960227 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:02:36.961686 systemd-logind[1454]: Removed session 5. 
Nov 8 00:02:37.114431 systemd[1]: Started sshd@5-46.224.42.7:22-139.178.68.195:49988.service - OpenSSH per-connection server daemon (139.178.68.195:49988). Nov 8 00:02:38.043712 sshd[1674]: Accepted publickey for core from 139.178.68.195 port 49988 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:02:38.046282 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:02:38.051286 systemd-logind[1454]: New session 6 of user core. Nov 8 00:02:38.061288 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:02:38.542870 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:02:38.543252 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:02:38.547444 sudo[1678]: pam_unix(sudo:session): session closed for user root Nov 8 00:02:38.554314 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:02:38.554594 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:02:38.577396 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:02:38.579085 auditctl[1681]: No rules Nov 8 00:02:38.581285 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:02:38.581668 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:02:38.585713 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:02:38.639217 augenrules[1699]: No rules Nov 8 00:02:38.642115 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:02:38.644312 sudo[1677]: pam_unix(sudo:session): session closed for user root Nov 8 00:02:38.795783 sshd[1674]: pam_unix(sshd:session): session closed for user core Nov 8 00:02:38.800857 systemd[1]: sshd@5-46.224.42.7:22-139.178.68.195:49988.service: Deactivated successfully. Nov 8 00:02:38.802857 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:02:38.806255 systemd-logind[1454]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:02:38.808000 systemd-logind[1454]: Removed session 6. Nov 8 00:02:38.967621 systemd[1]: Started sshd@6-46.224.42.7:22-139.178.68.195:49996.service - OpenSSH per-connection server daemon (139.178.68.195:49996). Nov 8 00:02:39.315199 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 8 00:02:39.332366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:02:39.468316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:02:39.471403 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:02:39.513992 kubelet[1717]: E1108 00:02:39.513852 1717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:02:39.516622 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:02:39.516784 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
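The audit-rules restart above ends with augenrules reporting "No rules": augenrules builds the active rule set by concatenating /etc/audit/rules.d/*.rules in lexical order, and the sudo commands earlier in this session removed the only two fragments (80-selinux.rules, 99-default.rules). A simplified sketch of that merge step (the real tool also handles control options and de-duplication):

```python
# Sketch of augenrules' merge step: concatenate /etc/audit/rules.d/*.rules
# in lexical order. After the `rm -rf` above removed both fragments, the
# merged set is empty, hence "No rules".
import glob

def merge_rules(rules_dir="/etc/audit/rules.d"):
    lines = []
    for frag in sorted(glob.glob(rules_dir + "/*.rules")):
        with open(frag) as f:
            lines += [l.rstrip("\n") for l in f
                      if l.strip() and not l.lstrip().startswith("#")]
    return lines

print(merge_rules() or "No rules")
```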
Nov 8 00:02:39.891360 sshd[1707]: Accepted publickey for core from 139.178.68.195 port 49996 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:02:39.893523 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:02:39.899704 systemd-logind[1454]: New session 7 of user core. Nov 8 00:02:39.906338 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:02:40.391676 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:02:40.391988 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:02:40.712537 (dockerd)[1740]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:02:40.712628 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:02:40.863509 update_engine[1456]: I20251108 00:02:40.863416 1456 update_attempter.cc:509] Updating boot flags... Nov 8 00:02:40.925293 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1755) Nov 8 00:02:40.983984 dockerd[1740]: time="2025-11-08T00:02:40.982191230Z" level=info msg="Starting up" Nov 8 00:02:41.014996 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1754) Nov 8 00:02:41.116414 dockerd[1740]: time="2025-11-08T00:02:41.116064765Z" level=info msg="Loading containers: start." Nov 8 00:02:41.212010 kernel: Initializing XFRM netlink socket Nov 8 00:02:41.291583 systemd-networkd[1378]: docker0: Link UP Nov 8 00:02:41.313232 dockerd[1740]: time="2025-11-08T00:02:41.313153669Z" level=info msg="Loading containers: done." Nov 8 00:02:41.325209 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2482981570-merged.mount: Deactivated successfully. Nov 8 00:02:41.330645 dockerd[1740]: time="2025-11-08T00:02:41.330586486Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:02:41.330760 dockerd[1740]: time="2025-11-08T00:02:41.330701438Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:02:41.330852 dockerd[1740]: time="2025-11-08T00:02:41.330832436Z" level=info msg="Daemon has completed initialization" Nov 8 00:02:41.372847 dockerd[1740]: time="2025-11-08T00:02:41.371777645Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:02:41.372741 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:02:42.472065 containerd[1483]: time="2025-11-08T00:02:42.472019786Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 8 00:02:43.158201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3253778111.mount: Deactivated successfully. 
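Once dockerd logs "API listen on /run/docker.sock" above, the Engine API is reachable on that UNIX socket. A stdlib-only sketch that issues GET /version against it; HTTP/1.0 is used here on the assumption that the reply then comes back un-chunked and can simply be read to EOF:

```python
# Sketch: talk to the Engine API on the socket dockerd reports above.
import socket

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect("/run/docker.sock")          # needs access to the docker socket
s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
reply = b""
while chunk := s.recv(4096):
    reply += chunk
s.close()
print(reply.decode())   # headers + JSON body with Version, ApiVersion, ...
```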
Nov 8 00:02:44.121300 containerd[1483]: time="2025-11-08T00:02:44.120084923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:02:44.122797 containerd[1483]: time="2025-11-08T00:02:44.122765504Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390326" Nov 8 00:02:44.124737 containerd[1483]: time="2025-11-08T00:02:44.124686417Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:02:44.128497 containerd[1483]: time="2025-11-08T00:02:44.128436022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:02:44.131359 containerd[1483]: time="2025-11-08T00:02:44.131302808Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.659223487s" Nov 8 00:02:44.131517 containerd[1483]: time="2025-11-08T00:02:44.131493495Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Nov 8 00:02:44.134359 containerd[1483]: time="2025-11-08T00:02:44.134315071Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 8 00:02:45.308291 containerd[1483]: time="2025-11-08T00:02:45.308222026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:02:45.310530 containerd[1483]: time="2025-11-08T00:02:45.310425664Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547937" Nov 8 00:02:45.310530 containerd[1483]: time="2025-11-08T00:02:45.310479557Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:02:45.316973 containerd[1483]: time="2025-11-08T00:02:45.316266518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:02:45.318962 containerd[1483]: time="2025-11-08T00:02:45.317724500Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.183203579s" Nov 8 00:02:45.318962 containerd[1483]: time="2025-11-08T00:02:45.317772912Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Nov 8 00:02:45.320392 containerd[1483]: 
time="2025-11-08T00:02:45.320350478Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 8 00:02:46.381974 containerd[1483]: time="2025-11-08T00:02:46.380451480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:02:46.381974 containerd[1483]: time="2025-11-08T00:02:46.381869918Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295997" Nov 8 00:02:46.381974 containerd[1483]: time="2025-11-08T00:02:46.381894563Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:02:46.385144 containerd[1483]: time="2025-11-08T00:02:46.385090361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:02:46.386552 containerd[1483]: time="2025-11-08T00:02:46.386506558Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.06611011s" Nov 8 00:02:46.386552 containerd[1483]: time="2025-11-08T00:02:46.386550608Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Nov 8 00:02:46.387136 containerd[1483]: time="2025-11-08T00:02:46.386991427Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 8 00:02:47.473457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount353795181.mount: Deactivated successfully. 
Nov 8 00:02:47.849867 containerd[1483]: time="2025-11-08T00:02:47.849263049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:02:47.850930 containerd[1483]: time="2025-11-08T00:02:47.850459745Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240132" Nov 8 00:02:47.852544 containerd[1483]: time="2025-11-08T00:02:47.851982192Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:02:47.858464 containerd[1483]: time="2025-11-08T00:02:47.858399848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:02:47.861259 containerd[1483]: time="2025-11-08T00:02:47.861158319Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.474135325s" Nov 8 00:02:47.861259 containerd[1483]: time="2025-11-08T00:02:47.861206089Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Nov 8 00:02:47.862002 containerd[1483]: time="2025-11-08T00:02:47.861974854Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 8 00:02:48.547110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1853966992.mount: Deactivated successfully. 
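
[note] Each "Pulled image" record carries both a repo tag and a repo digest; the digest is the content-addressed name, so pulling by digest (here using the kube-proxy digest logged just above) pins the exact image even if the tag is later moved:

    crictl pull registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1
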
Nov 8 00:02:49.179530 containerd[1483]: time="2025-11-08T00:02:49.179296985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:02:49.181583 containerd[1483]: time="2025-11-08T00:02:49.181438726Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209" Nov 8 00:02:49.182731 containerd[1483]: time="2025-11-08T00:02:49.182627959Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:02:49.188031 containerd[1483]: time="2025-11-08T00:02:49.186034787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:02:49.188031 containerd[1483]: time="2025-11-08T00:02:49.187269830Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.325259248s" Nov 8 00:02:49.188031 containerd[1483]: time="2025-11-08T00:02:49.187302476Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Nov 8 00:02:49.189364 containerd[1483]: time="2025-11-08T00:02:49.188675025Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:02:49.645458 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 8 00:02:49.654325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:02:49.770445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:02:49.777868 (kubelet)[2027]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:02:49.815847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2195077956.mount: Deactivated successfully. 
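
[note] kubelet.service is in a systemd restart loop at this point (restart counter 5) — the cause is the missing config file, whose fatal error appears just below. The "Referenced but unset environment variable" notice is harmless: the unit file references KUBELET_EXTRA_ARGS / KUBELET_KUBEADM_ARGS drop-in variables that nothing has populated yet. The loop is easiest to read with systemd's own tooling, e.g.:

    systemctl status kubelet.service     # restart counter and last exit status
    journalctl -u kubelet.service -n 50  # output of the failing invocations
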
Nov 8 00:02:49.823597 containerd[1483]: time="2025-11-08T00:02:49.823551946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:02:49.825596 containerd[1483]: time="2025-11-08T00:02:49.825403349Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Nov 8 00:02:49.826583 containerd[1483]: time="2025-11-08T00:02:49.826540412Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:02:49.827735 kubelet[2027]: E1108 00:02:49.827344 2027 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:02:49.830874 containerd[1483]: time="2025-11-08T00:02:49.830843136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:02:49.831596 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:02:49.831741 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:02:49.833065 containerd[1483]: time="2025-11-08T00:02:49.832715784Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 643.992789ms" Nov 8 00:02:49.833065 containerd[1483]: time="2025-11-08T00:02:49.832750591Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 8 00:02:49.833414 containerd[1483]: time="2025-11-08T00:02:49.833395237Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 8 00:02:50.456042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount560559127.mount: Deactivated successfully. 
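
[note] The run.go:72 fatal above is the expected pre-bootstrap failure: /var/lib/kubelet/config.yaml is written by kubeadm during init/join, and kubelet simply crash-loops until it exists. For orientation, a minimal hand-written file of the shape kubelet expects there is sketched below; the field names come from the kubelet.config.k8s.io/v1beta1 API, but the values are illustrative, not what kubeadm later generated on this node:

    # /var/lib/kubelet/config.yaml -- illustrative minimal KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests
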
Nov 8 00:02:51.924962 containerd[1483]: time="2025-11-08T00:02:51.924787082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:02:51.926266 containerd[1483]: time="2025-11-08T00:02:51.926067033Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465913" Nov 8 00:02:51.928953 containerd[1483]: time="2025-11-08T00:02:51.927394752Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:02:51.931121 containerd[1483]: time="2025-11-08T00:02:51.931074535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:02:51.932869 containerd[1483]: time="2025-11-08T00:02:51.932826731Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.09936036s" Nov 8 00:02:51.933017 containerd[1483]: time="2025-11-08T00:02:51.932993761Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Nov 8 00:02:57.209479 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:02:57.220449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:02:57.257444 systemd[1]: Reloading requested from client PID 2120 ('systemctl') (unit session-7.scope)... Nov 8 00:02:57.257461 systemd[1]: Reloading... Nov 8 00:02:57.383059 zram_generator::config[2163]: No configuration found. Nov 8 00:02:57.485409 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:02:57.557113 systemd[1]: Reloading finished in 299 ms. Nov 8 00:02:57.607419 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:02:57.607592 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:02:57.608148 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:02:57.616418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:02:57.737010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:02:57.752522 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:02:57.801599 kubelet[2209]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:02:57.802005 kubelet[2209]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
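
[note] Both deprecation warnings on the freshly restarted kubelet (pid 2209) point at the same migration: --container-runtime-endpoint has a first-class field in the kubelet config file, and --pod-infra-container-image goes away in 1.35 because the sandbox image is owned by the container runtime. A sketch of the two destination settings, assuming containerd's stock CRI plugin config path:

    # KubeletConfiguration: replaces --container-runtime-endpoint
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock

    # /etc/containerd/config.toml: the runtime-side sandbox image
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"

pause:3.8 matches the sandbox image this node's containerd actually uses for the static pods further down.
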
Nov 8 00:02:57.802120 kubelet[2209]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:02:57.802347 kubelet[2209]: I1108 00:02:57.802295 2209 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:02:58.691066 kubelet[2209]: I1108 00:02:58.691027 2209 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 8 00:02:58.691284 kubelet[2209]: I1108 00:02:58.691272 2209 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:02:58.691594 kubelet[2209]: I1108 00:02:58.691578 2209 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:02:58.720592 kubelet[2209]: E1108 00:02:58.720526 2209 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://46.224.42.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 46.224.42.7:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:02:58.722017 kubelet[2209]: I1108 00:02:58.721544 2209 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:02:58.734163 kubelet[2209]: E1108 00:02:58.734096 2209 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:02:58.734163 kubelet[2209]: I1108 00:02:58.734148 2209 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:02:58.737318 kubelet[2209]: I1108 00:02:58.737282 2209 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:02:58.738908 kubelet[2209]: I1108 00:02:58.738842 2209 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:02:58.739089 kubelet[2209]: I1108 00:02:58.738896 2209 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-8957f209ae","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:02:58.739230 kubelet[2209]: I1108 00:02:58.739138 2209 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:02:58.739230 kubelet[2209]: I1108 00:02:58.739149 2209 container_manager_linux.go:303] "Creating device plugin manager" Nov 8 00:02:58.739450 kubelet[2209]: I1108 00:02:58.739408 2209 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:02:58.744001 kubelet[2209]: I1108 00:02:58.743948 2209 kubelet.go:480] "Attempting to sync node with API server" Nov 8 00:02:58.744001 kubelet[2209]: I1108 00:02:58.743985 2209 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:02:58.744001 kubelet[2209]: I1108 00:02:58.744018 2209 kubelet.go:386] "Adding apiserver pod source" Nov 8 00:02:58.746153 kubelet[2209]: I1108 00:02:58.744034 2209 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:02:58.749304 kubelet[2209]: E1108 00:02:58.749263 2209 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://46.224.42.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.224.42.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:02:58.749685 kubelet[2209]: E1108 00:02:58.749646 2209 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://46.224.42.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-8957f209ae&limit=500&resourceVersion=0\": dial tcp 46.224.42.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Node" Nov 8 00:02:58.749768 kubelet[2209]: I1108 00:02:58.749749 2209 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:02:58.750632 kubelet[2209]: I1108 00:02:58.750597 2209 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:02:58.750751 kubelet[2209]: W1108 00:02:58.750735 2209 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:02:58.754847 kubelet[2209]: I1108 00:02:58.754814 2209 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:02:58.754948 kubelet[2209]: I1108 00:02:58.754864 2209 server.go:1289] "Started kubelet" Nov 8 00:02:58.755690 kubelet[2209]: I1108 00:02:58.755652 2209 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:02:58.756753 kubelet[2209]: I1108 00:02:58.756734 2209 server.go:317] "Adding debug handlers to kubelet server" Nov 8 00:02:58.757611 kubelet[2209]: I1108 00:02:58.757545 2209 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:02:58.757914 kubelet[2209]: I1108 00:02:58.757883 2209 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:02:58.760464 kubelet[2209]: E1108 00:02:58.758044 2209 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://46.224.42.7:6443/api/v1/namespaces/default/events\": dial tcp 46.224.42.7:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-8957f209ae.1875df24e5ee578f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-8957f209ae,UID:ci-4081-3-6-n-8957f209ae,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-8957f209ae,},FirstTimestamp:2025-11-08 00:02:58.754836367 +0000 UTC m=+0.996669724,LastTimestamp:2025-11-08 00:02:58.754836367 +0000 UTC m=+0.996669724,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-8957f209ae,}" Nov 8 00:02:58.760822 kubelet[2209]: I1108 00:02:58.760791 2209 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:02:58.762071 kubelet[2209]: I1108 00:02:58.762049 2209 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:02:58.769228 kubelet[2209]: E1108 00:02:58.769151 2209 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:02:58.772386 kubelet[2209]: E1108 00:02:58.769522 2209 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-8957f209ae\" not found" Nov 8 00:02:58.772386 kubelet[2209]: I1108 00:02:58.769551 2209 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:02:58.772386 kubelet[2209]: I1108 00:02:58.769747 2209 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:02:58.772386 kubelet[2209]: I1108 00:02:58.769811 2209 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:02:58.772386 kubelet[2209]: E1108 00:02:58.770234 2209 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://46.224.42.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.224.42.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:02:58.774563 kubelet[2209]: I1108 00:02:58.774535 2209 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:02:58.774808 kubelet[2209]: I1108 00:02:58.774787 2209 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:02:58.775816 kubelet[2209]: E1108 00:02:58.775787 2209 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.42.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-8957f209ae?timeout=10s\": dial tcp 46.224.42.7:6443: connect: connection refused" interval="200ms" Nov 8 00:02:58.776518 kubelet[2209]: I1108 00:02:58.776496 2209 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:02:58.786432 kubelet[2209]: I1108 00:02:58.786375 2209 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 8 00:02:58.787605 kubelet[2209]: I1108 00:02:58.787584 2209 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 8 00:02:58.787719 kubelet[2209]: I1108 00:02:58.787709 2209 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 8 00:02:58.787784 kubelet[2209]: I1108 00:02:58.787772 2209 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
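
[note] Every "dial tcp 46.224.42.7:6443: connect: connection refused" in this stretch is the control-plane bootstrap chicken-and-egg: kubelet needs the API server for its watches, leases, and events, but the API server is itself one of the static pods kubelet is about to start, so the client-go reflectors back off and retry. (The crio factory failure is likewise expected — no CRI-O socket exists on this node.) Once the apiserver container is up, a probe from the node answers; /healthz is readable without credentials under default RBAC, though that can be locked down, so treat this as a sketch:

    curl -k https://46.224.42.7:6443/healthz
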
Nov 8 00:02:58.787826 kubelet[2209]: I1108 00:02:58.787819 2209 kubelet.go:2436] "Starting kubelet main sync loop" Nov 8 00:02:58.787927 kubelet[2209]: E1108 00:02:58.787910 2209 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:02:58.799231 kubelet[2209]: E1108 00:02:58.799192 2209 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://46.224.42.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.224.42.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:02:58.802804 kubelet[2209]: I1108 00:02:58.802515 2209 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:02:58.802804 kubelet[2209]: I1108 00:02:58.802534 2209 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:02:58.802804 kubelet[2209]: I1108 00:02:58.802554 2209 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:02:58.805826 kubelet[2209]: I1108 00:02:58.805554 2209 policy_none.go:49] "None policy: Start" Nov 8 00:02:58.805826 kubelet[2209]: I1108 00:02:58.805581 2209 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:02:58.805826 kubelet[2209]: I1108 00:02:58.805594 2209 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:02:58.812431 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 8 00:02:58.831045 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 8 00:02:58.846022 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 8 00:02:58.847747 kubelet[2209]: E1108 00:02:58.847698 2209 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:02:58.847974 kubelet[2209]: I1108 00:02:58.847953 2209 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:02:58.848015 kubelet[2209]: I1108 00:02:58.847974 2209 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:02:58.848344 kubelet[2209]: I1108 00:02:58.848322 2209 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:02:58.849620 kubelet[2209]: E1108 00:02:58.849599 2209 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:02:58.850569 kubelet[2209]: E1108 00:02:58.850538 2209 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-8957f209ae\" not found" Nov 8 00:02:58.901820 systemd[1]: Created slice kubepods-burstable-pod1cd77aa55f3018b1df86b5f4c7500b7d.slice - libcontainer container kubepods-burstable-pod1cd77aa55f3018b1df86b5f4c7500b7d.slice. Nov 8 00:02:58.909214 kubelet[2209]: E1108 00:02:58.909106 2209 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8957f209ae\" not found" node="ci-4081-3-6-n-8957f209ae" Nov 8 00:02:58.913447 systemd[1]: Created slice kubepods-burstable-poda03cffa0e793210445fa38a3b0eb0333.slice - libcontainer container kubepods-burstable-poda03cffa0e793210445fa38a3b0eb0333.slice. 
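
[note] The "Created slice" lines are kubelet's systemd cgroup driver (CgroupDriver "systemd" in the node config above) laying out its hierarchy: one slice per QoS class under kubepods.slice, then one pod<UID>.slice per pod beneath its class. The tree and its resource accounting are visible with ordinary systemd tooling, e.g.:

    systemd-cgls --unit kubepods.slice
    systemctl show -p MemoryCurrent kubepods-burstable.slice
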
Nov 8 00:02:58.925106 kubelet[2209]: E1108 00:02:58.924623 2209 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8957f209ae\" not found" node="ci-4081-3-6-n-8957f209ae" Nov 8 00:02:58.929265 systemd[1]: Created slice kubepods-burstable-poddaa5bc9a327849fdfec92ba6b2120677.slice - libcontainer container kubepods-burstable-poddaa5bc9a327849fdfec92ba6b2120677.slice. Nov 8 00:02:58.931493 kubelet[2209]: E1108 00:02:58.931467 2209 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8957f209ae\" not found" node="ci-4081-3-6-n-8957f209ae" Nov 8 00:02:58.952141 kubelet[2209]: I1108 00:02:58.951223 2209 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-8957f209ae" Nov 8 00:02:58.952141 kubelet[2209]: E1108 00:02:58.951827 2209 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.224.42.7:6443/api/v1/nodes\": dial tcp 46.224.42.7:6443: connect: connection refused" node="ci-4081-3-6-n-8957f209ae" Nov 8 00:02:58.971456 kubelet[2209]: I1108 00:02:58.970816 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/daa5bc9a327849fdfec92ba6b2120677-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-8957f209ae\" (UID: \"daa5bc9a327849fdfec92ba6b2120677\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-8957f209ae" Nov 8 00:02:58.971456 kubelet[2209]: I1108 00:02:58.970903 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1cd77aa55f3018b1df86b5f4c7500b7d-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-8957f209ae\" (UID: \"1cd77aa55f3018b1df86b5f4c7500b7d\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8957f209ae" Nov 8 00:02:58.971456 kubelet[2209]: I1108 00:02:58.971005 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1cd77aa55f3018b1df86b5f4c7500b7d-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-8957f209ae\" (UID: \"1cd77aa55f3018b1df86b5f4c7500b7d\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8957f209ae" Nov 8 00:02:58.971456 kubelet[2209]: I1108 00:02:58.971053 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1cd77aa55f3018b1df86b5f4c7500b7d-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-8957f209ae\" (UID: \"1cd77aa55f3018b1df86b5f4c7500b7d\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8957f209ae" Nov 8 00:02:58.971456 kubelet[2209]: I1108 00:02:58.971091 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1cd77aa55f3018b1df86b5f4c7500b7d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-8957f209ae\" (UID: \"1cd77aa55f3018b1df86b5f4c7500b7d\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8957f209ae" Nov 8 00:02:58.971873 kubelet[2209]: I1108 00:02:58.971127 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a03cffa0e793210445fa38a3b0eb0333-kubeconfig\") pod 
\"kube-scheduler-ci-4081-3-6-n-8957f209ae\" (UID: \"a03cffa0e793210445fa38a3b0eb0333\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-8957f209ae" Nov 8 00:02:58.971873 kubelet[2209]: I1108 00:02:58.971159 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/daa5bc9a327849fdfec92ba6b2120677-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-8957f209ae\" (UID: \"daa5bc9a327849fdfec92ba6b2120677\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-8957f209ae" Nov 8 00:02:58.971873 kubelet[2209]: I1108 00:02:58.971247 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/daa5bc9a327849fdfec92ba6b2120677-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-8957f209ae\" (UID: \"daa5bc9a327849fdfec92ba6b2120677\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-8957f209ae" Nov 8 00:02:58.971873 kubelet[2209]: I1108 00:02:58.971284 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1cd77aa55f3018b1df86b5f4c7500b7d-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-8957f209ae\" (UID: \"1cd77aa55f3018b1df86b5f4c7500b7d\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8957f209ae" Nov 8 00:02:58.977287 kubelet[2209]: E1108 00:02:58.977172 2209 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.42.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-8957f209ae?timeout=10s\": dial tcp 46.224.42.7:6443: connect: connection refused" interval="400ms" Nov 8 00:02:59.155287 kubelet[2209]: I1108 00:02:59.155202 2209 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-8957f209ae" Nov 8 00:02:59.155803 kubelet[2209]: E1108 00:02:59.155770 2209 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.224.42.7:6443/api/v1/nodes\": dial tcp 46.224.42.7:6443: connect: connection refused" node="ci-4081-3-6-n-8957f209ae" Nov 8 00:02:59.211715 containerd[1483]: time="2025-11-08T00:02:59.211597444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-8957f209ae,Uid:1cd77aa55f3018b1df86b5f4c7500b7d,Namespace:kube-system,Attempt:0,}" Nov 8 00:02:59.225691 containerd[1483]: time="2025-11-08T00:02:59.225596791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-8957f209ae,Uid:a03cffa0e793210445fa38a3b0eb0333,Namespace:kube-system,Attempt:0,}" Nov 8 00:02:59.233697 containerd[1483]: time="2025-11-08T00:02:59.233482763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-8957f209ae,Uid:daa5bc9a327849fdfec92ba6b2120677,Namespace:kube-system,Attempt:0,}" Nov 8 00:02:59.377899 kubelet[2209]: E1108 00:02:59.377834 2209 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.42.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-8957f209ae?timeout=10s\": dial tcp 46.224.42.7:6443: connect: connection refused" interval="800ms" Nov 8 00:02:59.559905 kubelet[2209]: I1108 00:02:59.559815 2209 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-8957f209ae" Nov 8 00:02:59.560747 kubelet[2209]: E1108 00:02:59.560679 2209 kubelet_node_status.go:107] "Unable to 
register node with API server" err="Post \"https://46.224.42.7:6443/api/v1/nodes\": dial tcp 46.224.42.7:6443: connect: connection refused" node="ci-4081-3-6-n-8957f209ae" Nov 8 00:02:59.611134 kubelet[2209]: E1108 00:02:59.611097 2209 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://46.224.42.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.224.42.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:02:59.693975 kubelet[2209]: E1108 00:02:59.693683 2209 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://46.224.42.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.224.42.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:02:59.776206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount411562897.mount: Deactivated successfully. Nov 8 00:02:59.784319 containerd[1483]: time="2025-11-08T00:02:59.784244256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:02:59.785368 containerd[1483]: time="2025-11-08T00:02:59.785267312Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:02:59.786693 containerd[1483]: time="2025-11-08T00:02:59.786549243Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:02:59.788209 containerd[1483]: time="2025-11-08T00:02:59.788157538Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Nov 8 00:02:59.790140 containerd[1483]: time="2025-11-08T00:02:59.790109638Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:02:59.791571 containerd[1483]: time="2025-11-08T00:02:59.791521226Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:02:59.792737 containerd[1483]: time="2025-11-08T00:02:59.792677660Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:02:59.794544 containerd[1483]: time="2025-11-08T00:02:59.794513665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:02:59.796965 containerd[1483]: time="2025-11-08T00:02:59.796051110Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 583.834064ms" Nov 8 00:02:59.797260 containerd[1483]: time="2025-11-08T00:02:59.797178661Z" 
level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 571.498698ms" Nov 8 00:02:59.801565 containerd[1483]: time="2025-11-08T00:02:59.801409425Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 567.851413ms" Nov 8 00:02:59.835827 kubelet[2209]: E1108 00:02:59.835717 2209 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://46.224.42.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-8957f209ae&limit=500&resourceVersion=0\": dial tcp 46.224.42.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:02:59.916189 containerd[1483]: time="2025-11-08T00:02:59.916008429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:02:59.916358 containerd[1483]: time="2025-11-08T00:02:59.916163689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:02:59.916358 containerd[1483]: time="2025-11-08T00:02:59.916279145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:02:59.918126 containerd[1483]: time="2025-11-08T00:02:59.918071744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:02:59.922880 kubelet[2209]: E1108 00:02:59.922656 2209 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://46.224.42.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.224.42.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:02:59.925549 containerd[1483]: time="2025-11-08T00:02:59.925418044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:02:59.925549 containerd[1483]: time="2025-11-08T00:02:59.925498614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:02:59.925785 containerd[1483]: time="2025-11-08T00:02:59.925525738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:02:59.926997 containerd[1483]: time="2025-11-08T00:02:59.925778252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:02:59.928497 containerd[1483]: time="2025-11-08T00:02:59.928378638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:02:59.928497 containerd[1483]: time="2025-11-08T00:02:59.928456729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:02:59.928497 containerd[1483]: time="2025-11-08T00:02:59.928471971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:02:59.928728 containerd[1483]: time="2025-11-08T00:02:59.928565143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:02:59.952179 systemd[1]: Started cri-containerd-db83a925240e96955153a789ba3d942d8397eba7c91ce5c6bb32acdcbc6f10f6.scope - libcontainer container db83a925240e96955153a789ba3d942d8397eba7c91ce5c6bb32acdcbc6f10f6. Nov 8 00:02:59.953869 systemd[1]: Started cri-containerd-faf9eb39aee186cc25f8eaf5bbe6a34d4e653a9eeda44d4ba025f7c74c366f86.scope - libcontainer container faf9eb39aee186cc25f8eaf5bbe6a34d4e653a9eeda44d4ba025f7c74c366f86. Nov 8 00:02:59.961899 systemd[1]: Started cri-containerd-2ed7a1fc08ae334af4e698b81176798da48ab754051d8242d0cd49280dc42354.scope - libcontainer container 2ed7a1fc08ae334af4e698b81176798da48ab754051d8242d0cd49280dc42354. Nov 8 00:03:00.029209 containerd[1483]: time="2025-11-08T00:03:00.028997656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-8957f209ae,Uid:1cd77aa55f3018b1df86b5f4c7500b7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"db83a925240e96955153a789ba3d942d8397eba7c91ce5c6bb32acdcbc6f10f6\"" Nov 8 00:03:00.036057 containerd[1483]: time="2025-11-08T00:03:00.035968316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-8957f209ae,Uid:daa5bc9a327849fdfec92ba6b2120677,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ed7a1fc08ae334af4e698b81176798da48ab754051d8242d0cd49280dc42354\"" Nov 8 00:03:00.041359 containerd[1483]: time="2025-11-08T00:03:00.041054732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-8957f209ae,Uid:a03cffa0e793210445fa38a3b0eb0333,Namespace:kube-system,Attempt:0,} returns sandbox id \"faf9eb39aee186cc25f8eaf5bbe6a34d4e653a9eeda44d4ba025f7c74c366f86\"" Nov 8 00:03:00.041661 containerd[1483]: time="2025-11-08T00:03:00.041529833Z" level=info msg="CreateContainer within sandbox \"db83a925240e96955153a789ba3d942d8397eba7c91ce5c6bb32acdcbc6f10f6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:03:00.042605 containerd[1483]: time="2025-11-08T00:03:00.042510920Z" level=info msg="CreateContainer within sandbox \"2ed7a1fc08ae334af4e698b81176798da48ab754051d8242d0cd49280dc42354\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:03:00.046604 containerd[1483]: time="2025-11-08T00:03:00.046565243Z" level=info msg="CreateContainer within sandbox \"faf9eb39aee186cc25f8eaf5bbe6a34d4e653a9eeda44d4ba025f7c74c366f86\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:03:00.072694 containerd[1483]: time="2025-11-08T00:03:00.072431501Z" level=info msg="CreateContainer within sandbox \"db83a925240e96955153a789ba3d942d8397eba7c91ce5c6bb32acdcbc6f10f6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"94cc9148e41befea489173a3c6dbee711839d43efab857f525ac6adc642c6f28\"" Nov 8 00:03:00.074835 containerd[1483]: 
time="2025-11-08T00:03:00.074781564Z" level=info msg="CreateContainer within sandbox \"faf9eb39aee186cc25f8eaf5bbe6a34d4e653a9eeda44d4ba025f7c74c366f86\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b9317bec38c72b6fdbe412b7c8ff0704e4437cf8da1a806ae03bc2481f8096d0\"" Nov 8 00:03:00.075115 containerd[1483]: time="2025-11-08T00:03:00.075093364Z" level=info msg="StartContainer for \"94cc9148e41befea489173a3c6dbee711839d43efab857f525ac6adc642c6f28\"" Nov 8 00:03:00.076273 containerd[1483]: time="2025-11-08T00:03:00.076143219Z" level=info msg="CreateContainer within sandbox \"2ed7a1fc08ae334af4e698b81176798da48ab754051d8242d0cd49280dc42354\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"33a4d63ee09e2dec761c2b44e69fb95ffcb0b40b08529a41e3a4e5006944dbde\"" Nov 8 00:03:00.076893 containerd[1483]: time="2025-11-08T00:03:00.076866993Z" level=info msg="StartContainer for \"33a4d63ee09e2dec761c2b44e69fb95ffcb0b40b08529a41e3a4e5006944dbde\"" Nov 8 00:03:00.085142 containerd[1483]: time="2025-11-08T00:03:00.085101375Z" level=info msg="StartContainer for \"b9317bec38c72b6fdbe412b7c8ff0704e4437cf8da1a806ae03bc2481f8096d0\"" Nov 8 00:03:00.108662 systemd[1]: Started cri-containerd-94cc9148e41befea489173a3c6dbee711839d43efab857f525ac6adc642c6f28.scope - libcontainer container 94cc9148e41befea489173a3c6dbee711839d43efab857f525ac6adc642c6f28. Nov 8 00:03:00.122155 systemd[1]: Started cri-containerd-33a4d63ee09e2dec761c2b44e69fb95ffcb0b40b08529a41e3a4e5006944dbde.scope - libcontainer container 33a4d63ee09e2dec761c2b44e69fb95ffcb0b40b08529a41e3a4e5006944dbde. Nov 8 00:03:00.133189 systemd[1]: Started cri-containerd-b9317bec38c72b6fdbe412b7c8ff0704e4437cf8da1a806ae03bc2481f8096d0.scope - libcontainer container b9317bec38c72b6fdbe412b7c8ff0704e4437cf8da1a806ae03bc2481f8096d0. 
Nov 8 00:03:00.176204 containerd[1483]: time="2025-11-08T00:03:00.176159805Z" level=info msg="StartContainer for \"94cc9148e41befea489173a3c6dbee711839d43efab857f525ac6adc642c6f28\" returns successfully" Nov 8 00:03:00.180080 kubelet[2209]: E1108 00:03:00.179882 2209 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.42.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-8957f209ae?timeout=10s\": dial tcp 46.224.42.7:6443: connect: connection refused" interval="1.6s" Nov 8 00:03:00.188242 containerd[1483]: time="2025-11-08T00:03:00.188195798Z" level=info msg="StartContainer for \"33a4d63ee09e2dec761c2b44e69fb95ffcb0b40b08529a41e3a4e5006944dbde\" returns successfully" Nov 8 00:03:00.217194 containerd[1483]: time="2025-11-08T00:03:00.217098248Z" level=info msg="StartContainer for \"b9317bec38c72b6fdbe412b7c8ff0704e4437cf8da1a806ae03bc2481f8096d0\" returns successfully" Nov 8 00:03:00.362766 kubelet[2209]: I1108 00:03:00.362630 2209 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:00.811249 kubelet[2209]: E1108 00:03:00.810839 2209 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8957f209ae\" not found" node="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:00.816025 kubelet[2209]: E1108 00:03:00.814660 2209 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8957f209ae\" not found" node="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:00.816025 kubelet[2209]: E1108 00:03:00.814970 2209 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8957f209ae\" not found" node="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:01.819731 kubelet[2209]: E1108 00:03:01.819234 2209 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8957f209ae\" not found" node="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:01.819731 kubelet[2209]: E1108 00:03:01.819504 2209 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8957f209ae\" not found" node="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:02.393961 kubelet[2209]: I1108 00:03:02.393346 2209 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:02.393961 kubelet[2209]: E1108 00:03:02.393452 2209 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-8957f209ae\": node \"ci-4081-3-6-n-8957f209ae\" not found" Nov 8 00:03:02.524912 kubelet[2209]: E1108 00:03:02.524794 2209 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-8957f209ae\" not found" Nov 8 00:03:02.625866 kubelet[2209]: E1108 00:03:02.625821 2209 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-8957f209ae\" not found" Nov 8 00:03:02.726112 kubelet[2209]: E1108 00:03:02.726071 2209 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-8957f209ae\" not found" Nov 8 00:03:02.826774 kubelet[2209]: E1108 00:03:02.826737 2209 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-8957f209ae\" not found" Nov 8 00:03:02.927503 kubelet[2209]: E1108 00:03:02.927453 2209 kubelet_node_status.go:466] "Error 
getting the current node from lister" err="node \"ci-4081-3-6-n-8957f209ae\" not found" Nov 8 00:03:03.070739 kubelet[2209]: I1108 00:03:03.070441 2209 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8957f209ae" Nov 8 00:03:03.081194 kubelet[2209]: E1108 00:03:03.081154 2209 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-8957f209ae\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8957f209ae" Nov 8 00:03:03.081194 kubelet[2209]: I1108 00:03:03.081188 2209 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8957f209ae" Nov 8 00:03:03.083925 kubelet[2209]: E1108 00:03:03.083668 2209 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-8957f209ae\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8957f209ae" Nov 8 00:03:03.083925 kubelet[2209]: I1108 00:03:03.083743 2209 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-8957f209ae" Nov 8 00:03:03.089950 kubelet[2209]: E1108 00:03:03.088045 2209 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-8957f209ae\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-8957f209ae" Nov 8 00:03:03.747759 kubelet[2209]: I1108 00:03:03.747482 2209 apiserver.go:52] "Watching apiserver" Nov 8 00:03:03.770544 kubelet[2209]: I1108 00:03:03.770492 2209 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:03:04.705721 systemd[1]: Reloading requested from client PID 2492 ('systemctl') (unit session-7.scope)... Nov 8 00:03:04.706085 systemd[1]: Reloading... Nov 8 00:03:04.795984 zram_generator::config[2532]: No configuration found. Nov 8 00:03:04.906414 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:03:04.995072 systemd[1]: Reloading finished in 288 ms. Nov 8 00:03:05.040470 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:03:05.053453 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:03:05.053812 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:03:05.053888 systemd[1]: kubelet.service: Consumed 1.383s CPU time, 128.1M memory peak, 0B memory swap peak. Nov 8 00:03:05.060323 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:03:05.196648 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:03:05.207877 (kubelet)[2577]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:03:05.267794 kubelet[2577]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:03:05.267794 kubelet[2577]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
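
[note] The "no PriorityClass with name system-node-critical" failures above are a transient of the same bootstrap race: system-node-critical and system-cluster-critical are built-in PriorityClasses that the API server creates as part of its defaults once it is serving, so kubelet's mirror-pod retries succeed shortly afterwards. Confirmed after the fact with:

    kubectl get priorityclasses
    # NAME                      VALUE        GLOBAL-DEFAULT   (output abridged)
    # system-cluster-critical   2000000000   false
    # system-node-critical      2000001000   false
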
Nov 8 00:03:05.267794 kubelet[2577]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:03:05.268197 kubelet[2577]: I1108 00:03:05.268074 2577 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:03:05.279405 kubelet[2577]: I1108 00:03:05.279323 2577 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 8 00:03:05.279405 kubelet[2577]: I1108 00:03:05.279373 2577 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:03:05.279696 kubelet[2577]: I1108 00:03:05.279665 2577 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:03:05.281397 kubelet[2577]: I1108 00:03:05.281350 2577 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 8 00:03:05.284931 kubelet[2577]: I1108 00:03:05.284242 2577 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:03:05.289819 kubelet[2577]: E1108 00:03:05.289783 2577 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:03:05.290138 kubelet[2577]: I1108 00:03:05.290118 2577 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:03:05.292793 kubelet[2577]: I1108 00:03:05.292760 2577 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:03:05.293563 kubelet[2577]: I1108 00:03:05.293263 2577 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:03:05.293563 kubelet[2577]: I1108 00:03:05.293294 2577 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-8957f209ae","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:03:05.293563 kubelet[2577]: I1108 00:03:05.293469 2577 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:03:05.293563 kubelet[2577]: I1108 00:03:05.293483 2577 container_manager_linux.go:303] "Creating device plugin manager" Nov 8 00:03:05.293563 kubelet[2577]: I1108 00:03:05.293532 2577 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:03:05.294028 kubelet[2577]: I1108 00:03:05.294008 2577 kubelet.go:480] "Attempting to sync node with API server" Nov 8 00:03:05.294115 kubelet[2577]: I1108 00:03:05.294104 2577 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:03:05.294186 kubelet[2577]: I1108 00:03:05.294177 2577 kubelet.go:386] "Adding apiserver pod source" Nov 8 00:03:05.294254 kubelet[2577]: I1108 00:03:05.294245 2577 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:03:05.300799 kubelet[2577]: I1108 00:03:05.300730 2577 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:03:05.301763 kubelet[2577]: I1108 00:03:05.301730 2577 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:03:05.307529 kubelet[2577]: I1108 00:03:05.306137 2577 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:03:05.307529 kubelet[2577]: I1108 00:03:05.306220 2577 server.go:1289] "Started kubelet" Nov 8 00:03:05.310433 kubelet[2577]: I1108 00:03:05.310404 2577 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:03:05.311861 kubelet[2577]: I1108 00:03:05.311814 
2577 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:03:05.333549 kubelet[2577]: I1108 00:03:05.333503 2577 server.go:317] "Adding debug handlers to kubelet server" Nov 8 00:03:05.335907 kubelet[2577]: I1108 00:03:05.317375 2577 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:03:05.343092 kubelet[2577]: I1108 00:03:05.315661 2577 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:03:05.343344 kubelet[2577]: I1108 00:03:05.317387 2577 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:03:05.343397 kubelet[2577]: I1108 00:03:05.326892 2577 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:03:05.344105 kubelet[2577]: I1108 00:03:05.343623 2577 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:03:05.344238 kubelet[2577]: I1108 00:03:05.328034 2577 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:03:05.344418 kubelet[2577]: I1108 00:03:05.344395 2577 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:03:05.346379 kubelet[2577]: I1108 00:03:05.346363 2577 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:03:05.349997 kubelet[2577]: E1108 00:03:05.317418 2577 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-8957f209ae\" not found" Nov 8 00:03:05.351298 kubelet[2577]: E1108 00:03:05.351263 2577 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:03:05.358305 kubelet[2577]: I1108 00:03:05.358271 2577 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 8 00:03:05.361082 kubelet[2577]: I1108 00:03:05.361050 2577 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 8 00:03:05.361223 kubelet[2577]: I1108 00:03:05.361213 2577 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 8 00:03:05.361288 kubelet[2577]: I1108 00:03:05.361279 2577 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
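
[note] Unlike the first kubelet instance, this one (pid 2577) starts with "Loading cert/key pair from a file" pointing at kubelet-client-current.pem: the TLS bootstrap has already produced a signed client certificate, and client rotation will renew it in the background. Its subject and validity window can be read directly on the node:

    openssl x509 -noout -subject -dates -in /var/lib/kubelet/pki/kubelet-client-current.pem
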
Nov 8 00:03:05.361338 kubelet[2577]: I1108 00:03:05.361330 2577 kubelet.go:2436] "Starting kubelet main sync loop" Nov 8 00:03:05.361474 kubelet[2577]: E1108 00:03:05.361454 2577 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:03:05.374114 kubelet[2577]: I1108 00:03:05.374070 2577 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:03:05.436146 kubelet[2577]: I1108 00:03:05.436111 2577 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:03:05.436146 kubelet[2577]: I1108 00:03:05.436132 2577 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:03:05.436146 kubelet[2577]: I1108 00:03:05.436156 2577 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:03:05.436336 kubelet[2577]: I1108 00:03:05.436292 2577 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:03:05.436336 kubelet[2577]: I1108 00:03:05.436302 2577 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:03:05.436336 kubelet[2577]: I1108 00:03:05.436319 2577 policy_none.go:49] "None policy: Start" Nov 8 00:03:05.436336 kubelet[2577]: I1108 00:03:05.436327 2577 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:03:05.436336 kubelet[2577]: I1108 00:03:05.436335 2577 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:03:05.436478 kubelet[2577]: I1108 00:03:05.436419 2577 state_mem.go:75] "Updated machine memory state" Nov 8 00:03:05.440735 kubelet[2577]: E1108 00:03:05.440659 2577 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:03:05.440903 kubelet[2577]: I1108 00:03:05.440838 2577 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:03:05.440903 kubelet[2577]: I1108 00:03:05.440850 2577 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:03:05.443144 kubelet[2577]: I1108 00:03:05.443057 2577 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:03:05.443783 kubelet[2577]: E1108 00:03:05.443552 2577 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:03:05.463199 kubelet[2577]: I1108 00:03:05.462757 2577 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-8957f209ae" Nov 8 00:03:05.463640 kubelet[2577]: I1108 00:03:05.463624 2577 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8957f209ae" Nov 8 00:03:05.464002 kubelet[2577]: I1108 00:03:05.463988 2577 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8957f209ae" Nov 8 00:03:05.547548 kubelet[2577]: I1108 00:03:05.546232 2577 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:05.549020 kubelet[2577]: I1108 00:03:05.548137 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1cd77aa55f3018b1df86b5f4c7500b7d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-8957f209ae\" (UID: \"1cd77aa55f3018b1df86b5f4c7500b7d\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8957f209ae" Nov 8 00:03:05.549020 kubelet[2577]: I1108 00:03:05.548184 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a03cffa0e793210445fa38a3b0eb0333-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-8957f209ae\" (UID: \"a03cffa0e793210445fa38a3b0eb0333\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-8957f209ae" Nov 8 00:03:05.549020 kubelet[2577]: I1108 00:03:05.548206 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/daa5bc9a327849fdfec92ba6b2120677-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-8957f209ae\" (UID: \"daa5bc9a327849fdfec92ba6b2120677\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-8957f209ae" Nov 8 00:03:05.549020 kubelet[2577]: I1108 00:03:05.548224 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1cd77aa55f3018b1df86b5f4c7500b7d-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-8957f209ae\" (UID: \"1cd77aa55f3018b1df86b5f4c7500b7d\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8957f209ae" Nov 8 00:03:05.549020 kubelet[2577]: I1108 00:03:05.548242 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/daa5bc9a327849fdfec92ba6b2120677-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-8957f209ae\" (UID: \"daa5bc9a327849fdfec92ba6b2120677\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-8957f209ae" Nov 8 00:03:05.549223 kubelet[2577]: I1108 00:03:05.548257 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/daa5bc9a327849fdfec92ba6b2120677-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-8957f209ae\" (UID: \"daa5bc9a327849fdfec92ba6b2120677\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-8957f209ae" Nov 8 00:03:05.549223 kubelet[2577]: I1108 00:03:05.548274 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/1cd77aa55f3018b1df86b5f4c7500b7d-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-8957f209ae\" (UID: \"1cd77aa55f3018b1df86b5f4c7500b7d\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8957f209ae" Nov 8 00:03:05.549223 kubelet[2577]: I1108 00:03:05.548290 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1cd77aa55f3018b1df86b5f4c7500b7d-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-8957f209ae\" (UID: \"1cd77aa55f3018b1df86b5f4c7500b7d\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8957f209ae" Nov 8 00:03:05.549223 kubelet[2577]: I1108 00:03:05.548307 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1cd77aa55f3018b1df86b5f4c7500b7d-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-8957f209ae\" (UID: \"1cd77aa55f3018b1df86b5f4c7500b7d\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8957f209ae" Nov 8 00:03:05.562204 kubelet[2577]: I1108 00:03:05.562166 2577 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:05.562379 kubelet[2577]: I1108 00:03:05.562259 2577 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:06.299698 kubelet[2577]: I1108 00:03:06.299640 2577 apiserver.go:52] "Watching apiserver" Nov 8 00:03:06.344737 kubelet[2577]: I1108 00:03:06.344611 2577 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:03:06.450199 kubelet[2577]: I1108 00:03:06.450015 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-8957f209ae" podStartSLOduration=1.449992159 podStartE2EDuration="1.449992159s" podCreationTimestamp="2025-11-08 00:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:03:06.436440013 +0000 UTC m=+1.222017946" watchObservedRunningTime="2025-11-08 00:03:06.449992159 +0000 UTC m=+1.235570052" Nov 8 00:03:06.464726 kubelet[2577]: I1108 00:03:06.464618 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8957f209ae" podStartSLOduration=1.46460078 podStartE2EDuration="1.46460078s" podCreationTimestamp="2025-11-08 00:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:03:06.463452095 +0000 UTC m=+1.249030028" watchObservedRunningTime="2025-11-08 00:03:06.46460078 +0000 UTC m=+1.250178673" Nov 8 00:03:06.464726 kubelet[2577]: I1108 00:03:06.464695 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8957f209ae" podStartSLOduration=1.464691349 podStartE2EDuration="1.464691349s" podCreationTimestamp="2025-11-08 00:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:03:06.447987583 +0000 UTC m=+1.233565476" watchObservedRunningTime="2025-11-08 00:03:06.464691349 +0000 UTC m=+1.250269242" Nov 8 00:03:12.457782 kubelet[2577]: I1108 00:03:12.457384 2577 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" 
CIDR="192.168.0.0/24" Nov 8 00:03:12.459412 containerd[1483]: time="2025-11-08T00:03:12.459367510Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 8 00:03:12.459985 kubelet[2577]: I1108 00:03:12.459596 2577 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:03:13.646425 systemd[1]: Created slice kubepods-besteffort-pode91b386a_27b1_445a_9ac6_f764450d0c94.slice - libcontainer container kubepods-besteffort-pode91b386a_27b1_445a_9ac6_f764450d0c94.slice. Nov 8 00:03:13.707153 kubelet[2577]: I1108 00:03:13.706970 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e91b386a-27b1-445a-9ac6-f764450d0c94-kube-proxy\") pod \"kube-proxy-z47bf\" (UID: \"e91b386a-27b1-445a-9ac6-f764450d0c94\") " pod="kube-system/kube-proxy-z47bf" Nov 8 00:03:13.707153 kubelet[2577]: I1108 00:03:13.707024 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e91b386a-27b1-445a-9ac6-f764450d0c94-xtables-lock\") pod \"kube-proxy-z47bf\" (UID: \"e91b386a-27b1-445a-9ac6-f764450d0c94\") " pod="kube-system/kube-proxy-z47bf" Nov 8 00:03:13.707153 kubelet[2577]: I1108 00:03:13.707048 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e91b386a-27b1-445a-9ac6-f764450d0c94-lib-modules\") pod \"kube-proxy-z47bf\" (UID: \"e91b386a-27b1-445a-9ac6-f764450d0c94\") " pod="kube-system/kube-proxy-z47bf" Nov 8 00:03:13.707153 kubelet[2577]: I1108 00:03:13.707066 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlm42\" (UniqueName: \"kubernetes.io/projected/e91b386a-27b1-445a-9ac6-f764450d0c94-kube-api-access-wlm42\") pod \"kube-proxy-z47bf\" (UID: \"e91b386a-27b1-445a-9ac6-f764450d0c94\") " pod="kube-system/kube-proxy-z47bf" Nov 8 00:03:13.728721 systemd[1]: Created slice kubepods-besteffort-pod0fbaf294_4258_4835_9934_dab81661d270.slice - libcontainer container kubepods-besteffort-pod0fbaf294_4258_4835_9934_dab81661d270.slice. Nov 8 00:03:13.808109 kubelet[2577]: I1108 00:03:13.807977 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0fbaf294-4258-4835-9934-dab81661d270-var-lib-calico\") pod \"tigera-operator-7dcd859c48-hd7px\" (UID: \"0fbaf294-4258-4835-9934-dab81661d270\") " pod="tigera-operator/tigera-operator-7dcd859c48-hd7px" Nov 8 00:03:13.808312 kubelet[2577]: I1108 00:03:13.808145 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57qf7\" (UniqueName: \"kubernetes.io/projected/0fbaf294-4258-4835-9934-dab81661d270-kube-api-access-57qf7\") pod \"tigera-operator-7dcd859c48-hd7px\" (UID: \"0fbaf294-4258-4835-9934-dab81661d270\") " pod="tigera-operator/tigera-operator-7dcd859c48-hd7px" Nov 8 00:03:13.959231 containerd[1483]: time="2025-11-08T00:03:13.958831527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z47bf,Uid:e91b386a-27b1-445a-9ac6-f764450d0c94,Namespace:kube-system,Attempt:0,}" Nov 8 00:03:13.990576 containerd[1483]: time="2025-11-08T00:03:13.990116409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:03:13.990576 containerd[1483]: time="2025-11-08T00:03:13.990186176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:03:13.990576 containerd[1483]: time="2025-11-08T00:03:13.990215859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:03:13.991215 containerd[1483]: time="2025-11-08T00:03:13.990632057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:03:14.020286 systemd[1]: Started cri-containerd-98143267f658e92bf7b0e1bbaa366f57e79b3fbb43f226409d650361ad363160.scope - libcontainer container 98143267f658e92bf7b0e1bbaa366f57e79b3fbb43f226409d650361ad363160. Nov 8 00:03:14.037895 containerd[1483]: time="2025-11-08T00:03:14.036900377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-hd7px,Uid:0fbaf294-4258-4835-9934-dab81661d270,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:03:14.054756 containerd[1483]: time="2025-11-08T00:03:14.054720428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z47bf,Uid:e91b386a-27b1-445a-9ac6-f764450d0c94,Namespace:kube-system,Attempt:0,} returns sandbox id \"98143267f658e92bf7b0e1bbaa366f57e79b3fbb43f226409d650361ad363160\"" Nov 8 00:03:14.060620 containerd[1483]: time="2025-11-08T00:03:14.060578117Z" level=info msg="CreateContainer within sandbox \"98143267f658e92bf7b0e1bbaa366f57e79b3fbb43f226409d650361ad363160\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:03:14.071930 containerd[1483]: time="2025-11-08T00:03:14.071452180Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:03:14.071930 containerd[1483]: time="2025-11-08T00:03:14.071753927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:03:14.071930 containerd[1483]: time="2025-11-08T00:03:14.071770649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:03:14.072574 containerd[1483]: time="2025-11-08T00:03:14.072431989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:03:14.082705 containerd[1483]: time="2025-11-08T00:03:14.082657033Z" level=info msg="CreateContainer within sandbox \"98143267f658e92bf7b0e1bbaa366f57e79b3fbb43f226409d650361ad363160\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eed55bf4a6d9d07685dd0f320d98e0084300a296ee1537583494181e5cf0917d\"" Nov 8 00:03:14.084651 containerd[1483]: time="2025-11-08T00:03:14.084452915Z" level=info msg="StartContainer for \"eed55bf4a6d9d07685dd0f320d98e0084300a296ee1537583494181e5cf0917d\"" Nov 8 00:03:14.098263 systemd[1]: Started cri-containerd-2190621e2ed62d98dc8388720c814a0a5fb3d25adafde5b246c54c0f1193bc9f.scope - libcontainer container 2190621e2ed62d98dc8388720c814a0a5fb3d25adafde5b246c54c0f1193bc9f. Nov 8 00:03:14.125508 systemd[1]: Started cri-containerd-eed55bf4a6d9d07685dd0f320d98e0084300a296ee1537583494181e5cf0917d.scope - libcontainer container eed55bf4a6d9d07685dd0f320d98e0084300a296ee1537583494181e5cf0917d. 
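
The "Created slice kubepods-besteffort-pod..." entries above show the systemd cgroup driver at work (the NodeConfig earlier reports "CgroupDriver":"systemd"): each pod gets a kubepods-<qos>-pod<uid>.slice unit, with the pod UID's dashes escaped to underscores. A sketch of that mapping as observed in these lines; the sliceName helper is made up for illustration:

// Sketch: reproduce the systemd slice name from the "Created slice"
// entries above. The UID's dashes become underscores and the QoS
// class ("besteffort" here) is folded into the unit name.
package main

import (
	"fmt"
	"strings"
)

func sliceName(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UID taken from the kube-proxy-z47bf entries in the log.
	fmt.Println(sliceName("besteffort", "e91b386a-27b1-445a-9ac6-f764450d0c94"))
	// -> kubepods-besteffort-pode91b386a_27b1_445a_9ac6_f764450d0c94.slice
}

The output matches the slice systemd created for kube-proxy-z47bf at 00:03:13.646425 above.
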
Nov 8 00:03:14.160139 containerd[1483]: time="2025-11-08T00:03:14.159490178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-hd7px,Uid:0fbaf294-4258-4835-9934-dab81661d270,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2190621e2ed62d98dc8388720c814a0a5fb3d25adafde5b246c54c0f1193bc9f\""
Nov 8 00:03:14.162601 containerd[1483]: time="2025-11-08T00:03:14.162564896Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 8 00:03:14.178053 containerd[1483]: time="2025-11-08T00:03:14.177994610Z" level=info msg="StartContainer for \"eed55bf4a6d9d07685dd0f320d98e0084300a296ee1537583494181e5cf0917d\" returns successfully"
Nov 8 00:03:14.448082 kubelet[2577]: I1108 00:03:14.446926 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z47bf" podStartSLOduration=1.446904836 podStartE2EDuration="1.446904836s" podCreationTimestamp="2025-11-08 00:03:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:03:14.44673002 +0000 UTC m=+9.232307953" watchObservedRunningTime="2025-11-08 00:03:14.446904836 +0000 UTC m=+9.232482769"
Nov 8 00:03:16.924166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2990426310.mount: Deactivated successfully.
Nov 8 00:03:20.387712 containerd[1483]: time="2025-11-08T00:03:20.386402677Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:03:20.387712 containerd[1483]: time="2025-11-08T00:03:20.387654060Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004"
Nov 8 00:03:20.388839 containerd[1483]: time="2025-11-08T00:03:20.388788953Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:03:20.396048 containerd[1483]: time="2025-11-08T00:03:20.395263484Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:03:20.397841 containerd[1483]: time="2025-11-08T00:03:20.397785850Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 6.235008975s"
Nov 8 00:03:20.398058 containerd[1483]: time="2025-11-08T00:03:20.398037951Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\""
Nov 8 00:03:20.404094 containerd[1483]: time="2025-11-08T00:03:20.404047523Z" level=info msg="CreateContainer within sandbox \"2190621e2ed62d98dc8388720c814a0a5fb3d25adafde5b246c54c0f1193bc9f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 8 00:03:20.424029 containerd[1483]: time="2025-11-08T00:03:20.423972716Z" level=info msg="CreateContainer within sandbox \"2190621e2ed62d98dc8388720c814a0a5fb3d25adafde5b246c54c0f1193bc9f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8864c35f616e823ef08f88635e33e352a20c5bea7f99223416c4f803d438144e\""
Nov 8 00:03:20.425417 containerd[1483]: time="2025-11-08T00:03:20.425244940Z" level=info msg="StartContainer for \"8864c35f616e823ef08f88635e33e352a20c5bea7f99223416c4f803d438144e\""
Nov 8 00:03:20.466183 systemd[1]: Started cri-containerd-8864c35f616e823ef08f88635e33e352a20c5bea7f99223416c4f803d438144e.scope - libcontainer container 8864c35f616e823ef08f88635e33e352a20c5bea7f99223416c4f803d438144e.
Nov 8 00:03:20.494383 containerd[1483]: time="2025-11-08T00:03:20.494333841Z" level=info msg="StartContainer for \"8864c35f616e823ef08f88635e33e352a20c5bea7f99223416c4f803d438144e\" returns successfully"
Nov 8 00:03:27.043970 sudo[1725]: pam_unix(sudo:session): session closed for user root
Nov 8 00:03:27.196235 sshd[1707]: pam_unix(sshd:session): session closed for user core
Nov 8 00:03:27.205068 systemd-logind[1454]: Session 7 logged out. Waiting for processes to exit.
Nov 8 00:03:27.206312 systemd[1]: sshd@6-46.224.42.7:22-139.178.68.195:49996.service: Deactivated successfully.
Nov 8 00:03:27.211808 systemd[1]: session-7.scope: Deactivated successfully.
Nov 8 00:03:27.215173 systemd[1]: session-7.scope: Consumed 7.175s CPU time, 153.7M memory peak, 0B memory swap peak.
Nov 8 00:03:27.219041 systemd-logind[1454]: Removed session 7.
Nov 8 00:03:36.708726 kubelet[2577]: I1108 00:03:36.708658 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-hd7px" podStartSLOduration=17.471787777 podStartE2EDuration="23.708571904s" podCreationTimestamp="2025-11-08 00:03:13 +0000 UTC" firstStartedPulling="2025-11-08 00:03:14.162088972 +0000 UTC m=+8.947666865" lastFinishedPulling="2025-11-08 00:03:20.398873099 +0000 UTC m=+15.184450992" observedRunningTime="2025-11-08 00:03:21.480429902 +0000 UTC m=+16.266007795" watchObservedRunningTime="2025-11-08 00:03:36.708571904 +0000 UTC m=+31.494149797"
Nov 8 00:03:36.722289 systemd[1]: Created slice kubepods-besteffort-pod21dc7eac_86fd_43e6_b566_34327713fccf.slice - libcontainer container kubepods-besteffort-pod21dc7eac_86fd_43e6_b566_34327713fccf.slice.
Nov 8 00:03:36.769650 kubelet[2577]: I1108 00:03:36.769590 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21dc7eac-86fd-43e6-b566-34327713fccf-tigera-ca-bundle\") pod \"calico-typha-7b745df75f-mhvdd\" (UID: \"21dc7eac-86fd-43e6-b566-34327713fccf\") " pod="calico-system/calico-typha-7b745df75f-mhvdd" Nov 8 00:03:36.769650 kubelet[2577]: I1108 00:03:36.769650 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25ntd\" (UniqueName: \"kubernetes.io/projected/21dc7eac-86fd-43e6-b566-34327713fccf-kube-api-access-25ntd\") pod \"calico-typha-7b745df75f-mhvdd\" (UID: \"21dc7eac-86fd-43e6-b566-34327713fccf\") " pod="calico-system/calico-typha-7b745df75f-mhvdd" Nov 8 00:03:36.769650 kubelet[2577]: I1108 00:03:36.769670 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/21dc7eac-86fd-43e6-b566-34327713fccf-typha-certs\") pod \"calico-typha-7b745df75f-mhvdd\" (UID: \"21dc7eac-86fd-43e6-b566-34327713fccf\") " pod="calico-system/calico-typha-7b745df75f-mhvdd" Nov 8 00:03:36.944512 systemd[1]: Created slice kubepods-besteffort-pod265e73bd_2a05_4797_8346_4696ed1388ff.slice - libcontainer container kubepods-besteffort-pod265e73bd_2a05_4797_8346_4696ed1388ff.slice. Nov 8 00:03:36.971786 kubelet[2577]: I1108 00:03:36.971519 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/265e73bd-2a05-4797-8346-4696ed1388ff-cni-log-dir\") pod \"calico-node-s4f88\" (UID: \"265e73bd-2a05-4797-8346-4696ed1388ff\") " pod="calico-system/calico-node-s4f88" Nov 8 00:03:36.971786 kubelet[2577]: I1108 00:03:36.971568 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/265e73bd-2a05-4797-8346-4696ed1388ff-flexvol-driver-host\") pod \"calico-node-s4f88\" (UID: \"265e73bd-2a05-4797-8346-4696ed1388ff\") " pod="calico-system/calico-node-s4f88" Nov 8 00:03:36.971786 kubelet[2577]: I1108 00:03:36.971585 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/265e73bd-2a05-4797-8346-4696ed1388ff-node-certs\") pod \"calico-node-s4f88\" (UID: \"265e73bd-2a05-4797-8346-4696ed1388ff\") " pod="calico-system/calico-node-s4f88" Nov 8 00:03:36.971786 kubelet[2577]: I1108 00:03:36.971625 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/265e73bd-2a05-4797-8346-4696ed1388ff-tigera-ca-bundle\") pod \"calico-node-s4f88\" (UID: \"265e73bd-2a05-4797-8346-4696ed1388ff\") " pod="calico-system/calico-node-s4f88" Nov 8 00:03:36.971786 kubelet[2577]: I1108 00:03:36.971697 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/265e73bd-2a05-4797-8346-4696ed1388ff-cni-bin-dir\") pod \"calico-node-s4f88\" (UID: \"265e73bd-2a05-4797-8346-4696ed1388ff\") " pod="calico-system/calico-node-s4f88" Nov 8 00:03:36.972051 kubelet[2577]: I1108 00:03:36.971716 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/265e73bd-2a05-4797-8346-4696ed1388ff-policysync\") pod \"calico-node-s4f88\" (UID: \"265e73bd-2a05-4797-8346-4696ed1388ff\") " pod="calico-system/calico-node-s4f88" Nov 8 00:03:36.972051 kubelet[2577]: I1108 00:03:36.971733 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clbtw\" (UniqueName: \"kubernetes.io/projected/265e73bd-2a05-4797-8346-4696ed1388ff-kube-api-access-clbtw\") pod \"calico-node-s4f88\" (UID: \"265e73bd-2a05-4797-8346-4696ed1388ff\") " pod="calico-system/calico-node-s4f88" Nov 8 00:03:36.972051 kubelet[2577]: I1108 00:03:36.971752 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/265e73bd-2a05-4797-8346-4696ed1388ff-lib-modules\") pod \"calico-node-s4f88\" (UID: \"265e73bd-2a05-4797-8346-4696ed1388ff\") " pod="calico-system/calico-node-s4f88" Nov 8 00:03:36.972051 kubelet[2577]: I1108 00:03:36.971794 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/265e73bd-2a05-4797-8346-4696ed1388ff-var-run-calico\") pod \"calico-node-s4f88\" (UID: \"265e73bd-2a05-4797-8346-4696ed1388ff\") " pod="calico-system/calico-node-s4f88" Nov 8 00:03:36.972051 kubelet[2577]: I1108 00:03:36.971860 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/265e73bd-2a05-4797-8346-4696ed1388ff-cni-net-dir\") pod \"calico-node-s4f88\" (UID: \"265e73bd-2a05-4797-8346-4696ed1388ff\") " pod="calico-system/calico-node-s4f88" Nov 8 00:03:36.972173 kubelet[2577]: I1108 00:03:36.971880 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/265e73bd-2a05-4797-8346-4696ed1388ff-var-lib-calico\") pod \"calico-node-s4f88\" (UID: \"265e73bd-2a05-4797-8346-4696ed1388ff\") " pod="calico-system/calico-node-s4f88" Nov 8 00:03:36.972173 kubelet[2577]: I1108 00:03:36.971895 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/265e73bd-2a05-4797-8346-4696ed1388ff-xtables-lock\") pod \"calico-node-s4f88\" (UID: \"265e73bd-2a05-4797-8346-4696ed1388ff\") " pod="calico-system/calico-node-s4f88" Nov 8 00:03:37.029113 containerd[1483]: time="2025-11-08T00:03:37.028968511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b745df75f-mhvdd,Uid:21dc7eac-86fd-43e6-b566-34327713fccf,Namespace:calico-system,Attempt:0,}" Nov 8 00:03:37.058884 containerd[1483]: time="2025-11-08T00:03:37.058424823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:03:37.058884 containerd[1483]: time="2025-11-08T00:03:37.058484220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:03:37.058884 containerd[1483]: time="2025-11-08T00:03:37.058500099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:03:37.058884 containerd[1483]: time="2025-11-08T00:03:37.058731008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:03:37.078677 kubelet[2577]: E1108 00:03:37.078478 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.078677 kubelet[2577]: W1108 00:03:37.078607 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.078677 kubelet[2577]: E1108 00:03:37.078633 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:03:37.079739 kubelet[2577]: E1108 00:03:37.079117 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.079739 kubelet[2577]: W1108 00:03:37.079133 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.079739 kubelet[2577]: E1108 00:03:37.079147 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:03:37.079739 kubelet[2577]: E1108 00:03:37.079473 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.079739 kubelet[2577]: W1108 00:03:37.079482 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.079739 kubelet[2577]: E1108 00:03:37.079508 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:03:37.079739 kubelet[2577]: E1108 00:03:37.079727 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.079953 kubelet[2577]: W1108 00:03:37.079750 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.079953 kubelet[2577]: E1108 00:03:37.079761 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:03:37.086094 kubelet[2577]: E1108 00:03:37.084129 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.086094 kubelet[2577]: W1108 00:03:37.085460 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.086094 kubelet[2577]: E1108 00:03:37.085508 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:03:37.087186 kubelet[2577]: E1108 00:03:37.086963 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.087186 kubelet[2577]: W1108 00:03:37.086986 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.087186 kubelet[2577]: E1108 00:03:37.087009 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:03:37.095165 systemd[1]: Started cri-containerd-6681fb11e616ad7c84a739c64ed6c09043be18c9255e2e64a8a2daccb881ef52.scope - libcontainer container 6681fb11e616ad7c84a739c64ed6c09043be18c9255e2e64a8a2daccb881ef52. Nov 8 00:03:37.104118 kubelet[2577]: E1108 00:03:37.103919 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.104118 kubelet[2577]: W1108 00:03:37.104115 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.104118 kubelet[2577]: E1108 00:03:37.104140 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:03:37.138707 kubelet[2577]: E1108 00:03:37.138658 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f6hbs" podUID="6a33abd5-ae6f-4042-bbab-6affce6535d7" Nov 8 00:03:37.152083 kubelet[2577]: E1108 00:03:37.151663 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.152083 kubelet[2577]: W1108 00:03:37.151695 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.152083 kubelet[2577]: E1108 00:03:37.151720 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:03:37.153206 kubelet[2577]: E1108 00:03:37.152801 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.153206 kubelet[2577]: W1108 00:03:37.152903 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.153206 kubelet[2577]: E1108 00:03:37.152984 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:03:37.154249 kubelet[2577]: E1108 00:03:37.153501 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.154249 kubelet[2577]: W1108 00:03:37.153533 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.154249 kubelet[2577]: E1108 00:03:37.153548 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:03:37.155125 kubelet[2577]: E1108 00:03:37.154851 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.155125 kubelet[2577]: W1108 00:03:37.154871 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.155125 kubelet[2577]: E1108 00:03:37.154887 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:03:37.155513 kubelet[2577]: E1108 00:03:37.155310 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.155513 kubelet[2577]: W1108 00:03:37.155324 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.155513 kubelet[2577]: E1108 00:03:37.155350 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:03:37.155874 kubelet[2577]: E1108 00:03:37.155619 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.155874 kubelet[2577]: W1108 00:03:37.155630 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.155874 kubelet[2577]: E1108 00:03:37.155741 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:03:37.156705 kubelet[2577]: E1108 00:03:37.156661 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.158746 kubelet[2577]: W1108 00:03:37.157049 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.158746 kubelet[2577]: E1108 00:03:37.157079 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:03:37.158746 kubelet[2577]: E1108 00:03:37.158616 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.158746 kubelet[2577]: W1108 00:03:37.158632 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.158746 kubelet[2577]: E1108 00:03:37.158659 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:03:37.159341 kubelet[2577]: E1108 00:03:37.159309 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.159592 kubelet[2577]: W1108 00:03:37.159573 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.159689 kubelet[2577]: E1108 00:03:37.159677 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:03:37.160407 kubelet[2577]: E1108 00:03:37.160387 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.160524 kubelet[2577]: W1108 00:03:37.160509 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.160794 kubelet[2577]: E1108 00:03:37.160777 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:03:37.164174 kubelet[2577]: E1108 00:03:37.164132 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.164174 kubelet[2577]: W1108 00:03:37.164165 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.164335 kubelet[2577]: E1108 00:03:37.164191 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:03:37.164853 kubelet[2577]: E1108 00:03:37.164504 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.164853 kubelet[2577]: W1108 00:03:37.164522 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.164853 kubelet[2577]: E1108 00:03:37.164534 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:03:37.165238 kubelet[2577]: E1108 00:03:37.164993 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.166272 kubelet[2577]: W1108 00:03:37.166162 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.166272 kubelet[2577]: E1108 00:03:37.166207 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:03:37.166626 kubelet[2577]: E1108 00:03:37.166517 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.166626 kubelet[2577]: W1108 00:03:37.166534 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.166626 kubelet[2577]: E1108 00:03:37.166546 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:03:37.167000 kubelet[2577]: E1108 00:03:37.166891 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.167000 kubelet[2577]: W1108 00:03:37.166908 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.167000 kubelet[2577]: E1108 00:03:37.166920 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:03:37.167282 kubelet[2577]: E1108 00:03:37.167169 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.167282 kubelet[2577]: W1108 00:03:37.167186 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.167282 kubelet[2577]: E1108 00:03:37.167196 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:03:37.167450 kubelet[2577]: E1108 00:03:37.167380 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.167450 kubelet[2577]: W1108 00:03:37.167393 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.167450 kubelet[2577]: E1108 00:03:37.167402 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:03:37.167611 kubelet[2577]: E1108 00:03:37.167540 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.167611 kubelet[2577]: W1108 00:03:37.167551 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.167611 kubelet[2577]: E1108 00:03:37.167559 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:03:37.167762 kubelet[2577]: E1108 00:03:37.167685 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.167762 kubelet[2577]: W1108 00:03:37.167697 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.167762 kubelet[2577]: E1108 00:03:37.167705 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:03:37.168161 kubelet[2577]: E1108 00:03:37.168049 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.168161 kubelet[2577]: W1108 00:03:37.168067 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.168161 kubelet[2577]: E1108 00:03:37.168079 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:03:37.173546 containerd[1483]: time="2025-11-08T00:03:37.173505717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b745df75f-mhvdd,Uid:21dc7eac-86fd-43e6-b566-34327713fccf,Namespace:calico-system,Attempt:0,} returns sandbox id \"6681fb11e616ad7c84a739c64ed6c09043be18c9255e2e64a8a2daccb881ef52\"" Nov 8 00:03:37.178013 kubelet[2577]: E1108 00:03:37.177532 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.178013 kubelet[2577]: W1108 00:03:37.178006 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.178766 kubelet[2577]: E1108 00:03:37.178037 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:03:37.178766 kubelet[2577]: I1108 00:03:37.178066 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6a33abd5-ae6f-4042-bbab-6affce6535d7-socket-dir\") pod \"csi-node-driver-f6hbs\" (UID: \"6a33abd5-ae6f-4042-bbab-6affce6535d7\") " pod="calico-system/csi-node-driver-f6hbs" Nov 8 00:03:37.179354 containerd[1483]: time="2025-11-08T00:03:37.179218411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 8 00:03:37.180514 kubelet[2577]: E1108 00:03:37.180486 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.180514 kubelet[2577]: W1108 00:03:37.180511 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.180654 kubelet[2577]: E1108 00:03:37.180533 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:03:37.180654 kubelet[2577]: I1108 00:03:37.180565 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6a33abd5-ae6f-4042-bbab-6affce6535d7-kubelet-dir\") pod \"csi-node-driver-f6hbs\" (UID: \"6a33abd5-ae6f-4042-bbab-6affce6535d7\") " pod="calico-system/csi-node-driver-f6hbs" Nov 8 00:03:37.180905 kubelet[2577]: E1108 00:03:37.180799 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.180905 kubelet[2577]: W1108 00:03:37.180868 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.180905 kubelet[2577]: E1108 00:03:37.180892 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:03:37.181196 kubelet[2577]: I1108 00:03:37.181048 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v25gh\" (UniqueName: \"kubernetes.io/projected/6a33abd5-ae6f-4042-bbab-6affce6535d7-kube-api-access-v25gh\") pod \"csi-node-driver-f6hbs\" (UID: \"6a33abd5-ae6f-4042-bbab-6affce6535d7\") " pod="calico-system/csi-node-driver-f6hbs" Nov 8 00:03:37.181274 kubelet[2577]: E1108 00:03:37.181211 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:03:37.181274 kubelet[2577]: W1108 00:03:37.181220 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:03:37.181274 kubelet[2577]: E1108 00:03:37.181230 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 8 00:03:37.181746 kubelet[2577]: E1108 00:03:37.181676 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:03:37.181746 kubelet[2577]: W1108 00:03:37.181689 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:03:37.181746 kubelet[2577]: E1108 00:03:37.181700 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:03:37.185407 kubelet[2577]: I1108 00:03:37.185372 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6a33abd5-ae6f-4042-bbab-6affce6535d7-varrun\") pod \"csi-node-driver-f6hbs\" (UID: \"6a33abd5-ae6f-4042-bbab-6affce6535d7\") " pod="calico-system/csi-node-driver-f6hbs"
Nov 8 00:03:37.186854 kubelet[2577]: I1108 00:03:37.186817 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6a33abd5-ae6f-4042-bbab-6affce6535d7-registration-dir\") pod \"csi-node-driver-f6hbs\" (UID: \"6a33abd5-ae6f-4042-bbab-6affce6535d7\") " pod="calico-system/csi-node-driver-f6hbs"
Nov 8 00:03:37.250930 containerd[1483]: time="2025-11-08T00:03:37.250731449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s4f88,Uid:265e73bd-2a05-4797-8346-4696ed1388ff,Namespace:calico-system,Attempt:0,}"
Nov 8 00:03:37.290396 containerd[1483]: time="2025-11-08T00:03:37.289504688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:03:37.290396 containerd[1483]: time="2025-11-08T00:03:37.289671920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:03:37.290396 containerd[1483]: time="2025-11-08T00:03:37.289787155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:03:37.290396 containerd[1483]: time="2025-11-08T00:03:37.289948947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:03:37.319212 systemd[1]: Started cri-containerd-7b8d6a0a70a6a969a15466c0e2d5e96501d41a3d16fb54cb66ba4f8daf7fa2cb.scope - libcontainer container 7b8d6a0a70a6a969a15466c0e2d5e96501d41a3d16fb54cb66ba4f8daf7fa2cb.
Nov 8 00:03:37.353434 containerd[1483]: time="2025-11-08T00:03:37.353191290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s4f88,Uid:265e73bd-2a05-4797-8346-4696ed1388ff,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b8d6a0a70a6a969a15466c0e2d5e96501d41a3d16fb54cb66ba4f8daf7fa2cb\""
Nov 8 00:03:38.771398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1915355143.mount: Deactivated successfully.
Nov 8 00:03:39.365180 kubelet[2577]: E1108 00:03:39.363154 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f6hbs" podUID="6a33abd5-ae6f-4042-bbab-6affce6535d7"
Nov 8 00:03:39.568036 containerd[1483]: time="2025-11-08T00:03:39.567969900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:03:39.569873 containerd[1483]: time="2025-11-08T00:03:39.569831905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Nov 8 00:03:39.571554 containerd[1483]: time="2025-11-08T00:03:39.571494677Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:03:39.575410 containerd[1483]: time="2025-11-08T00:03:39.575121010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:03:39.576004 containerd[1483]: time="2025-11-08T00:03:39.575926778Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.396364942s"
Nov 8 00:03:39.576004 containerd[1483]: time="2025-11-08T00:03:39.575986415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Nov 8 00:03:39.577982 containerd[1483]: time="2025-11-08T00:03:39.577954296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 8 00:03:39.596201 containerd[1483]: time="2025-11-08T00:03:39.596148918Z" level=info msg="CreateContainer within sandbox \"6681fb11e616ad7c84a739c64ed6c09043be18c9255e2e64a8a2daccb881ef52\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 8 00:03:39.616294 containerd[1483]: time="2025-11-08T00:03:39.616148788Z" level=info msg="CreateContainer within sandbox \"6681fb11e616ad7c84a739c64ed6c09043be18c9255e2e64a8a2daccb881ef52\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"360687f87514fb49b06e0cc35d285a677be899b6e159435f8b8f3f636e5875bf\""
Nov 8 00:03:39.618010 containerd[1483]: time="2025-11-08T00:03:39.617177906Z" level=info msg="StartContainer for \"360687f87514fb49b06e0cc35d285a677be899b6e159435f8b8f3f636e5875bf\""
Nov 8 00:03:39.654173 systemd[1]: Started cri-containerd-360687f87514fb49b06e0cc35d285a677be899b6e159435f8b8f3f636e5875bf.scope - libcontainer container 360687f87514fb49b06e0cc35d285a677be899b6e159435f8b8f3f636e5875bf.
Nov 8 00:03:39.694344 containerd[1483]: time="2025-11-08T00:03:39.694299461Z" level=info msg="StartContainer for \"360687f87514fb49b06e0cc35d285a677be899b6e159435f8b8f3f636e5875bf\" returns successfully"
Nov 8 00:03:40.529883 kubelet[2577]: I1108 00:03:40.529802 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7b745df75f-mhvdd" podStartSLOduration=2.129283299 podStartE2EDuration="4.52978662s" podCreationTimestamp="2025-11-08 00:03:36 +0000 UTC" firstStartedPulling="2025-11-08 00:03:37.177129268 +0000 UTC m=+31.962707161" lastFinishedPulling="2025-11-08 00:03:39.577632509 +0000 UTC m=+34.363210482" observedRunningTime="2025-11-08 00:03:40.529410354 +0000 UTC m=+35.314988247" watchObservedRunningTime="2025-11-08 00:03:40.52978662 +0000 UTC m=+35.315364513"
Nov 8 00:03:40.588874 kubelet[2577]: E1108 00:03:40.588718 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:03:40.588874 kubelet[2577]: W1108 00:03:40.588749 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:03:40.588874 kubelet[2577]: E1108 00:03:40.588772 2577 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:03:41.128435 containerd[1483]: time="2025-11-08T00:03:41.128378205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:03:41.130014 containerd[1483]: time="2025-11-08T00:03:41.129959269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741"
Nov 8 00:03:41.131704 containerd[1483]: time="2025-11-08T00:03:41.131595292Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:03:41.136258 containerd[1483]: time="2025-11-08T00:03:41.136024617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:03:41.137003 containerd[1483]: time="2025-11-08T00:03:41.136927146Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.558611025s"
Nov 8 00:03:41.137003 containerd[1483]: time="2025-11-08T00:03:41.136997343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\""
Nov 8 00:03:41.142372 containerd[1483]: time="2025-11-08T00:03:41.142317797Z" level=info msg="CreateContainer within sandbox \"7b8d6a0a70a6a969a15466c0e2d5e96501d41a3d16fb54cb66ba4f8daf7fa2cb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 8 00:03:41.160197 containerd[1483]: time="2025-11-08T00:03:41.160138214Z" level=info msg="CreateContainer within sandbox \"7b8d6a0a70a6a969a15466c0e2d5e96501d41a3d16fb54cb66ba4f8daf7fa2cb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"06d3fddad1428667b81924e32f861d302564115ac4fe4c5a54c9d9dbee09a949\""
Nov 8 00:03:41.161702 containerd[1483]: time="2025-11-08T00:03:41.160859789Z" level=info msg="StartContainer for \"06d3fddad1428667b81924e32f861d302564115ac4fe4c5a54c9d9dbee09a949\""
Nov 8 00:03:41.198157 systemd[1]: Started cri-containerd-06d3fddad1428667b81924e32f861d302564115ac4fe4c5a54c9d9dbee09a949.scope - libcontainer container 06d3fddad1428667b81924e32f861d302564115ac4fe4c5a54c9d9dbee09a949.
Nov 8 00:03:41.231988 containerd[1483]: time="2025-11-08T00:03:41.231907346Z" level=info msg="StartContainer for \"06d3fddad1428667b81924e32f861d302564115ac4fe4c5a54c9d9dbee09a949\" returns successfully"
Nov 8 00:03:41.251450 systemd[1]: cri-containerd-06d3fddad1428667b81924e32f861d302564115ac4fe4c5a54c9d9dbee09a949.scope: Deactivated successfully.
Nov 8 00:03:41.282964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06d3fddad1428667b81924e32f861d302564115ac4fe4c5a54c9d9dbee09a949-rootfs.mount: Deactivated successfully.
Nov 8 00:03:41.363485 kubelet[2577]: E1108 00:03:41.363432 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f6hbs" podUID="6a33abd5-ae6f-4042-bbab-6affce6535d7"
Nov 8 00:03:41.386440 containerd[1483]: time="2025-11-08T00:03:41.385997719Z" level=info msg="shim disconnected" id=06d3fddad1428667b81924e32f861d302564115ac4fe4c5a54c9d9dbee09a949 namespace=k8s.io
Nov 8 00:03:41.386440 containerd[1483]: time="2025-11-08T00:03:41.386152834Z" level=warning msg="cleaning up after shim disconnected" id=06d3fddad1428667b81924e32f861d302564115ac4fe4c5a54c9d9dbee09a949 namespace=k8s.io
Nov 8 00:03:41.386440 containerd[1483]: time="2025-11-08T00:03:41.386178913Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:03:41.518974 containerd[1483]: time="2025-11-08T00:03:41.518736879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Nov 8 00:03:43.367117 kubelet[2577]: E1108 00:03:43.366780 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f6hbs" podUID="6a33abd5-ae6f-4042-bbab-6affce6535d7"
Nov 8 00:03:44.945992 containerd[1483]: time="2025-11-08T00:03:44.945540372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:03:44.947810 containerd[1483]: time="2025-11-08T00:03:44.947763991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816"
Nov 8 00:03:44.948096 containerd[1483]: time="2025-11-08T00:03:44.948029744Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:03:44.955085 containerd[1483]: time="2025-11-08T00:03:44.955028234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:03:44.956192 containerd[1483]: time="2025-11-08T00:03:44.956059365Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.437279128s"
Nov 8 00:03:44.956192 containerd[1483]: time="2025-11-08T00:03:44.956098244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\""
Nov 8 00:03:44.961467 containerd[1483]: time="2025-11-08T00:03:44.961405620Z" level=info msg="CreateContainer within sandbox \"7b8d6a0a70a6a969a15466c0e2d5e96501d41a3d16fb54cb66ba4f8daf7fa2cb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 8 00:03:44.980537 containerd[1483]: time="2025-11-08T00:03:44.980473380Z" level=info msg="CreateContainer within sandbox \"7b8d6a0a70a6a969a15466c0e2d5e96501d41a3d16fb54cb66ba4f8daf7fa2cb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c62587770030eb6952bee916bda4922af85ec4cf397021bc986e73de7f93d0ec\""
Nov 8 00:03:44.981134 containerd[1483]: time="2025-11-08T00:03:44.981106403Z" level=info msg="StartContainer for \"c62587770030eb6952bee916bda4922af85ec4cf397021bc986e73de7f93d0ec\""
Nov 8 00:03:45.019306 systemd[1]: Started cri-containerd-c62587770030eb6952bee916bda4922af85ec4cf397021bc986e73de7f93d0ec.scope - libcontainer container c62587770030eb6952bee916bda4922af85ec4cf397021bc986e73de7f93d0ec.
Nov 8 00:03:45.053632 containerd[1483]: time="2025-11-08T00:03:45.053388840Z" level=info msg="StartContainer for \"c62587770030eb6952bee916bda4922af85ec4cf397021bc986e73de7f93d0ec\" returns successfully"
Nov 8 00:03:45.363052 kubelet[2577]: E1108 00:03:45.362375 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f6hbs" podUID="6a33abd5-ae6f-4042-bbab-6affce6535d7"
Nov 8 00:03:45.606726 containerd[1483]: time="2025-11-08T00:03:45.606658742Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 8 00:03:45.609657 systemd[1]: cri-containerd-c62587770030eb6952bee916bda4922af85ec4cf397021bc986e73de7f93d0ec.scope: Deactivated successfully.
Nov 8 00:03:45.631611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c62587770030eb6952bee916bda4922af85ec4cf397021bc986e73de7f93d0ec-rootfs.mount: Deactivated successfully.
Nov 8 00:03:45.686125 kubelet[2577]: I1108 00:03:45.686091 2577 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 8 00:03:45.725023 containerd[1483]: time="2025-11-08T00:03:45.724894767Z" level=info msg="shim disconnected" id=c62587770030eb6952bee916bda4922af85ec4cf397021bc986e73de7f93d0ec namespace=k8s.io
Nov 8 00:03:45.725023 containerd[1483]: time="2025-11-08T00:03:45.724986204Z" level=warning msg="cleaning up after shim disconnected" id=c62587770030eb6952bee916bda4922af85ec4cf397021bc986e73de7f93d0ec namespace=k8s.io
Nov 8 00:03:45.725023 containerd[1483]: time="2025-11-08T00:03:45.725009724Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:03:45.748094 systemd[1]: Created slice kubepods-burstable-pod6b199b53_44ba_445d_8690_b906dab10cbb.slice - libcontainer container kubepods-burstable-pod6b199b53_44ba_445d_8690_b906dab10cbb.slice.
Nov 8 00:03:45.755913 kubelet[2577]: I1108 00:03:45.754528 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jntp\" (UniqueName: \"kubernetes.io/projected/6b199b53-44ba-445d-8690-b906dab10cbb-kube-api-access-8jntp\") pod \"coredns-674b8bbfcf-q8wbq\" (UID: \"6b199b53-44ba-445d-8690-b906dab10cbb\") " pod="kube-system/coredns-674b8bbfcf-q8wbq"
Nov 8 00:03:45.755913 kubelet[2577]: I1108 00:03:45.754579 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b199b53-44ba-445d-8690-b906dab10cbb-config-volume\") pod \"coredns-674b8bbfcf-q8wbq\" (UID: \"6b199b53-44ba-445d-8690-b906dab10cbb\") " pod="kube-system/coredns-674b8bbfcf-q8wbq"
Nov 8 00:03:45.770051 systemd[1]: Created slice kubepods-besteffort-podeb179d1d_66bd_4e35_9424_06cc17e8420e.slice - libcontainer container kubepods-besteffort-podeb179d1d_66bd_4e35_9424_06cc17e8420e.slice.
Nov 8 00:03:45.793069 systemd[1]: Created slice kubepods-besteffort-podd7c8d02f_ab3d_4412_bfb2_5d9f4d613dd9.slice - libcontainer container kubepods-besteffort-podd7c8d02f_ab3d_4412_bfb2_5d9f4d613dd9.slice.
Nov 8 00:03:45.801358 systemd[1]: Created slice kubepods-besteffort-pode4fb1541_9aa6_48b8_aaf8_151e44fc4a0d.slice - libcontainer container kubepods-besteffort-pode4fb1541_9aa6_48b8_aaf8_151e44fc4a0d.slice.
Nov 8 00:03:45.812302 systemd[1]: Created slice kubepods-besteffort-pod36248f5d_e7be_4c9e_8bf1_2e53872f633b.slice - libcontainer container kubepods-besteffort-pod36248f5d_e7be_4c9e_8bf1_2e53872f633b.slice.
Nov 8 00:03:45.822405 systemd[1]: Created slice kubepods-burstable-pod879786e9_e895_409c_b334_437a5736f56f.slice - libcontainer container kubepods-burstable-pod879786e9_e895_409c_b334_437a5736f56f.slice.
Nov 8 00:03:45.833335 systemd[1]: Created slice kubepods-besteffort-pod8027ad8b_f646_4861_aed8_35b2e3d85698.slice - libcontainer container kubepods-besteffort-pod8027ad8b_f646_4861_aed8_35b2e3d85698.slice.
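Each "Created slice" entry pairs a pod with a systemd cgroup slice whose name is derived from the pod's QoS class and UID: dashes in the UID become underscores, so the coredns pod with UID 6b199b53-44ba-445d-8690-b906dab10cbb lands in kubepods-burstable-pod6b199b53_44ba_445d_8690_b906dab10cbb.slice. A minimal sketch of that naming rule, grounded in the entries above (the helper name is mine):

```go
package main

import (
	"fmt"
	"strings"
)

// sliceNameForPod reproduces the naming convention visible in the journal:
// dashes in the pod UID become underscores, and the pod's QoS class selects
// the parent slice. The function itself is an illustrative helper.
func sliceNameForPod(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Matches "Created slice kubepods-burstable-pod6b199b53_44ba_445d_8690_b906dab10cbb.slice".
	fmt.Println(sliceNameForPod("burstable", "6b199b53-44ba-445d-8690-b906dab10cbb"))
	// Matches the besteffort slice created for the whisker pod.
	fmt.Println(sliceNameForPod("besteffort", "eb179d1d-66bd-4e35-9424-06cc17e8420e"))
}
```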
Nov 8 00:03:45.856023 kubelet[2577]: I1108 00:03:45.855922 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9-tigera-ca-bundle\") pod \"calico-kube-controllers-6c87cb4cfb-m9pm4\" (UID: \"d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9\") " pod="calico-system/calico-kube-controllers-6c87cb4cfb-m9pm4"
Nov 8 00:03:45.856184 kubelet[2577]: I1108 00:03:45.856034 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgh2h\" (UniqueName: \"kubernetes.io/projected/d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9-kube-api-access-vgh2h\") pod \"calico-kube-controllers-6c87cb4cfb-m9pm4\" (UID: \"d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9\") " pod="calico-system/calico-kube-controllers-6c87cb4cfb-m9pm4"
Nov 8 00:03:45.856184 kubelet[2577]: I1108 00:03:45.856072 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8027ad8b-f646-4861-aed8-35b2e3d85698-config\") pod \"goldmane-666569f655-cxpqj\" (UID: \"8027ad8b-f646-4861-aed8-35b2e3d85698\") " pod="calico-system/goldmane-666569f655-cxpqj"
Nov 8 00:03:45.856184 kubelet[2577]: I1108 00:03:45.856107 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eb179d1d-66bd-4e35-9424-06cc17e8420e-whisker-backend-key-pair\") pod \"whisker-57bfc8bc85-4x5vn\" (UID: \"eb179d1d-66bd-4e35-9424-06cc17e8420e\") " pod="calico-system/whisker-57bfc8bc85-4x5vn"
Nov 8 00:03:45.856184 kubelet[2577]: I1108 00:03:45.856147 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/879786e9-e895-409c-b334-437a5736f56f-config-volume\") pod \"coredns-674b8bbfcf-tzfvr\" (UID: \"879786e9-e895-409c-b334-437a5736f56f\") " pod="kube-system/coredns-674b8bbfcf-tzfvr"
Nov 8 00:03:45.856184 kubelet[2577]: I1108 00:03:45.856177 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv626\" (UniqueName: \"kubernetes.io/projected/eb179d1d-66bd-4e35-9424-06cc17e8420e-kube-api-access-xv626\") pod \"whisker-57bfc8bc85-4x5vn\" (UID: \"eb179d1d-66bd-4e35-9424-06cc17e8420e\") " pod="calico-system/whisker-57bfc8bc85-4x5vn"
Nov 8 00:03:45.856370 kubelet[2577]: I1108 00:03:45.856211 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpkcg\" (UniqueName: \"kubernetes.io/projected/879786e9-e895-409c-b334-437a5736f56f-kube-api-access-bpkcg\") pod \"coredns-674b8bbfcf-tzfvr\" (UID: \"879786e9-e895-409c-b334-437a5736f56f\") " pod="kube-system/coredns-674b8bbfcf-tzfvr"
Nov 8 00:03:45.856370 kubelet[2577]: I1108 00:03:45.856244 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/36248f5d-e7be-4c9e-8bf1-2e53872f633b-calico-apiserver-certs\") pod \"calico-apiserver-5bbbbfdffc-b22vr\" (UID: \"36248f5d-e7be-4c9e-8bf1-2e53872f633b\") " pod="calico-apiserver/calico-apiserver-5bbbbfdffc-b22vr"
Nov 8 00:03:45.856370 kubelet[2577]: I1108 00:03:45.856314 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8027ad8b-f646-4861-aed8-35b2e3d85698-goldmane-key-pair\") pod \"goldmane-666569f655-cxpqj\" (UID: \"8027ad8b-f646-4861-aed8-35b2e3d85698\") " pod="calico-system/goldmane-666569f655-cxpqj"
Nov 8 00:03:45.856442 kubelet[2577]: I1108 00:03:45.856367 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnlj6\" (UniqueName: \"kubernetes.io/projected/8027ad8b-f646-4861-aed8-35b2e3d85698-kube-api-access-bnlj6\") pod \"goldmane-666569f655-cxpqj\" (UID: \"8027ad8b-f646-4861-aed8-35b2e3d85698\") " pod="calico-system/goldmane-666569f655-cxpqj"
Nov 8 00:03:45.856442 kubelet[2577]: I1108 00:03:45.856406 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d-calico-apiserver-certs\") pod \"calico-apiserver-5bbbbfdffc-6m8tj\" (UID: \"e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d\") " pod="calico-apiserver/calico-apiserver-5bbbbfdffc-6m8tj"
Nov 8 00:03:45.856511 kubelet[2577]: I1108 00:03:45.856436 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wbvt\" (UniqueName: \"kubernetes.io/projected/e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d-kube-api-access-8wbvt\") pod \"calico-apiserver-5bbbbfdffc-6m8tj\" (UID: \"e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d\") " pod="calico-apiserver/calico-apiserver-5bbbbfdffc-6m8tj"
Nov 8 00:03:45.856511 kubelet[2577]: I1108 00:03:45.856491 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8027ad8b-f646-4861-aed8-35b2e3d85698-goldmane-ca-bundle\") pod \"goldmane-666569f655-cxpqj\" (UID: \"8027ad8b-f646-4861-aed8-35b2e3d85698\") " pod="calico-system/goldmane-666569f655-cxpqj"
Nov 8 00:03:45.856565 kubelet[2577]: I1108 00:03:45.856535 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzlmn\" (UniqueName: \"kubernetes.io/projected/36248f5d-e7be-4c9e-8bf1-2e53872f633b-kube-api-access-pzlmn\") pod \"calico-apiserver-5bbbbfdffc-b22vr\" (UID: \"36248f5d-e7be-4c9e-8bf1-2e53872f633b\") " pod="calico-apiserver/calico-apiserver-5bbbbfdffc-b22vr"
Nov 8 00:03:45.856588 kubelet[2577]: I1108 00:03:45.856575 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb179d1d-66bd-4e35-9424-06cc17e8420e-whisker-ca-bundle\") pod \"whisker-57bfc8bc85-4x5vn\" (UID: \"eb179d1d-66bd-4e35-9424-06cc17e8420e\") " pod="calico-system/whisker-57bfc8bc85-4x5vn"
Nov 8 00:03:46.058285 containerd[1483]: time="2025-11-08T00:03:46.058186585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q8wbq,Uid:6b199b53-44ba-445d-8690-b906dab10cbb,Namespace:kube-system,Attempt:0,}"
Nov 8 00:03:46.082983 containerd[1483]: time="2025-11-08T00:03:46.082876309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57bfc8bc85-4x5vn,Uid:eb179d1d-66bd-4e35-9424-06cc17e8420e,Namespace:calico-system,Attempt:0,}"
Nov 8 00:03:46.098989 containerd[1483]: time="2025-11-08T00:03:46.098941348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c87cb4cfb-m9pm4,Uid:d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9,Namespace:calico-system,Attempt:0,}"
Nov 8 00:03:46.108103 containerd[1483]: time="2025-11-08T00:03:46.108062503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bbbbfdffc-6m8tj,Uid:e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d,Namespace:calico-apiserver,Attempt:0,}"
Nov 8 00:03:46.118839 containerd[1483]: time="2025-11-08T00:03:46.118771542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bbbbfdffc-b22vr,Uid:36248f5d-e7be-4c9e-8bf1-2e53872f633b,Namespace:calico-apiserver,Attempt:0,}"
Nov 8 00:03:46.128711 containerd[1483]: time="2025-11-08T00:03:46.128558122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tzfvr,Uid:879786e9-e895-409c-b334-437a5736f56f,Namespace:kube-system,Attempt:0,}"
Nov 8 00:03:46.139726 containerd[1483]: time="2025-11-08T00:03:46.139441077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cxpqj,Uid:8027ad8b-f646-4861-aed8-35b2e3d85698,Namespace:calico-system,Attempt:0,}"
Nov 8 00:03:46.245622 containerd[1483]: time="2025-11-08T00:03:46.245563530Z" level=error msg="Failed to destroy network for sandbox \"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.247067 containerd[1483]: time="2025-11-08T00:03:46.246719704Z" level=error msg="encountered an error cleaning up failed sandbox \"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.247067 containerd[1483]: time="2025-11-08T00:03:46.246815862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q8wbq,Uid:6b199b53-44ba-445d-8690-b906dab10cbb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.247239 kubelet[2577]: E1108 00:03:46.247075 2577 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.247239 kubelet[2577]: E1108 00:03:46.247143 2577 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-q8wbq"
Nov 8 00:03:46.247239 kubelet[2577]: E1108 00:03:46.247163 2577 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-q8wbq"
Nov 8 00:03:46.247440 kubelet[2577]: E1108 00:03:46.247223 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-q8wbq_kube-system(6b199b53-44ba-445d-8690-b906dab10cbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-q8wbq_kube-system(6b199b53-44ba-445d-8690-b906dab10cbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-q8wbq" podUID="6b199b53-44ba-445d-8690-b906dab10cbb"
Nov 8 00:03:46.264591 containerd[1483]: time="2025-11-08T00:03:46.264358547Z" level=error msg="Failed to destroy network for sandbox \"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.266077 containerd[1483]: time="2025-11-08T00:03:46.265908472Z" level=error msg="encountered an error cleaning up failed sandbox \"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.266077 containerd[1483]: time="2025-11-08T00:03:46.266022829Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bbbbfdffc-6m8tj,Uid:e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.266704 kubelet[2577]: E1108 00:03:46.266255 2577 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.266704 kubelet[2577]: E1108 00:03:46.266333 2577 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-6m8tj"
Nov 8 00:03:46.266704 kubelet[2577]: E1108 00:03:46.266357 2577 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-6m8tj"
Nov 8 00:03:46.267211 kubelet[2577]: E1108 00:03:46.266423 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bbbbfdffc-6m8tj_calico-apiserver(e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bbbbfdffc-6m8tj_calico-apiserver(e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-6m8tj" podUID="e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d"
Nov 8 00:03:46.275079 containerd[1483]: time="2025-11-08T00:03:46.274922509Z" level=error msg="Failed to destroy network for sandbox \"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.277119 containerd[1483]: time="2025-11-08T00:03:46.277002463Z" level=error msg="encountered an error cleaning up failed sandbox \"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.277324 containerd[1483]: time="2025-11-08T00:03:46.277093740Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57bfc8bc85-4x5vn,Uid:eb179d1d-66bd-4e35-9424-06cc17e8420e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.278023 kubelet[2577]: E1108 00:03:46.277614 2577 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.278023 kubelet[2577]: E1108 00:03:46.277671 2577 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57bfc8bc85-4x5vn"
Nov 8 00:03:46.278023 kubelet[2577]: E1108 00:03:46.277692 2577 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57bfc8bc85-4x5vn"
Nov 8 00:03:46.278192 kubelet[2577]: E1108 00:03:46.277750 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57bfc8bc85-4x5vn_calico-system(eb179d1d-66bd-4e35-9424-06cc17e8420e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57bfc8bc85-4x5vn_calico-system(eb179d1d-66bd-4e35-9424-06cc17e8420e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57bfc8bc85-4x5vn" podUID="eb179d1d-66bd-4e35-9424-06cc17e8420e"
Nov 8 00:03:46.289294 containerd[1483]: time="2025-11-08T00:03:46.289191748Z" level=error msg="Failed to destroy network for sandbox \"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.289884 containerd[1483]: time="2025-11-08T00:03:46.289722656Z" level=error msg="encountered an error cleaning up failed sandbox \"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.289884 containerd[1483]: time="2025-11-08T00:03:46.289783815Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c87cb4cfb-m9pm4,Uid:d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.290057 kubelet[2577]: E1108 00:03:46.290014 2577 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.290103 kubelet[2577]: E1108 00:03:46.290069 2577 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c87cb4cfb-m9pm4"
Nov 8 00:03:46.290103 kubelet[2577]: E1108 00:03:46.290089 2577 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c87cb4cfb-m9pm4"
Nov 8 00:03:46.290259 kubelet[2577]: E1108 00:03:46.290136 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c87cb4cfb-m9pm4_calico-system(d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c87cb4cfb-m9pm4_calico-system(d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c87cb4cfb-m9pm4" podUID="d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9"
Nov 8 00:03:46.310211 containerd[1483]: time="2025-11-08T00:03:46.310072439Z" level=error msg="Failed to destroy network for sandbox \"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.311062 containerd[1483]: time="2025-11-08T00:03:46.310408631Z" level=error msg="encountered an error cleaning up failed sandbox \"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.311062 containerd[1483]: time="2025-11-08T00:03:46.310475870Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tzfvr,Uid:879786e9-e895-409c-b334-437a5736f56f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.311567 kubelet[2577]: E1108 00:03:46.310680 2577 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.311567 kubelet[2577]: E1108 00:03:46.310734 2577 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tzfvr"
Nov 8 00:03:46.311567 kubelet[2577]: E1108 00:03:46.310753 2577 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tzfvr"
Nov 8 00:03:46.311683 kubelet[2577]: E1108 00:03:46.310810 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-tzfvr_kube-system(879786e9-e895-409c-b334-437a5736f56f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-tzfvr_kube-system(879786e9-e895-409c-b334-437a5736f56f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-tzfvr" podUID="879786e9-e895-409c-b334-437a5736f56f"
Nov 8 00:03:46.342951 containerd[1483]: time="2025-11-08T00:03:46.342619627Z" level=error msg="Failed to destroy network for sandbox \"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.343066 containerd[1483]: time="2025-11-08T00:03:46.343031977Z" level=error msg="encountered an error cleaning up failed sandbox \"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.343123 containerd[1483]: time="2025-11-08T00:03:46.343089136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bbbbfdffc-b22vr,Uid:36248f5d-e7be-4c9e-8bf1-2e53872f633b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.343946 kubelet[2577]: E1108 00:03:46.343333 2577 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.343946 kubelet[2577]: E1108 00:03:46.343398 2577 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-b22vr"
Nov 8 00:03:46.343946 kubelet[2577]: E1108 00:03:46.343420 2577 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-b22vr"
Nov 8 00:03:46.344105 kubelet[2577]: E1108 00:03:46.343496 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bbbbfdffc-b22vr_calico-apiserver(36248f5d-e7be-4c9e-8bf1-2e53872f633b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bbbbfdffc-b22vr_calico-apiserver(36248f5d-e7be-4c9e-8bf1-2e53872f633b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-b22vr" podUID="36248f5d-e7be-4c9e-8bf1-2e53872f633b"
Nov 8 00:03:46.345617 containerd[1483]: time="2025-11-08T00:03:46.345557560Z" level=error msg="Failed to destroy network for sandbox \"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.346164 containerd[1483]: time="2025-11-08T00:03:46.346113388Z" level=error msg="encountered an error cleaning up failed sandbox \"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.346215 containerd[1483]: time="2025-11-08T00:03:46.346171227Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cxpqj,Uid:8027ad8b-f646-4861-aed8-35b2e3d85698,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.346466 kubelet[2577]: E1108 00:03:46.346408 2577 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.346528 kubelet[2577]: E1108 00:03:46.346488 2577 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-cxpqj"
Nov 8 00:03:46.346528 kubelet[2577]: E1108 00:03:46.346515 2577 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-cxpqj"
Nov 8 00:03:46.346676 kubelet[2577]: E1108 00:03:46.346647 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-cxpqj_calico-system(8027ad8b-f646-4861-aed8-35b2e3d85698)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-cxpqj_calico-system(8027ad8b-f646-4861-aed8-35b2e3d85698)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-cxpqj" podUID="8027ad8b-f646-4861-aed8-35b2e3d85698"
Nov 8 00:03:46.545865 kubelet[2577]: I1108 00:03:46.545793 2577 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a"
Nov 8 00:03:46.549122 containerd[1483]: time="2025-11-08T00:03:46.548138884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Nov 8 00:03:46.549122 containerd[1483]: time="2025-11-08T00:03:46.548278080Z" level=info msg="StopPodSandbox for \"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a\""
Nov 8 00:03:46.550069 containerd[1483]: time="2025-11-08T00:03:46.549633490Z" level=info msg="Ensure that sandbox bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a in task-service has been cleanup successfully"
Nov 8 00:03:46.565309 kubelet[2577]: I1108 00:03:46.565209 2577 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed"
Nov 8 00:03:46.569830 containerd[1483]: time="2025-11-08T00:03:46.569676239Z" level=info msg="StopPodSandbox for \"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed\""
Nov 8 00:03:46.569983 containerd[1483]: time="2025-11-08T00:03:46.569879635Z" level=info msg="Ensure that sandbox 46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed in task-service has been cleanup successfully"
Nov 8 00:03:46.573163 kubelet[2577]: I1108 00:03:46.572525 2577 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8"
Nov 8 00:03:46.574202 containerd[1483]: time="2025-11-08T00:03:46.574167458Z" level=info msg="StopPodSandbox for \"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8\""
Nov 8 00:03:46.578732 containerd[1483]: time="2025-11-08T00:03:46.578509600Z" level=info msg="Ensure that sandbox a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8 in task-service has been cleanup successfully"
Nov 8 00:03:46.587056 kubelet[2577]: I1108 00:03:46.585610 2577 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3"
Nov 8 00:03:46.589327 containerd[1483]: time="2025-11-08T00:03:46.589292878Z" level=info msg="StopPodSandbox for \"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3\""
Nov 8 00:03:46.589698 containerd[1483]: time="2025-11-08T00:03:46.589672069Z" level=info msg="Ensure that sandbox 8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3 in task-service has been cleanup successfully"
Nov 8 00:03:46.591891 kubelet[2577]: I1108 00:03:46.591864 2577 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c"
Nov 8 00:03:46.595742 containerd[1483]: time="2025-11-08T00:03:46.595535977Z" level=info msg="StopPodSandbox for \"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c\""
Nov 8 00:03:46.597964 containerd[1483]: time="2025-11-08T00:03:46.597769407Z" level=info msg="Ensure that sandbox 9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c in task-service has been cleanup successfully"
Nov 8 00:03:46.603890 kubelet[2577]: I1108 00:03:46.603145 2577 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006"
Nov 8 00:03:46.604672 containerd[1483]: time="2025-11-08T00:03:46.604609293Z" level=info msg="StopPodSandbox for \"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006\""
Nov 8 00:03:46.605738 containerd[1483]: time="2025-11-08T00:03:46.605630910Z" level=info msg="Ensure that sandbox 599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006 in task-service has been cleanup successfully"
Nov 8 00:03:46.615992 kubelet[2577]: I1108 00:03:46.615698 2577 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469"
Nov 8 00:03:46.618228 containerd[1483]: time="2025-11-08T00:03:46.618190508Z" level=info msg="StopPodSandbox for \"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469\""
Nov 8 00:03:46.619949 containerd[1483]: time="2025-11-08T00:03:46.619820551Z" level=info msg="Ensure that sandbox c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469 in task-service has been cleanup successfully"
Nov 8 00:03:46.654493 containerd[1483]: time="2025-11-08T00:03:46.654268496Z" level=error msg="StopPodSandbox for \"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a\" failed" error="failed to destroy network for sandbox \"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.654917 kubelet[2577]: E1108 00:03:46.654880 2577 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a"
Nov 8 00:03:46.655355 kubelet[2577]: E1108 00:03:46.655147 2577 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a"}
Nov 8 00:03:46.655355 kubelet[2577]: E1108 00:03:46.655328 2577 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36248f5d-e7be-4c9e-8bf1-2e53872f633b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Nov 8 00:03:46.655576 kubelet[2577]: E1108 00:03:46.655528 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36248f5d-e7be-4c9e-8bf1-2e53872f633b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-b22vr" podUID="36248f5d-e7be-4c9e-8bf1-2e53872f633b"
Nov 8 00:03:46.697983 containerd[1483]: time="2025-11-08T00:03:46.697917835Z" level=error msg="StopPodSandbox for \"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed\" failed" error="failed to destroy network for sandbox \"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.699043 kubelet[2577]: E1108 00:03:46.698513 2577 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed"
Nov 8 00:03:46.699043 kubelet[2577]: E1108 00:03:46.698567 2577 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed"}
Nov 8 00:03:46.699043 kubelet[2577]: E1108 00:03:46.698896 2577 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006"
Nov 8 00:03:46.699043 kubelet[2577]: E1108 00:03:46.698925 2577 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006"}
Nov 8 00:03:46.699043 kubelet[2577]: E1108 00:03:46.698985 2577 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eb179d1d-66bd-4e35-9424-06cc17e8420e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Nov 8 00:03:46.699324 containerd[1483]: time="2025-11-08T00:03:46.698706217Z" level=error msg="StopPodSandbox for \"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006\" failed" error="failed to destroy network for sandbox \"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.699356 kubelet[2577]: E1108 00:03:46.699009 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eb179d1d-66bd-4e35-9424-06cc17e8420e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57bfc8bc85-4x5vn" podUID="eb179d1d-66bd-4e35-9424-06cc17e8420e"
Nov 8 00:03:46.699678 kubelet[2577]: E1108 00:03:46.699456 2577 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Nov 8 00:03:46.699678 kubelet[2577]: E1108 00:03:46.699599 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-6m8tj" podUID="e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d"
Nov 8 00:03:46.705273 containerd[1483]: time="2025-11-08T00:03:46.705203711Z" level=error msg="StopPodSandbox for \"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c\" failed" error="failed to destroy network for sandbox \"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.705728 kubelet[2577]: E1108 00:03:46.705469 2577 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c"
Nov 8 00:03:46.705728 kubelet[2577]: E1108 00:03:46.705522 2577 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c"}
Nov 8 00:03:46.705728 kubelet[2577]: E1108 00:03:46.705559 2577 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Nov 8 00:03:46.705728 kubelet[2577]: E1108 00:03:46.705582 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c87cb4cfb-m9pm4" podUID="d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9"
Nov 8 00:03:46.708479 containerd[1483]: time="2025-11-08T00:03:46.707872371Z" level=error msg="StopPodSandbox for \"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3\" failed" error="failed to destroy network for sandbox \"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.708604 kubelet[2577]: E1108 00:03:46.708235 2577 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3"
Nov 8 00:03:46.708604 kubelet[2577]: E1108 00:03:46.708308 2577 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3"}
Nov 8 00:03:46.708604 kubelet[2577]: E1108 00:03:46.708345 2577 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"879786e9-e895-409c-b334-437a5736f56f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Nov 8 00:03:46.708604 kubelet[2577]: E1108 00:03:46.708384 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"879786e9-e895-409c-b334-437a5736f56f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-tzfvr" podUID="879786e9-e895-409c-b334-437a5736f56f"
Nov 8 00:03:46.711809 containerd[1483]: time="2025-11-08T00:03:46.711256334Z" level=error msg="StopPodSandbox for \"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8\" failed" error="failed to destroy network for sandbox \"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.712097 kubelet[2577]: E1108 00:03:46.711552 2577 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8"
Nov 8 00:03:46.712097 kubelet[2577]: E1108 00:03:46.711601 2577 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8"}
Nov 8 00:03:46.712097 kubelet[2577]: E1108 00:03:46.711635 2577 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8027ad8b-f646-4861-aed8-35b2e3d85698\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Nov 8 00:03:46.712097 kubelet[2577]: E1108 00:03:46.711664 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8027ad8b-f646-4861-aed8-35b2e3d85698\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-cxpqj" podUID="8027ad8b-f646-4861-aed8-35b2e3d85698"
Nov 8 00:03:46.718554 containerd[1483]: time="2025-11-08T00:03:46.718398774Z" level=error msg="StopPodSandbox for \"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469\" failed" error="failed to destroy network for sandbox \"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:03:46.719526 kubelet[2577]: E1108 00:03:46.719175 2577 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469"
Nov 8 00:03:46.719526 kubelet[2577]: E1108 00:03:46.719249 2577 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469"}
Nov 8 00:03:46.719526 kubelet[2577]: E1108 00:03:46.719306 2577 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6b199b53-44ba-445d-8690-b906dab10cbb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Nov 8 00:03:46.719526 kubelet[2577]: E1108 00:03:46.719343 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6b199b53-44ba-445d-8690-b906dab10cbb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-q8wbq" podUID="6b199b53-44ba-445d-8690-b906dab10cbb"
Nov 8 00:03:47.370411 systemd[1]: Created slice kubepods-besteffort-pod6a33abd5_ae6f_4042_bbab_6affce6535d7.slice - libcontainer container kubepods-besteffort-pod6a33abd5_ae6f_4042_bbab_6affce6535d7.slice.
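Every failed (add) and failed (delete) above trips over the same precondition: the Calico CNI plugin reads the node's name from /var/lib/calico/nodename, a file the calico/node container writes once it starts, and at this point its image is still being pulled (see the PullImage "ghcr.io/flatcar/calico/node:v3.30.4" entry). Below is a stdlib-only Go sketch of a host-side readiness probe for the two files involved; the paths come straight from the log, while the probe itself is a hypothetical diagnostic, not part of Calico.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// A hypothetical host-side probe for the two preconditions this log keeps
// tripping over: the nodename file that calico/node writes on startup, and
// a CNI network config in /etc/cni/net.d.
func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	const cniConfDir = "/etc/cni/net.d"

	if data, err := os.ReadFile(nodenameFile); err != nil {
		// Mirrors: stat /var/lib/calico/nodename: no such file or directory.
		fmt.Printf("calico/node not ready: %v\n", err)
	} else {
		fmt.Printf("calico/node ready, nodename=%s\n", strings.TrimSpace(string(data)))
	}

	confs, _ := filepath.Glob(filepath.Join(cniConfDir, "*.conflist"))
	if len(confs) == 0 {
		fmt.Println("no CNI network config installed yet")
		return
	}
	fmt.Printf("CNI config present: %v\n", confs)
}
```

Until both checks pass, the kubelet keeps retrying each pod and logging the same KillPodSandbox/CreatePodSandbox errors, which is the pattern repeated above and continued for the csi-node-driver pod below.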
Nov 8 00:03:47.373370 containerd[1483]: time="2025-11-08T00:03:47.373324482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f6hbs,Uid:6a33abd5-ae6f-4042-bbab-6affce6535d7,Namespace:calico-system,Attempt:0,}" Nov 8 00:03:47.436286 containerd[1483]: time="2025-11-08T00:03:47.436210210Z" level=error msg="Failed to destroy network for sandbox \"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:03:47.438313 containerd[1483]: time="2025-11-08T00:03:47.438249048Z" level=error msg="encountered an error cleaning up failed sandbox \"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:03:47.438458 containerd[1483]: time="2025-11-08T00:03:47.438330967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f6hbs,Uid:6a33abd5-ae6f-4042-bbab-6affce6535d7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:03:47.438679 kubelet[2577]: E1108 00:03:47.438615 2577 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:03:47.439035 kubelet[2577]: E1108 00:03:47.438697 2577 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f6hbs" Nov 8 00:03:47.439035 kubelet[2577]: E1108 00:03:47.438721 2577 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f6hbs" Nov 8 00:03:47.439035 kubelet[2577]: E1108 00:03:47.438770 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-f6hbs_calico-system(6a33abd5-ae6f-4042-bbab-6affce6535d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-f6hbs_calico-system(6a33abd5-ae6f-4042-bbab-6affce6535d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f6hbs" podUID="6a33abd5-ae6f-4042-bbab-6affce6535d7" Nov 8 00:03:47.440549 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd-shm.mount: Deactivated successfully. Nov 8 00:03:47.620135 kubelet[2577]: I1108 00:03:47.620075 2577 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" Nov 8 00:03:47.621209 containerd[1483]: time="2025-11-08T00:03:47.621103469Z" level=info msg="StopPodSandbox for \"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd\"" Nov 8 00:03:47.623433 containerd[1483]: time="2025-11-08T00:03:47.621359904Z" level=info msg="Ensure that sandbox ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd in task-service has been cleanup successfully" Nov 8 00:03:47.652039 containerd[1483]: time="2025-11-08T00:03:47.651901166Z" level=error msg="StopPodSandbox for \"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd\" failed" error="failed to destroy network for sandbox \"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:03:47.652565 kubelet[2577]: E1108 00:03:47.652509 2577 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" Nov 8 00:03:47.652671 kubelet[2577]: E1108 00:03:47.652572 2577 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd"} Nov 8 00:03:47.652671 kubelet[2577]: E1108 00:03:47.652614 2577 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6a33abd5-ae6f-4042-bbab-6affce6535d7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:03:47.652671 kubelet[2577]: E1108 00:03:47.652639 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6a33abd5-ae6f-4042-bbab-6affce6535d7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f6hbs" podUID="6a33abd5-ae6f-4042-bbab-6affce6535d7" Nov 8 00:03:53.005080 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount558249051.mount: Deactivated successfully. Nov 8 00:03:53.041194 containerd[1483]: time="2025-11-08T00:03:53.040415714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:03:53.041827 containerd[1483]: time="2025-11-08T00:03:53.041793663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 8 00:03:53.043749 containerd[1483]: time="2025-11-08T00:03:53.043712847Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:03:53.046584 containerd[1483]: time="2025-11-08T00:03:53.046522305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:03:53.047598 containerd[1483]: time="2025-11-08T00:03:53.047556896Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.499376013s" Nov 8 00:03:53.047598 containerd[1483]: time="2025-11-08T00:03:53.047594496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 8 00:03:53.069973 containerd[1483]: time="2025-11-08T00:03:53.069881557Z" level=info msg="CreateContainer within sandbox \"7b8d6a0a70a6a969a15466c0e2d5e96501d41a3d16fb54cb66ba4f8daf7fa2cb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:03:53.090054 containerd[1483]: time="2025-11-08T00:03:53.089984915Z" level=info msg="CreateContainer within sandbox \"7b8d6a0a70a6a969a15466c0e2d5e96501d41a3d16fb54cb66ba4f8daf7fa2cb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8735be735467d8680ad0881b73f0a2cac488a91b128edc37521280af3bf43f24\"" Nov 8 00:03:53.092978 containerd[1483]: time="2025-11-08T00:03:53.091289944Z" level=info msg="StartContainer for \"8735be735467d8680ad0881b73f0a2cac488a91b128edc37521280af3bf43f24\"" Nov 8 00:03:53.129546 systemd[1]: Started cri-containerd-8735be735467d8680ad0881b73f0a2cac488a91b128edc37521280af3bf43f24.scope - libcontainer container 8735be735467d8680ad0881b73f0a2cac488a91b128edc37521280af3bf43f24. Nov 8 00:03:53.200411 containerd[1483]: time="2025-11-08T00:03:53.200353865Z" level=info msg="StartContainer for \"8735be735467d8680ad0881b73f0a2cac488a91b128edc37521280af3bf43f24\" returns successfully" Nov 8 00:03:53.337615 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:03:53.337804 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
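
Note: the pull above reports bytes read=150934562 over 6.499376013s, roughly 22 MiB/s; a small Go sketch of that arithmetic, with both values taken verbatim from the entries above (the reported image size, 150934424, is the registry-side size and differs slightly from the bytes actually read):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const bytesRead = 150934562.0                // "active requests=0, bytes read=150934562"
    	d, err := time.ParseDuration("6.499376013s") // "Pulled image ... in 6.499376013s"
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("effective pull throughput: %.1f MiB/s\n", bytesRead/d.Seconds()/(1<<20)) // ~22.1
    }

The WireGuard module load immediately after is consistent with calico-node probing for WireGuard support at startup, since Calico can use it for node-to-node encryption.
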
Nov 8 00:03:53.508546 containerd[1483]: time="2025-11-08T00:03:53.508488462Z" level=info msg="StopPodSandbox for \"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006\"" Nov 8 00:03:53.677441 kubelet[2577]: I1108 00:03:53.676155 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-s4f88" podStartSLOduration=1.982518287 podStartE2EDuration="17.676138631s" podCreationTimestamp="2025-11-08 00:03:36 +0000 UTC" firstStartedPulling="2025-11-08 00:03:37.355572379 +0000 UTC m=+32.141150272" lastFinishedPulling="2025-11-08 00:03:53.049192723 +0000 UTC m=+47.834770616" observedRunningTime="2025-11-08 00:03:53.675876833 +0000 UTC m=+48.461454686" watchObservedRunningTime="2025-11-08 00:03:53.676138631 +0000 UTC m=+48.461716524" Nov 8 00:03:53.729353 containerd[1483]: 2025-11-08 00:03:53.615 [INFO][3788] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" Nov 8 00:03:53.729353 containerd[1483]: 2025-11-08 00:03:53.616 [INFO][3788] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" iface="eth0" netns="/var/run/netns/cni-c80aa603-fd50-0846-4242-12f44f2a26ae" Nov 8 00:03:53.729353 containerd[1483]: 2025-11-08 00:03:53.616 [INFO][3788] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" iface="eth0" netns="/var/run/netns/cni-c80aa603-fd50-0846-4242-12f44f2a26ae" Nov 8 00:03:53.729353 containerd[1483]: 2025-11-08 00:03:53.616 [INFO][3788] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" iface="eth0" netns="/var/run/netns/cni-c80aa603-fd50-0846-4242-12f44f2a26ae" Nov 8 00:03:53.729353 containerd[1483]: 2025-11-08 00:03:53.616 [INFO][3788] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" Nov 8 00:03:53.729353 containerd[1483]: 2025-11-08 00:03:53.617 [INFO][3788] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" Nov 8 00:03:53.729353 containerd[1483]: 2025-11-08 00:03:53.703 [INFO][3796] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" HandleID="k8s-pod-network.599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" Workload="ci--4081--3--6--n--8957f209ae-k8s-whisker--57bfc8bc85--4x5vn-eth0" Nov 8 00:03:53.729353 containerd[1483]: 2025-11-08 00:03:53.703 [INFO][3796] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:03:53.729353 containerd[1483]: 2025-11-08 00:03:53.703 [INFO][3796] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:03:53.729353 containerd[1483]: 2025-11-08 00:03:53.718 [WARNING][3796] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" HandleID="k8s-pod-network.599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" Workload="ci--4081--3--6--n--8957f209ae-k8s-whisker--57bfc8bc85--4x5vn-eth0" Nov 8 00:03:53.729353 containerd[1483]: 2025-11-08 00:03:53.718 [INFO][3796] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" HandleID="k8s-pod-network.599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" Workload="ci--4081--3--6--n--8957f209ae-k8s-whisker--57bfc8bc85--4x5vn-eth0" Nov 8 00:03:53.729353 containerd[1483]: 2025-11-08 00:03:53.722 [INFO][3796] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:03:53.729353 containerd[1483]: 2025-11-08 00:03:53.727 [INFO][3788] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" Nov 8 00:03:53.731422 containerd[1483]: time="2025-11-08T00:03:53.731205708Z" level=info msg="TearDown network for sandbox \"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006\" successfully" Nov 8 00:03:53.731422 containerd[1483]: time="2025-11-08T00:03:53.731246507Z" level=info msg="StopPodSandbox for \"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006\" returns successfully" Nov 8 00:03:53.820811 kubelet[2577]: I1108 00:03:53.820683 2577 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv626\" (UniqueName: \"kubernetes.io/projected/eb179d1d-66bd-4e35-9424-06cc17e8420e-kube-api-access-xv626\") pod \"eb179d1d-66bd-4e35-9424-06cc17e8420e\" (UID: \"eb179d1d-66bd-4e35-9424-06cc17e8420e\") " Nov 8 00:03:53.822410 kubelet[2577]: I1108 00:03:53.820990 2577 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eb179d1d-66bd-4e35-9424-06cc17e8420e-whisker-backend-key-pair\") pod \"eb179d1d-66bd-4e35-9424-06cc17e8420e\" (UID: \"eb179d1d-66bd-4e35-9424-06cc17e8420e\") " Nov 8 00:03:53.822410 kubelet[2577]: I1108 00:03:53.821026 2577 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb179d1d-66bd-4e35-9424-06cc17e8420e-whisker-ca-bundle\") pod \"eb179d1d-66bd-4e35-9424-06cc17e8420e\" (UID: \"eb179d1d-66bd-4e35-9424-06cc17e8420e\") " Nov 8 00:03:53.822410 kubelet[2577]: I1108 00:03:53.821449 2577 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb179d1d-66bd-4e35-9424-06cc17e8420e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "eb179d1d-66bd-4e35-9424-06cc17e8420e" (UID: "eb179d1d-66bd-4e35-9424-06cc17e8420e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:03:53.828012 kubelet[2577]: I1108 00:03:53.827894 2577 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb179d1d-66bd-4e35-9424-06cc17e8420e-kube-api-access-xv626" (OuterVolumeSpecName: "kube-api-access-xv626") pod "eb179d1d-66bd-4e35-9424-06cc17e8420e" (UID: "eb179d1d-66bd-4e35-9424-06cc17e8420e"). InnerVolumeSpecName "kube-api-access-xv626". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:03:53.829433 kubelet[2577]: I1108 00:03:53.829385 2577 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb179d1d-66bd-4e35-9424-06cc17e8420e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "eb179d1d-66bd-4e35-9424-06cc17e8420e" (UID: "eb179d1d-66bd-4e35-9424-06cc17e8420e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:03:53.922369 kubelet[2577]: I1108 00:03:53.922252 2577 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xv626\" (UniqueName: \"kubernetes.io/projected/eb179d1d-66bd-4e35-9424-06cc17e8420e-kube-api-access-xv626\") on node \"ci-4081-3-6-n-8957f209ae\" DevicePath \"\"" Nov 8 00:03:53.922369 kubelet[2577]: I1108 00:03:53.922324 2577 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eb179d1d-66bd-4e35-9424-06cc17e8420e-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-8957f209ae\" DevicePath \"\"" Nov 8 00:03:53.922369 kubelet[2577]: I1108 00:03:53.922338 2577 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb179d1d-66bd-4e35-9424-06cc17e8420e-whisker-ca-bundle\") on node \"ci-4081-3-6-n-8957f209ae\" DevicePath \"\"" Nov 8 00:03:54.009237 systemd[1]: run-netns-cni\x2dc80aa603\x2dfd50\x2d0846\x2d4242\x2d12f44f2a26ae.mount: Deactivated successfully. Nov 8 00:03:54.009798 systemd[1]: var-lib-kubelet-pods-eb179d1d\x2d66bd\x2d4e35\x2d9424\x2d06cc17e8420e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxv626.mount: Deactivated successfully. Nov 8 00:03:54.009866 systemd[1]: var-lib-kubelet-pods-eb179d1d\x2d66bd\x2d4e35\x2d9424\x2d06cc17e8420e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 8 00:03:54.661816 systemd[1]: Removed slice kubepods-besteffort-podeb179d1d_66bd_4e35_9424_06cc17e8420e.slice - libcontainer container kubepods-besteffort-podeb179d1d_66bd_4e35_9424_06cc17e8420e.slice. Nov 8 00:03:54.750085 systemd[1]: Created slice kubepods-besteffort-pod1f74b08a_68be_4d64_8b67_dfbe823cdd4c.slice - libcontainer container kubepods-besteffort-pod1f74b08a_68be_4d64_8b67_dfbe823cdd4c.slice. 
Nov 8 00:03:54.828729 kubelet[2577]: I1108 00:03:54.828679 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6lfz\" (UniqueName: \"kubernetes.io/projected/1f74b08a-68be-4d64-8b67-dfbe823cdd4c-kube-api-access-w6lfz\") pod \"whisker-7546c5f69c-s8fw9\" (UID: \"1f74b08a-68be-4d64-8b67-dfbe823cdd4c\") " pod="calico-system/whisker-7546c5f69c-s8fw9" Nov 8 00:03:54.828729 kubelet[2577]: I1108 00:03:54.828739 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1f74b08a-68be-4d64-8b67-dfbe823cdd4c-whisker-backend-key-pair\") pod \"whisker-7546c5f69c-s8fw9\" (UID: \"1f74b08a-68be-4d64-8b67-dfbe823cdd4c\") " pod="calico-system/whisker-7546c5f69c-s8fw9" Nov 8 00:03:54.829193 kubelet[2577]: I1108 00:03:54.828760 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f74b08a-68be-4d64-8b67-dfbe823cdd4c-whisker-ca-bundle\") pod \"whisker-7546c5f69c-s8fw9\" (UID: \"1f74b08a-68be-4d64-8b67-dfbe823cdd4c\") " pod="calico-system/whisker-7546c5f69c-s8fw9" Nov 8 00:03:55.056389 containerd[1483]: time="2025-11-08T00:03:55.055867082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7546c5f69c-s8fw9,Uid:1f74b08a-68be-4d64-8b67-dfbe823cdd4c,Namespace:calico-system,Attempt:0,}" Nov 8 00:03:55.282376 systemd-networkd[1378]: calif33ac1142f4: Link UP Nov 8 00:03:55.285883 systemd-networkd[1378]: calif33ac1142f4: Gained carrier Nov 8 00:03:55.321570 containerd[1483]: 2025-11-08 00:03:55.105 [INFO][3948] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:03:55.321570 containerd[1483]: 2025-11-08 00:03:55.130 [INFO][3948] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8957f209ae-k8s-whisker--7546c5f69c--s8fw9-eth0 whisker-7546c5f69c- calico-system 1f74b08a-68be-4d64-8b67-dfbe823cdd4c 907 0 2025-11-08 00:03:54 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7546c5f69c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-8957f209ae whisker-7546c5f69c-s8fw9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif33ac1142f4 [] [] }} ContainerID="7dd5adbfb598a01221c5ab7e2123584f3fe9b7c9def8902671b6ca277aea85f8" Namespace="calico-system" Pod="whisker-7546c5f69c-s8fw9" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-whisker--7546c5f69c--s8fw9-" Nov 8 00:03:55.321570 containerd[1483]: 2025-11-08 00:03:55.131 [INFO][3948] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7dd5adbfb598a01221c5ab7e2123584f3fe9b7c9def8902671b6ca277aea85f8" Namespace="calico-system" Pod="whisker-7546c5f69c-s8fw9" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-whisker--7546c5f69c--s8fw9-eth0" Nov 8 00:03:55.321570 containerd[1483]: 2025-11-08 00:03:55.190 [INFO][3965] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7dd5adbfb598a01221c5ab7e2123584f3fe9b7c9def8902671b6ca277aea85f8" HandleID="k8s-pod-network.7dd5adbfb598a01221c5ab7e2123584f3fe9b7c9def8902671b6ca277aea85f8" Workload="ci--4081--3--6--n--8957f209ae-k8s-whisker--7546c5f69c--s8fw9-eth0" Nov 8 00:03:55.321570 containerd[1483]: 2025-11-08 00:03:55.190 [INFO][3965] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="7dd5adbfb598a01221c5ab7e2123584f3fe9b7c9def8902671b6ca277aea85f8" HandleID="k8s-pod-network.7dd5adbfb598a01221c5ab7e2123584f3fe9b7c9def8902671b6ca277aea85f8" Workload="ci--4081--3--6--n--8957f209ae-k8s-whisker--7546c5f69c--s8fw9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c9a30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-8957f209ae", "pod":"whisker-7546c5f69c-s8fw9", "timestamp":"2025-11-08 00:03:55.190537917 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8957f209ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:03:55.321570 containerd[1483]: 2025-11-08 00:03:55.191 [INFO][3965] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:03:55.321570 containerd[1483]: 2025-11-08 00:03:55.191 [INFO][3965] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:03:55.321570 containerd[1483]: 2025-11-08 00:03:55.191 [INFO][3965] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8957f209ae' Nov 8 00:03:55.321570 containerd[1483]: 2025-11-08 00:03:55.209 [INFO][3965] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7dd5adbfb598a01221c5ab7e2123584f3fe9b7c9def8902671b6ca277aea85f8" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:55.321570 containerd[1483]: 2025-11-08 00:03:55.219 [INFO][3965] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:55.321570 containerd[1483]: 2025-11-08 00:03:55.225 [INFO][3965] ipam/ipam.go 511: Trying affinity for 192.168.34.0/26 host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:55.321570 containerd[1483]: 2025-11-08 00:03:55.228 [INFO][3965] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.0/26 host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:55.321570 containerd[1483]: 2025-11-08 00:03:55.233 [INFO][3965] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:55.321570 containerd[1483]: 2025-11-08 00:03:55.233 [INFO][3965] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.7dd5adbfb598a01221c5ab7e2123584f3fe9b7c9def8902671b6ca277aea85f8" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:55.321570 containerd[1483]: 2025-11-08 00:03:55.236 [INFO][3965] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7dd5adbfb598a01221c5ab7e2123584f3fe9b7c9def8902671b6ca277aea85f8 Nov 8 00:03:55.321570 containerd[1483]: 2025-11-08 00:03:55.250 [INFO][3965] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.7dd5adbfb598a01221c5ab7e2123584f3fe9b7c9def8902671b6ca277aea85f8" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:55.321570 containerd[1483]: 2025-11-08 00:03:55.257 [INFO][3965] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.1/26] block=192.168.34.0/26 handle="k8s-pod-network.7dd5adbfb598a01221c5ab7e2123584f3fe9b7c9def8902671b6ca277aea85f8" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:55.321570 containerd[1483]: 2025-11-08 00:03:55.257 [INFO][3965] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.1/26] handle="k8s-pod-network.7dd5adbfb598a01221c5ab7e2123584f3fe9b7c9def8902671b6ca277aea85f8" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:55.321570 containerd[1483]: 2025-11-08 00:03:55.257 [INFO][3965] 
ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:03:55.321570 containerd[1483]: 2025-11-08 00:03:55.258 [INFO][3965] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.1/26] IPv6=[] ContainerID="7dd5adbfb598a01221c5ab7e2123584f3fe9b7c9def8902671b6ca277aea85f8" HandleID="k8s-pod-network.7dd5adbfb598a01221c5ab7e2123584f3fe9b7c9def8902671b6ca277aea85f8" Workload="ci--4081--3--6--n--8957f209ae-k8s-whisker--7546c5f69c--s8fw9-eth0" Nov 8 00:03:55.323583 containerd[1483]: 2025-11-08 00:03:55.261 [INFO][3948] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7dd5adbfb598a01221c5ab7e2123584f3fe9b7c9def8902671b6ca277aea85f8" Namespace="calico-system" Pod="whisker-7546c5f69c-s8fw9" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-whisker--7546c5f69c--s8fw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-whisker--7546c5f69c--s8fw9-eth0", GenerateName:"whisker-7546c5f69c-", Namespace:"calico-system", SelfLink:"", UID:"1f74b08a-68be-4d64-8b67-dfbe823cdd4c", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7546c5f69c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"", Pod:"whisker-7546c5f69c-s8fw9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.34.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif33ac1142f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:03:55.323583 containerd[1483]: 2025-11-08 00:03:55.266 [INFO][3948] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.1/32] ContainerID="7dd5adbfb598a01221c5ab7e2123584f3fe9b7c9def8902671b6ca277aea85f8" Namespace="calico-system" Pod="whisker-7546c5f69c-s8fw9" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-whisker--7546c5f69c--s8fw9-eth0" Nov 8 00:03:55.323583 containerd[1483]: 2025-11-08 00:03:55.266 [INFO][3948] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif33ac1142f4 ContainerID="7dd5adbfb598a01221c5ab7e2123584f3fe9b7c9def8902671b6ca277aea85f8" Namespace="calico-system" Pod="whisker-7546c5f69c-s8fw9" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-whisker--7546c5f69c--s8fw9-eth0" Nov 8 00:03:55.323583 containerd[1483]: 2025-11-08 00:03:55.288 [INFO][3948] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7dd5adbfb598a01221c5ab7e2123584f3fe9b7c9def8902671b6ca277aea85f8" Namespace="calico-system" Pod="whisker-7546c5f69c-s8fw9" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-whisker--7546c5f69c--s8fw9-eth0" Nov 8 00:03:55.323583 containerd[1483]: 2025-11-08 00:03:55.291 [INFO][3948] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7dd5adbfb598a01221c5ab7e2123584f3fe9b7c9def8902671b6ca277aea85f8" Namespace="calico-system" Pod="whisker-7546c5f69c-s8fw9" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-whisker--7546c5f69c--s8fw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-whisker--7546c5f69c--s8fw9-eth0", GenerateName:"whisker-7546c5f69c-", Namespace:"calico-system", SelfLink:"", UID:"1f74b08a-68be-4d64-8b67-dfbe823cdd4c", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7546c5f69c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"7dd5adbfb598a01221c5ab7e2123584f3fe9b7c9def8902671b6ca277aea85f8", Pod:"whisker-7546c5f69c-s8fw9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.34.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif33ac1142f4", MAC:"02:d5:da:ab:1a:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:03:55.323583 containerd[1483]: 2025-11-08 00:03:55.317 [INFO][3948] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7dd5adbfb598a01221c5ab7e2123584f3fe9b7c9def8902671b6ca277aea85f8" Namespace="calico-system" Pod="whisker-7546c5f69c-s8fw9" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-whisker--7546c5f69c--s8fw9-eth0" Nov 8 00:03:55.353028 containerd[1483]: time="2025-11-08T00:03:55.352205072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:03:55.353351 containerd[1483]: time="2025-11-08T00:03:55.353179507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:03:55.353351 containerd[1483]: time="2025-11-08T00:03:55.353223227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:03:55.354494 containerd[1483]: time="2025-11-08T00:03:55.354338382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:03:55.370422 kubelet[2577]: I1108 00:03:55.370379 2577 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb179d1d-66bd-4e35-9424-06cc17e8420e" path="/var/lib/kubelet/pods/eb179d1d-66bd-4e35-9424-06cc17e8420e/volumes" Nov 8 00:03:55.421969 kernel: bpftool[4019]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:03:55.432531 systemd[1]: Started cri-containerd-7dd5adbfb598a01221c5ab7e2123584f3fe9b7c9def8902671b6ca277aea85f8.scope - libcontainer container 7dd5adbfb598a01221c5ab7e2123584f3fe9b7c9def8902671b6ca277aea85f8. 
Nov 8 00:03:55.527656 containerd[1483]: time="2025-11-08T00:03:55.527613004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7546c5f69c-s8fw9,Uid:1f74b08a-68be-4d64-8b67-dfbe823cdd4c,Namespace:calico-system,Attempt:0,} returns sandbox id \"7dd5adbfb598a01221c5ab7e2123584f3fe9b7c9def8902671b6ca277aea85f8\"" Nov 8 00:03:55.536619 containerd[1483]: time="2025-11-08T00:03:55.536271725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:03:55.742722 systemd-networkd[1378]: vxlan.calico: Link UP Nov 8 00:03:55.742733 systemd-networkd[1378]: vxlan.calico: Gained carrier Nov 8 00:03:55.900340 containerd[1483]: time="2025-11-08T00:03:55.900242411Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:03:55.902472 containerd[1483]: time="2025-11-08T00:03:55.901918924Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:03:55.902472 containerd[1483]: time="2025-11-08T00:03:55.902035163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:03:55.904994 kubelet[2577]: E1108 00:03:55.904906 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:03:55.905819 kubelet[2577]: E1108 00:03:55.905463 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:03:55.907350 kubelet[2577]: E1108 00:03:55.907243 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6eccc3e646eb4756b217ff171cbc1340,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w6lfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7546c5f69c-s8fw9_calico-system(1f74b08a-68be-4d64-8b67-dfbe823cdd4c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:03:55.910897 containerd[1483]: time="2025-11-08T00:03:55.910662164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:03:56.264661 containerd[1483]: time="2025-11-08T00:03:56.264551864Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:03:56.267554 containerd[1483]: time="2025-11-08T00:03:56.267439896Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:03:56.267729 containerd[1483]: time="2025-11-08T00:03:56.267599656Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:03:56.268035 kubelet[2577]: E1108 00:03:56.267980 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:03:56.268132 kubelet[2577]: E1108 00:03:56.268052 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:03:56.269030 kubelet[2577]: E1108 00:03:56.268219 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6lfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7546c5f69c-s8fw9_calico-system(1f74b08a-68be-4d64-8b67-dfbe823cdd4c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:03:56.269821 kubelet[2577]: E1108 00:03:56.269663 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546c5f69c-s8fw9" podUID="1f74b08a-68be-4d64-8b67-dfbe823cdd4c" Nov 8 00:03:56.664574 kubelet[2577]: E1108 00:03:56.663668 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546c5f69c-s8fw9" podUID="1f74b08a-68be-4d64-8b67-dfbe823cdd4c" Nov 8 00:03:57.262248 systemd-networkd[1378]: calif33ac1142f4: Gained IPv6LL Nov 8 00:03:57.518348 systemd-networkd[1378]: vxlan.calico: Gained IPv6LL Nov 8 00:03:58.363684 containerd[1483]: time="2025-11-08T00:03:58.363179437Z" level=info msg="StopPodSandbox for \"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a\"" Nov 8 00:03:58.475588 containerd[1483]: 2025-11-08 00:03:58.427 [INFO][4159] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" Nov 8 00:03:58.475588 containerd[1483]: 2025-11-08 00:03:58.428 [INFO][4159] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" iface="eth0" netns="/var/run/netns/cni-87b61e02-a4c9-4931-cf3f-4458679b4c46" Nov 8 00:03:58.475588 containerd[1483]: 2025-11-08 00:03:58.428 [INFO][4159] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" iface="eth0" netns="/var/run/netns/cni-87b61e02-a4c9-4931-cf3f-4458679b4c46" Nov 8 00:03:58.475588 containerd[1483]: 2025-11-08 00:03:58.428 [INFO][4159] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" iface="eth0" netns="/var/run/netns/cni-87b61e02-a4c9-4931-cf3f-4458679b4c46" Nov 8 00:03:58.475588 containerd[1483]: 2025-11-08 00:03:58.428 [INFO][4159] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" Nov 8 00:03:58.475588 containerd[1483]: 2025-11-08 00:03:58.428 [INFO][4159] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" Nov 8 00:03:58.475588 containerd[1483]: 2025-11-08 00:03:58.454 [INFO][4166] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" HandleID="k8s-pod-network.bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-eth0" Nov 8 00:03:58.475588 containerd[1483]: 2025-11-08 00:03:58.454 [INFO][4166] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:03:58.475588 containerd[1483]: 2025-11-08 00:03:58.455 [INFO][4166] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:03:58.475588 containerd[1483]: 2025-11-08 00:03:58.467 [WARNING][4166] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" HandleID="k8s-pod-network.bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-eth0" Nov 8 00:03:58.475588 containerd[1483]: 2025-11-08 00:03:58.467 [INFO][4166] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" HandleID="k8s-pod-network.bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-eth0" Nov 8 00:03:58.475588 containerd[1483]: 2025-11-08 00:03:58.469 [INFO][4166] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:03:58.475588 containerd[1483]: 2025-11-08 00:03:58.471 [INFO][4159] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" Nov 8 00:03:58.476468 containerd[1483]: time="2025-11-08T00:03:58.476178608Z" level=info msg="TearDown network for sandbox \"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a\" successfully" Nov 8 00:03:58.476468 containerd[1483]: time="2025-11-08T00:03:58.476218288Z" level=info msg="StopPodSandbox for \"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a\" returns successfully" Nov 8 00:03:58.477374 containerd[1483]: time="2025-11-08T00:03:58.477339049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bbbbfdffc-b22vr,Uid:36248f5d-e7be-4c9e-8bf1-2e53872f633b,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:03:58.478166 systemd[1]: run-netns-cni\x2d87b61e02\x2da4c9\x2d4931\x2dcf3f\x2d4458679b4c46.mount: Deactivated successfully. 
Nov 8 00:03:58.635021 systemd-networkd[1378]: cali0929bfd3111: Link UP Nov 8 00:03:58.635224 systemd-networkd[1378]: cali0929bfd3111: Gained carrier Nov 8 00:03:58.656979 containerd[1483]: 2025-11-08 00:03:58.537 [INFO][4173] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-eth0 calico-apiserver-5bbbbfdffc- calico-apiserver 36248f5d-e7be-4c9e-8bf1-2e53872f633b 934 0 2025-11-08 00:03:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bbbbfdffc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-8957f209ae calico-apiserver-5bbbbfdffc-b22vr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0929bfd3111 [] [] }} ContainerID="f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbbfdffc-b22vr" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-" Nov 8 00:03:58.656979 containerd[1483]: 2025-11-08 00:03:58.537 [INFO][4173] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbbfdffc-b22vr" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-eth0" Nov 8 00:03:58.656979 containerd[1483]: 2025-11-08 00:03:58.568 [INFO][4185] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33" HandleID="k8s-pod-network.f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-eth0" Nov 8 00:03:58.656979 containerd[1483]: 2025-11-08 00:03:58.568 [INFO][4185] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33" HandleID="k8s-pod-network.f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-8957f209ae", "pod":"calico-apiserver-5bbbbfdffc-b22vr", "timestamp":"2025-11-08 00:03:58.56859273 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8957f209ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:03:58.656979 containerd[1483]: 2025-11-08 00:03:58.568 [INFO][4185] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:03:58.656979 containerd[1483]: 2025-11-08 00:03:58.568 [INFO][4185] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:03:58.656979 containerd[1483]: 2025-11-08 00:03:58.568 [INFO][4185] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8957f209ae' Nov 8 00:03:58.656979 containerd[1483]: 2025-11-08 00:03:58.582 [INFO][4185] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:58.656979 containerd[1483]: 2025-11-08 00:03:58.588 [INFO][4185] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:58.656979 containerd[1483]: 2025-11-08 00:03:58.595 [INFO][4185] ipam/ipam.go 511: Trying affinity for 192.168.34.0/26 host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:58.656979 containerd[1483]: 2025-11-08 00:03:58.598 [INFO][4185] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.0/26 host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:58.656979 containerd[1483]: 2025-11-08 00:03:58.601 [INFO][4185] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:58.656979 containerd[1483]: 2025-11-08 00:03:58.602 [INFO][4185] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:58.656979 containerd[1483]: 2025-11-08 00:03:58.604 [INFO][4185] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33 Nov 8 00:03:58.656979 containerd[1483]: 2025-11-08 00:03:58.613 [INFO][4185] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:58.656979 containerd[1483]: 2025-11-08 00:03:58.626 [INFO][4185] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.2/26] block=192.168.34.0/26 handle="k8s-pod-network.f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:58.656979 containerd[1483]: 2025-11-08 00:03:58.626 [INFO][4185] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.2/26] handle="k8s-pod-network.f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:58.656979 containerd[1483]: 2025-11-08 00:03:58.627 [INFO][4185] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:03:58.656979 containerd[1483]: 2025-11-08 00:03:58.627 [INFO][4185] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.2/26] IPv6=[] ContainerID="f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33" HandleID="k8s-pod-network.f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-eth0" Nov 8 00:03:58.657640 containerd[1483]: 2025-11-08 00:03:58.630 [INFO][4173] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbbfdffc-b22vr" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-eth0", GenerateName:"calico-apiserver-5bbbbfdffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"36248f5d-e7be-4c9e-8bf1-2e53872f633b", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bbbbfdffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"", Pod:"calico-apiserver-5bbbbfdffc-b22vr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0929bfd3111", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:03:58.657640 containerd[1483]: 2025-11-08 00:03:58.630 [INFO][4173] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.2/32] ContainerID="f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbbfdffc-b22vr" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-eth0" Nov 8 00:03:58.657640 containerd[1483]: 2025-11-08 00:03:58.630 [INFO][4173] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0929bfd3111 ContainerID="f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbbfdffc-b22vr" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-eth0" Nov 8 00:03:58.657640 containerd[1483]: 2025-11-08 00:03:58.634 [INFO][4173] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbbfdffc-b22vr" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-eth0" Nov 8 00:03:58.657640 containerd[1483]: 2025-11-08 00:03:58.636 
[INFO][4173] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbbfdffc-b22vr" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-eth0", GenerateName:"calico-apiserver-5bbbbfdffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"36248f5d-e7be-4c9e-8bf1-2e53872f633b", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bbbbfdffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33", Pod:"calico-apiserver-5bbbbfdffc-b22vr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0929bfd3111", MAC:"62:7a:3b:19:37:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:03:58.657640 containerd[1483]: 2025-11-08 00:03:58.651 [INFO][4173] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbbfdffc-b22vr" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-eth0" Nov 8 00:03:58.679016 containerd[1483]: time="2025-11-08T00:03:58.678021460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:03:58.679016 containerd[1483]: time="2025-11-08T00:03:58.678170220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:03:58.679016 containerd[1483]: time="2025-11-08T00:03:58.678233060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:03:58.679016 containerd[1483]: time="2025-11-08T00:03:58.678429460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:03:58.711215 systemd[1]: Started cri-containerd-f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33.scope - libcontainer container f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33. 
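At this point the pod's network is fully plumbed: the host side of the veth pair is cali0929bfd3111 with MAC 62:7a:3b:19:37:ab, and the sandbox runs under the cri-containerd-f88b….scope unit. The link-state events systemd-networkd logs later ("Link UP", "Gained carrier", "Gained IPv6LL") can be cross-checked on the node through standard Linux sysfs attributes; the sketch below is a minimal illustrative probe, with the interface name taken from the log above:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Host-side veth name from the CNI log above; pass a different one as argv[1].
	iface := "cali0929bfd3111"
	if len(os.Args) > 1 {
		iface = os.Args[1]
	}
	for _, attr := range []string{"operstate", "carrier", "address", "mtu"} {
		b, err := os.ReadFile(filepath.Join("/sys/class/net", iface, attr))
		if err != nil {
			// e.g. reading "carrier" on a downed link returns EINVAL
			fmt.Fprintf(os.Stderr, "%s: %v\n", attr, err)
			continue
		}
		fmt.Printf("%s/%s = %s\n", iface, attr, strings.TrimSpace(string(b)))
	}
}
```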
Nov 8 00:03:58.756469 containerd[1483]: time="2025-11-08T00:03:58.756429376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bbbbfdffc-b22vr,Uid:36248f5d-e7be-4c9e-8bf1-2e53872f633b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33\"" Nov 8 00:03:58.758513 containerd[1483]: time="2025-11-08T00:03:58.758291297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:03:59.116241 containerd[1483]: time="2025-11-08T00:03:59.116175478Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:03:59.118174 containerd[1483]: time="2025-11-08T00:03:59.118096602Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:03:59.118453 containerd[1483]: time="2025-11-08T00:03:59.118278322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:03:59.118582 kubelet[2577]: E1108 00:03:59.118512 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:03:59.119243 kubelet[2577]: E1108 00:03:59.118586 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:03:59.119243 kubelet[2577]: E1108 00:03:59.118801 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzlmn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bbbbfdffc-b22vr_calico-apiserver(36248f5d-e7be-4c9e-8bf1-2e53872f633b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:03:59.120214 kubelet[2577]: E1108 00:03:59.120041 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-b22vr" podUID="36248f5d-e7be-4c9e-8bf1-2e53872f633b" Nov 8 00:03:59.365922 containerd[1483]: time="2025-11-08T00:03:59.365811138Z" level=info msg="StopPodSandbox for \"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469\"" Nov 8 00:03:59.490210 containerd[1483]: 2025-11-08 00:03:59.432 [INFO][4253] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" Nov 8 00:03:59.490210 containerd[1483]: 2025-11-08 00:03:59.433 [INFO][4253] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" iface="eth0" netns="/var/run/netns/cni-b8aeee46-6e12-5d80-8aef-6ce52f545838" Nov 8 00:03:59.490210 containerd[1483]: 2025-11-08 00:03:59.433 [INFO][4253] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" iface="eth0" netns="/var/run/netns/cni-b8aeee46-6e12-5d80-8aef-6ce52f545838" Nov 8 00:03:59.490210 containerd[1483]: 2025-11-08 00:03:59.435 [INFO][4253] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" iface="eth0" netns="/var/run/netns/cni-b8aeee46-6e12-5d80-8aef-6ce52f545838" Nov 8 00:03:59.490210 containerd[1483]: 2025-11-08 00:03:59.435 [INFO][4253] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" Nov 8 00:03:59.490210 containerd[1483]: 2025-11-08 00:03:59.435 [INFO][4253] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" Nov 8 00:03:59.490210 containerd[1483]: 2025-11-08 00:03:59.469 [INFO][4260] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" HandleID="k8s-pod-network.c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" Workload="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-eth0" Nov 8 00:03:59.490210 containerd[1483]: 2025-11-08 00:03:59.469 [INFO][4260] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:03:59.490210 containerd[1483]: 2025-11-08 00:03:59.469 [INFO][4260] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:03:59.490210 containerd[1483]: 2025-11-08 00:03:59.482 [WARNING][4260] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" HandleID="k8s-pod-network.c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" Workload="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-eth0" Nov 8 00:03:59.490210 containerd[1483]: 2025-11-08 00:03:59.482 [INFO][4260] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" HandleID="k8s-pod-network.c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" Workload="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-eth0" Nov 8 00:03:59.490210 containerd[1483]: 2025-11-08 00:03:59.484 [INFO][4260] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:03:59.490210 containerd[1483]: 2025-11-08 00:03:59.487 [INFO][4253] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" Nov 8 00:03:59.492440 containerd[1483]: time="2025-11-08T00:03:59.492040791Z" level=info msg="TearDown network for sandbox \"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469\" successfully" Nov 8 00:03:59.492440 containerd[1483]: time="2025-11-08T00:03:59.492077391Z" level=info msg="StopPodSandbox for \"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469\" returns successfully" Nov 8 00:03:59.495837 containerd[1483]: time="2025-11-08T00:03:59.495415437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q8wbq,Uid:6b199b53-44ba-445d-8690-b906dab10cbb,Namespace:kube-system,Attempt:1,}" Nov 8 00:03:59.496645 systemd[1]: run-netns-cni\x2db8aeee46\x2d6e12\x2d5d80\x2d8aef\x2d6ce52f545838.mount: Deactivated successfully. 
Nov 8 00:03:59.651605 systemd-networkd[1378]: cali03cc715cf58: Link UP Nov 8 00:03:59.652443 systemd-networkd[1378]: cali03cc715cf58: Gained carrier Nov 8 00:03:59.675029 containerd[1483]: 2025-11-08 00:03:59.565 [INFO][4267] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-eth0 coredns-674b8bbfcf- kube-system 6b199b53-44ba-445d-8690-b906dab10cbb 944 0 2025-11-08 00:03:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-8957f209ae coredns-674b8bbfcf-q8wbq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali03cc715cf58 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f" Namespace="kube-system" Pod="coredns-674b8bbfcf-q8wbq" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-" Nov 8 00:03:59.675029 containerd[1483]: 2025-11-08 00:03:59.566 [INFO][4267] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f" Namespace="kube-system" Pod="coredns-674b8bbfcf-q8wbq" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-eth0" Nov 8 00:03:59.675029 containerd[1483]: 2025-11-08 00:03:59.598 [INFO][4278] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f" HandleID="k8s-pod-network.a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f" Workload="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-eth0" Nov 8 00:03:59.675029 containerd[1483]: 2025-11-08 00:03:59.598 [INFO][4278] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f" HandleID="k8s-pod-network.a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f" Workload="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b770), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-8957f209ae", "pod":"coredns-674b8bbfcf-q8wbq", "timestamp":"2025-11-08 00:03:59.598134483 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8957f209ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:03:59.675029 containerd[1483]: 2025-11-08 00:03:59.598 [INFO][4278] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:03:59.675029 containerd[1483]: 2025-11-08 00:03:59.598 [INFO][4278] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:03:59.675029 containerd[1483]: 2025-11-08 00:03:59.598 [INFO][4278] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8957f209ae' Nov 8 00:03:59.675029 containerd[1483]: 2025-11-08 00:03:59.609 [INFO][4278] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:59.675029 containerd[1483]: 2025-11-08 00:03:59.616 [INFO][4278] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:59.675029 containerd[1483]: 2025-11-08 00:03:59.622 [INFO][4278] ipam/ipam.go 511: Trying affinity for 192.168.34.0/26 host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:59.675029 containerd[1483]: 2025-11-08 00:03:59.625 [INFO][4278] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.0/26 host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:59.675029 containerd[1483]: 2025-11-08 00:03:59.627 [INFO][4278] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:59.675029 containerd[1483]: 2025-11-08 00:03:59.628 [INFO][4278] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:59.675029 containerd[1483]: 2025-11-08 00:03:59.630 [INFO][4278] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f Nov 8 00:03:59.675029 containerd[1483]: 2025-11-08 00:03:59.635 [INFO][4278] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:59.675029 containerd[1483]: 2025-11-08 00:03:59.645 [INFO][4278] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.3/26] block=192.168.34.0/26 handle="k8s-pod-network.a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:59.675029 containerd[1483]: 2025-11-08 00:03:59.645 [INFO][4278] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.3/26] handle="k8s-pod-network.a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:03:59.675029 containerd[1483]: 2025-11-08 00:03:59.645 [INFO][4278] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
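This is the second pass through the identical IPAM sequence, now claiming 192.168.34.3 for coredns-674b8bbfcf-q8wbq. Every pod in this window lands in 192.168.34.0/26 because the node holds block affinity for that one block; Calico's default block size is /26, i.e. 64 addresses per host before a second block must be claimed. A one-line check of that arithmetic (the prefix is taken from the log; the /26 default is an assumption about this cluster's IPPool config):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	p := netip.MustParsePrefix("192.168.34.0/26") // the block this node holds affinity for
	fmt.Println(1 << (32 - p.Bits()))             // 64 addresses per /26 block
}
```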
Nov 8 00:03:59.675029 containerd[1483]: 2025-11-08 00:03:59.645 [INFO][4278] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.3/26] IPv6=[] ContainerID="a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f" HandleID="k8s-pod-network.a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f" Workload="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-eth0" Nov 8 00:03:59.679003 containerd[1483]: 2025-11-08 00:03:59.647 [INFO][4267] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f" Namespace="kube-system" Pod="coredns-674b8bbfcf-q8wbq" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6b199b53-44ba-445d-8690-b906dab10cbb", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"", Pod:"coredns-674b8bbfcf-q8wbq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali03cc715cf58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:03:59.679003 containerd[1483]: 2025-11-08 00:03:59.648 [INFO][4267] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.3/32] ContainerID="a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f" Namespace="kube-system" Pod="coredns-674b8bbfcf-q8wbq" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-eth0" Nov 8 00:03:59.679003 containerd[1483]: 2025-11-08 00:03:59.648 [INFO][4267] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03cc715cf58 ContainerID="a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f" Namespace="kube-system" Pod="coredns-674b8bbfcf-q8wbq" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-eth0" Nov 8 00:03:59.679003 containerd[1483]: 2025-11-08 00:03:59.653 [INFO][4267] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-q8wbq" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-eth0" Nov 8 00:03:59.679003 containerd[1483]: 2025-11-08 00:03:59.653 [INFO][4267] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f" Namespace="kube-system" Pod="coredns-674b8bbfcf-q8wbq" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6b199b53-44ba-445d-8690-b906dab10cbb", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f", Pod:"coredns-674b8bbfcf-q8wbq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali03cc715cf58", MAC:"26:d3:e2:3e:f6:b2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:03:59.679003 containerd[1483]: 2025-11-08 00:03:59.669 [INFO][4267] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f" Namespace="kube-system" Pod="coredns-674b8bbfcf-q8wbq" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-eth0" Nov 8 00:03:59.680533 kubelet[2577]: E1108 00:03:59.677327 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-b22vr" podUID="36248f5d-e7be-4c9e-8bf1-2e53872f633b" Nov 8 00:03:59.707212 containerd[1483]: time="2025-11-08T00:03:59.705031697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:03:59.707212 containerd[1483]: time="2025-11-08T00:03:59.705085777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:03:59.707212 containerd[1483]: time="2025-11-08T00:03:59.705097057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:03:59.707212 containerd[1483]: time="2025-11-08T00:03:59.705172498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:03:59.738138 systemd[1]: Started cri-containerd-a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f.scope - libcontainer container a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f. Nov 8 00:03:59.778769 containerd[1483]: time="2025-11-08T00:03:59.778701565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q8wbq,Uid:6b199b53-44ba-445d-8690-b906dab10cbb,Namespace:kube-system,Attempt:1,} returns sandbox id \"a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f\"" Nov 8 00:03:59.784293 containerd[1483]: time="2025-11-08T00:03:59.784238216Z" level=info msg="CreateContainer within sandbox \"a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:03:59.797180 containerd[1483]: time="2025-11-08T00:03:59.797081522Z" level=info msg="CreateContainer within sandbox \"a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"afcc4e11fe3b27a7a1f67d36207e32dfabb088b12b8ee66b48f5962cecaa7f6d\"" Nov 8 00:03:59.798908 containerd[1483]: time="2025-11-08T00:03:59.798002043Z" level=info msg="StartContainer for \"afcc4e11fe3b27a7a1f67d36207e32dfabb088b12b8ee66b48f5962cecaa7f6d\"" Nov 8 00:03:59.828154 systemd[1]: Started cri-containerd-afcc4e11fe3b27a7a1f67d36207e32dfabb088b12b8ee66b48f5962cecaa7f6d.scope - libcontainer container afcc4e11fe3b27a7a1f67d36207e32dfabb088b12b8ee66b48f5962cecaa7f6d. Nov 8 00:03:59.858723 containerd[1483]: time="2025-11-08T00:03:59.858621925Z" level=info msg="StartContainer for \"afcc4e11fe3b27a7a1f67d36207e32dfabb088b12b8ee66b48f5962cecaa7f6d\" returns successfully" Nov 8 00:04:00.271050 systemd-networkd[1378]: cali0929bfd3111: Gained IPv6LL Nov 8 00:04:00.365071 containerd[1483]: time="2025-11-08T00:04:00.364432682Z" level=info msg="StopPodSandbox for \"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c\"" Nov 8 00:04:00.365071 containerd[1483]: time="2025-11-08T00:04:00.364636323Z" level=info msg="StopPodSandbox for \"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed\"" Nov 8 00:04:00.550717 containerd[1483]: 2025-11-08 00:04:00.450 [INFO][4388] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" Nov 8 00:04:00.550717 containerd[1483]: 2025-11-08 00:04:00.450 [INFO][4388] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" iface="eth0" netns="/var/run/netns/cni-3dc93b26-c7fb-b2d0-4928-7e5496ec40a5" Nov 8 00:04:00.550717 containerd[1483]: 2025-11-08 00:04:00.450 [INFO][4388] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" iface="eth0" netns="/var/run/netns/cni-3dc93b26-c7fb-b2d0-4928-7e5496ec40a5" Nov 8 00:04:00.550717 containerd[1483]: 2025-11-08 00:04:00.450 [INFO][4388] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" iface="eth0" netns="/var/run/netns/cni-3dc93b26-c7fb-b2d0-4928-7e5496ec40a5" Nov 8 00:04:00.550717 containerd[1483]: 2025-11-08 00:04:00.450 [INFO][4388] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" Nov 8 00:04:00.550717 containerd[1483]: 2025-11-08 00:04:00.450 [INFO][4388] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" Nov 8 00:04:00.550717 containerd[1483]: 2025-11-08 00:04:00.529 [INFO][4405] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" HandleID="k8s-pod-network.9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-eth0" Nov 8 00:04:00.550717 containerd[1483]: 2025-11-08 00:04:00.529 [INFO][4405] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:00.550717 containerd[1483]: 2025-11-08 00:04:00.529 [INFO][4405] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:04:00.550717 containerd[1483]: 2025-11-08 00:04:00.540 [WARNING][4405] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" HandleID="k8s-pod-network.9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-eth0" Nov 8 00:04:00.550717 containerd[1483]: 2025-11-08 00:04:00.540 [INFO][4405] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" HandleID="k8s-pod-network.9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-eth0" Nov 8 00:04:00.550717 containerd[1483]: 2025-11-08 00:04:00.544 [INFO][4405] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:04:00.550717 containerd[1483]: 2025-11-08 00:04:00.548 [INFO][4388] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" Nov 8 00:04:00.551559 containerd[1483]: time="2025-11-08T00:04:00.551418697Z" level=info msg="TearDown network for sandbox \"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c\" successfully" Nov 8 00:04:00.551559 containerd[1483]: time="2025-11-08T00:04:00.551453137Z" level=info msg="StopPodSandbox for \"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c\" returns successfully" Nov 8 00:04:00.553357 containerd[1483]: time="2025-11-08T00:04:00.553313704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c87cb4cfb-m9pm4,Uid:d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9,Namespace:calico-system,Attempt:1,}" Nov 8 00:04:00.558029 systemd[1]: run-netns-cni\x2d3dc93b26\x2dc7fb\x2db2d0\x2d4928\x2d7e5496ec40a5.mount: Deactivated successfully. 
Nov 8 00:04:00.579735 containerd[1483]: 2025-11-08 00:04:00.463 [INFO][4396] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" Nov 8 00:04:00.579735 containerd[1483]: 2025-11-08 00:04:00.463 [INFO][4396] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" iface="eth0" netns="/var/run/netns/cni-1a83ca9e-d91d-1540-7a3f-2a284b60670c" Nov 8 00:04:00.579735 containerd[1483]: 2025-11-08 00:04:00.464 [INFO][4396] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" iface="eth0" netns="/var/run/netns/cni-1a83ca9e-d91d-1540-7a3f-2a284b60670c" Nov 8 00:04:00.579735 containerd[1483]: 2025-11-08 00:04:00.464 [INFO][4396] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" iface="eth0" netns="/var/run/netns/cni-1a83ca9e-d91d-1540-7a3f-2a284b60670c" Nov 8 00:04:00.579735 containerd[1483]: 2025-11-08 00:04:00.464 [INFO][4396] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" Nov 8 00:04:00.579735 containerd[1483]: 2025-11-08 00:04:00.464 [INFO][4396] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" Nov 8 00:04:00.579735 containerd[1483]: 2025-11-08 00:04:00.532 [INFO][4411] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" HandleID="k8s-pod-network.46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-eth0" Nov 8 00:04:00.579735 containerd[1483]: 2025-11-08 00:04:00.532 [INFO][4411] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:00.579735 containerd[1483]: 2025-11-08 00:04:00.546 [INFO][4411] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:04:00.579735 containerd[1483]: 2025-11-08 00:04:00.564 [WARNING][4411] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" HandleID="k8s-pod-network.46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-eth0" Nov 8 00:04:00.579735 containerd[1483]: 2025-11-08 00:04:00.564 [INFO][4411] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" HandleID="k8s-pod-network.46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-eth0" Nov 8 00:04:00.579735 containerd[1483]: 2025-11-08 00:04:00.567 [INFO][4411] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:04:00.579735 containerd[1483]: 2025-11-08 00:04:00.570 [INFO][4396] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" Nov 8 00:04:00.583983 containerd[1483]: time="2025-11-08T00:04:00.581145921Z" level=info msg="TearDown network for sandbox \"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed\" successfully" Nov 8 00:04:00.583983 containerd[1483]: time="2025-11-08T00:04:00.581212721Z" level=info msg="StopPodSandbox for \"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed\" returns successfully" Nov 8 00:04:00.587771 systemd[1]: run-netns-cni\x2d1a83ca9e\x2dd91d\x2d1540\x2d7a3f\x2d2a284b60670c.mount: Deactivated successfully. Nov 8 00:04:00.588699 containerd[1483]: time="2025-11-08T00:04:00.588657147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bbbbfdffc-6m8tj,Uid:e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:04:00.689475 kubelet[2577]: E1108 00:04:00.689338 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-b22vr" podUID="36248f5d-e7be-4c9e-8bf1-2e53872f633b" Nov 8 00:04:00.717914 kubelet[2577]: I1108 00:04:00.717545 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-q8wbq" podStartSLOduration=47.717525558 podStartE2EDuration="47.717525558s" podCreationTimestamp="2025-11-08 00:03:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:04:00.715722952 +0000 UTC m=+55.501300925" watchObservedRunningTime="2025-11-08 00:04:00.717525558 +0000 UTC m=+55.503103411" Nov 8 00:04:00.720047 systemd-networkd[1378]: cali03cc715cf58: Gained IPv6LL Nov 8 00:04:00.842636 systemd-networkd[1378]: califd14a8c7bb4: Link UP Nov 8 00:04:00.842858 systemd-networkd[1378]: califd14a8c7bb4: Gained carrier Nov 8 00:04:00.886057 containerd[1483]: 2025-11-08 00:04:00.651 [INFO][4419] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-eth0 calico-kube-controllers-6c87cb4cfb- calico-system d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9 964 0 2025-11-08 00:03:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c87cb4cfb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-8957f209ae calico-kube-controllers-6c87cb4cfb-m9pm4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califd14a8c7bb4 [] [] }} ContainerID="fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526" Namespace="calico-system" Pod="calico-kube-controllers-6c87cb4cfb-m9pm4" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-" Nov 8 00:04:00.886057 containerd[1483]: 2025-11-08 00:04:00.651 [INFO][4419] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526" Namespace="calico-system" Pod="calico-kube-controllers-6c87cb4cfb-m9pm4" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-eth0" Nov 8 00:04:00.886057 containerd[1483]: 2025-11-08 00:04:00.730 [INFO][4442] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526" HandleID="k8s-pod-network.fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-eth0" Nov 8 00:04:00.886057 containerd[1483]: 2025-11-08 00:04:00.731 [INFO][4442] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526" HandleID="k8s-pod-network.fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024bba0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-8957f209ae", "pod":"calico-kube-controllers-6c87cb4cfb-m9pm4", "timestamp":"2025-11-08 00:04:00.730311963 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8957f209ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:04:00.886057 containerd[1483]: 2025-11-08 00:04:00.731 [INFO][4442] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:00.886057 containerd[1483]: 2025-11-08 00:04:00.731 [INFO][4442] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:04:00.886057 containerd[1483]: 2025-11-08 00:04:00.731 [INFO][4442] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8957f209ae' Nov 8 00:04:00.886057 containerd[1483]: 2025-11-08 00:04:00.759 [INFO][4442] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:00.886057 containerd[1483]: 2025-11-08 00:04:00.773 [INFO][4442] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:00.886057 containerd[1483]: 2025-11-08 00:04:00.791 [INFO][4442] ipam/ipam.go 511: Trying affinity for 192.168.34.0/26 host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:00.886057 containerd[1483]: 2025-11-08 00:04:00.799 [INFO][4442] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.0/26 host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:00.886057 containerd[1483]: 2025-11-08 00:04:00.806 [INFO][4442] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:00.886057 containerd[1483]: 2025-11-08 00:04:00.806 [INFO][4442] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:00.886057 containerd[1483]: 2025-11-08 00:04:00.810 [INFO][4442] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526 Nov 8 00:04:00.886057 containerd[1483]: 2025-11-08 00:04:00.818 [INFO][4442] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:00.886057 containerd[1483]: 2025-11-08 00:04:00.831 [INFO][4442] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.4/26] block=192.168.34.0/26 handle="k8s-pod-network.fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:00.886057 containerd[1483]: 2025-11-08 00:04:00.831 [INFO][4442] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.4/26] handle="k8s-pod-network.fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:00.886057 containerd[1483]: 2025-11-08 00:04:00.832 [INFO][4442] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:04:00.886057 containerd[1483]: 2025-11-08 00:04:00.832 [INFO][4442] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.4/26] IPv6=[] ContainerID="fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526" HandleID="k8s-pod-network.fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-eth0" Nov 8 00:04:00.887804 containerd[1483]: 2025-11-08 00:04:00.837 [INFO][4419] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526" Namespace="calico-system" Pod="calico-kube-controllers-6c87cb4cfb-m9pm4" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-eth0", GenerateName:"calico-kube-controllers-6c87cb4cfb-", Namespace:"calico-system", SelfLink:"", UID:"d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c87cb4cfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"", Pod:"calico-kube-controllers-6c87cb4cfb-m9pm4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califd14a8c7bb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:04:00.887804 containerd[1483]: 2025-11-08 00:04:00.837 [INFO][4419] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.4/32] ContainerID="fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526" Namespace="calico-system" Pod="calico-kube-controllers-6c87cb4cfb-m9pm4" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-eth0" Nov 8 00:04:00.887804 containerd[1483]: 2025-11-08 00:04:00.837 [INFO][4419] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd14a8c7bb4 ContainerID="fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526" Namespace="calico-system" Pod="calico-kube-controllers-6c87cb4cfb-m9pm4" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-eth0" Nov 8 00:04:00.887804 containerd[1483]: 2025-11-08 00:04:00.841 [INFO][4419] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526" Namespace="calico-system" Pod="calico-kube-controllers-6c87cb4cfb-m9pm4" 
WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-eth0" Nov 8 00:04:00.887804 containerd[1483]: 2025-11-08 00:04:00.843 [INFO][4419] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526" Namespace="calico-system" Pod="calico-kube-controllers-6c87cb4cfb-m9pm4" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-eth0", GenerateName:"calico-kube-controllers-6c87cb4cfb-", Namespace:"calico-system", SelfLink:"", UID:"d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c87cb4cfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526", Pod:"calico-kube-controllers-6c87cb4cfb-m9pm4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califd14a8c7bb4", MAC:"d6:f1:4a:fd:19:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:04:00.887804 containerd[1483]: 2025-11-08 00:04:00.883 [INFO][4419] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526" Namespace="calico-system" Pod="calico-kube-controllers-6c87cb4cfb-m9pm4" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-eth0" Nov 8 00:04:00.932470 containerd[1483]: time="2025-11-08T00:04:00.931767068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:04:00.932470 containerd[1483]: time="2025-11-08T00:04:00.931890949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:04:00.932470 containerd[1483]: time="2025-11-08T00:04:00.932059389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:04:00.932470 containerd[1483]: time="2025-11-08T00:04:00.932319830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:04:00.953263 systemd-networkd[1378]: cali65c1cfb4bde: Link UP Nov 8 00:04:00.953553 systemd-networkd[1378]: cali65c1cfb4bde: Gained carrier Nov 8 00:04:00.979560 systemd[1]: Started cri-containerd-fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526.scope - libcontainer container fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526. Nov 8 00:04:00.985202 containerd[1483]: 2025-11-08 00:04:00.690 [INFO][4429] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-eth0 calico-apiserver-5bbbbfdffc- calico-apiserver e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d 965 0 2025-11-08 00:03:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bbbbfdffc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-8957f209ae calico-apiserver-5bbbbfdffc-6m8tj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali65c1cfb4bde [] [] }} ContainerID="33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbbfdffc-6m8tj" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-" Nov 8 00:04:00.985202 containerd[1483]: 2025-11-08 00:04:00.690 [INFO][4429] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbbfdffc-6m8tj" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-eth0" Nov 8 00:04:00.985202 containerd[1483]: 2025-11-08 00:04:00.762 [INFO][4448] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b" HandleID="k8s-pod-network.33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-eth0" Nov 8 00:04:00.985202 containerd[1483]: 2025-11-08 00:04:00.763 [INFO][4448] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b" HandleID="k8s-pod-network.33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ddc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-8957f209ae", "pod":"calico-apiserver-5bbbbfdffc-6m8tj", "timestamp":"2025-11-08 00:04:00.762931397 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8957f209ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:04:00.985202 containerd[1483]: 2025-11-08 00:04:00.763 [INFO][4448] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:00.985202 containerd[1483]: 2025-11-08 00:04:00.832 [INFO][4448] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:04:00.985202 containerd[1483]: 2025-11-08 00:04:00.832 [INFO][4448] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8957f209ae' Nov 8 00:04:00.985202 containerd[1483]: 2025-11-08 00:04:00.865 [INFO][4448] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:00.985202 containerd[1483]: 2025-11-08 00:04:00.880 [INFO][4448] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:00.985202 containerd[1483]: 2025-11-08 00:04:00.895 [INFO][4448] ipam/ipam.go 511: Trying affinity for 192.168.34.0/26 host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:00.985202 containerd[1483]: 2025-11-08 00:04:00.901 [INFO][4448] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.0/26 host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:00.985202 containerd[1483]: 2025-11-08 00:04:00.905 [INFO][4448] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:00.985202 containerd[1483]: 2025-11-08 00:04:00.906 [INFO][4448] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:00.985202 containerd[1483]: 2025-11-08 00:04:00.909 [INFO][4448] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b Nov 8 00:04:00.985202 containerd[1483]: 2025-11-08 00:04:00.924 [INFO][4448] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:00.985202 containerd[1483]: 2025-11-08 00:04:00.941 [INFO][4448] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.5/26] block=192.168.34.0/26 handle="k8s-pod-network.33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:00.985202 containerd[1483]: 2025-11-08 00:04:00.941 [INFO][4448] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.5/26] handle="k8s-pod-network.33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:00.985202 containerd[1483]: 2025-11-08 00:04:00.941 [INFO][4448] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:04:00.985202 containerd[1483]: 2025-11-08 00:04:00.941 [INFO][4448] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.5/26] IPv6=[] ContainerID="33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b" HandleID="k8s-pod-network.33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-eth0" Nov 8 00:04:00.987587 containerd[1483]: 2025-11-08 00:04:00.947 [INFO][4429] cni-plugin/k8s.go 418: Populated endpoint ContainerID="33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbbfdffc-6m8tj" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-eth0", GenerateName:"calico-apiserver-5bbbbfdffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bbbbfdffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"", Pod:"calico-apiserver-5bbbbfdffc-6m8tj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65c1cfb4bde", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:04:00.987587 containerd[1483]: 2025-11-08 00:04:00.947 [INFO][4429] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.5/32] ContainerID="33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbbfdffc-6m8tj" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-eth0" Nov 8 00:04:00.987587 containerd[1483]: 2025-11-08 00:04:00.947 [INFO][4429] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65c1cfb4bde ContainerID="33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbbfdffc-6m8tj" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-eth0" Nov 8 00:04:00.987587 containerd[1483]: 2025-11-08 00:04:00.955 [INFO][4429] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbbfdffc-6m8tj" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-eth0" Nov 8 00:04:00.987587 containerd[1483]: 2025-11-08 00:04:00.955 
[INFO][4429] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbbfdffc-6m8tj" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-eth0", GenerateName:"calico-apiserver-5bbbbfdffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bbbbfdffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b", Pod:"calico-apiserver-5bbbbfdffc-6m8tj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65c1cfb4bde", MAC:"16:1e:9c:43:c9:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:04:00.987587 containerd[1483]: 2025-11-08 00:04:00.978 [INFO][4429] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbbfdffc-6m8tj" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-eth0" Nov 8 00:04:01.018160 containerd[1483]: time="2025-11-08T00:04:01.017924115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:04:01.018561 containerd[1483]: time="2025-11-08T00:04:01.018148756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:04:01.018561 containerd[1483]: time="2025-11-08T00:04:01.018164396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:04:01.018561 containerd[1483]: time="2025-11-08T00:04:01.018276356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:04:01.054596 systemd[1]: Started cri-containerd-33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b.scope - libcontainer container 33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b. 
Nov 8 00:04:01.069889 containerd[1483]: time="2025-11-08T00:04:01.069672011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c87cb4cfb-m9pm4,Uid:d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9,Namespace:calico-system,Attempt:1,} returns sandbox id \"fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526\"" Nov 8 00:04:01.078904 containerd[1483]: time="2025-11-08T00:04:01.078703416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:04:01.125669 containerd[1483]: time="2025-11-08T00:04:01.125628128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bbbbfdffc-6m8tj,Uid:e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b\"" Nov 8 00:04:01.371006 containerd[1483]: time="2025-11-08T00:04:01.370686422Z" level=info msg="StopPodSandbox for \"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd\"" Nov 8 00:04:01.371006 containerd[1483]: time="2025-11-08T00:04:01.370812902Z" level=info msg="StopPodSandbox for \"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3\"" Nov 8 00:04:01.373161 containerd[1483]: time="2025-11-08T00:04:01.370686502Z" level=info msg="StopPodSandbox for \"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8\"" Nov 8 00:04:01.433210 containerd[1483]: time="2025-11-08T00:04:01.433142971Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:04:01.439082 containerd[1483]: time="2025-11-08T00:04:01.439023440Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:04:01.439764 containerd[1483]: time="2025-11-08T00:04:01.439569043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:04:01.441005 kubelet[2577]: E1108 00:04:01.439838 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:04:01.441005 kubelet[2577]: E1108 00:04:01.439892 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:04:01.441005 kubelet[2577]: E1108 00:04:01.440111 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgh2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c87cb4cfb-m9pm4_calico-system(d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:04:01.444715 kubelet[2577]: E1108 00:04:01.443893 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c87cb4cfb-m9pm4" podUID="d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9" Nov 8 00:04:01.446755 containerd[1483]: time="2025-11-08T00:04:01.444992230Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:04:01.587909 containerd[1483]: 2025-11-08 00:04:01.471 [INFO][4596] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" Nov 8 00:04:01.587909 containerd[1483]: 2025-11-08 00:04:01.471 [INFO][4596] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" iface="eth0" netns="/var/run/netns/cni-6a137e06-dfca-fdde-bbf1-865778eefbf9" Nov 8 00:04:01.587909 containerd[1483]: 2025-11-08 00:04:01.472 [INFO][4596] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" iface="eth0" netns="/var/run/netns/cni-6a137e06-dfca-fdde-bbf1-865778eefbf9" Nov 8 00:04:01.587909 containerd[1483]: 2025-11-08 00:04:01.472 [INFO][4596] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" iface="eth0" netns="/var/run/netns/cni-6a137e06-dfca-fdde-bbf1-865778eefbf9" Nov 8 00:04:01.587909 containerd[1483]: 2025-11-08 00:04:01.472 [INFO][4596] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" Nov 8 00:04:01.587909 containerd[1483]: 2025-11-08 00:04:01.472 [INFO][4596] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" Nov 8 00:04:01.587909 containerd[1483]: 2025-11-08 00:04:01.550 [INFO][4614] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" HandleID="k8s-pod-network.8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" Workload="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-eth0" Nov 8 00:04:01.587909 containerd[1483]: 2025-11-08 00:04:01.551 [INFO][4614] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:01.587909 containerd[1483]: 2025-11-08 00:04:01.551 [INFO][4614] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:04:01.587909 containerd[1483]: 2025-11-08 00:04:01.576 [WARNING][4614] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" HandleID="k8s-pod-network.8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" Workload="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-eth0" Nov 8 00:04:01.587909 containerd[1483]: 2025-11-08 00:04:01.576 [INFO][4614] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" HandleID="k8s-pod-network.8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" Workload="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-eth0" Nov 8 00:04:01.587909 containerd[1483]: 2025-11-08 00:04:01.579 [INFO][4614] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:04:01.587909 containerd[1483]: 2025-11-08 00:04:01.583 [INFO][4596] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" Nov 8 00:04:01.593829 containerd[1483]: time="2025-11-08T00:04:01.589598146Z" level=info msg="TearDown network for sandbox \"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3\" successfully" Nov 8 00:04:01.593829 containerd[1483]: time="2025-11-08T00:04:01.589643586Z" level=info msg="StopPodSandbox for \"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3\" returns successfully" Nov 8 00:04:01.593829 containerd[1483]: time="2025-11-08T00:04:01.593427965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tzfvr,Uid:879786e9-e895-409c-b334-437a5736f56f,Namespace:kube-system,Attempt:1,}" Nov 8 00:04:01.591861 systemd[1]: run-netns-cni\x2d6a137e06\x2ddfca\x2dfdde\x2dbbf1\x2d865778eefbf9.mount: Deactivated successfully. Nov 8 00:04:01.626866 containerd[1483]: 2025-11-08 00:04:01.535 [INFO][4595] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" Nov 8 00:04:01.626866 containerd[1483]: 2025-11-08 00:04:01.538 [INFO][4595] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" iface="eth0" netns="/var/run/netns/cni-b57d7a98-8b10-8026-8098-fe2bcb18cd8d" Nov 8 00:04:01.626866 containerd[1483]: 2025-11-08 00:04:01.539 [INFO][4595] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" iface="eth0" netns="/var/run/netns/cni-b57d7a98-8b10-8026-8098-fe2bcb18cd8d" Nov 8 00:04:01.626866 containerd[1483]: 2025-11-08 00:04:01.540 [INFO][4595] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" iface="eth0" netns="/var/run/netns/cni-b57d7a98-8b10-8026-8098-fe2bcb18cd8d" Nov 8 00:04:01.626866 containerd[1483]: 2025-11-08 00:04:01.540 [INFO][4595] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" Nov 8 00:04:01.626866 containerd[1483]: 2025-11-08 00:04:01.540 [INFO][4595] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" Nov 8 00:04:01.626866 containerd[1483]: 2025-11-08 00:04:01.581 [INFO][4627] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" HandleID="k8s-pod-network.ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" Workload="ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-eth0" Nov 8 00:04:01.626866 containerd[1483]: 2025-11-08 00:04:01.584 [INFO][4627] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:01.626866 containerd[1483]: 2025-11-08 00:04:01.585 [INFO][4627] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:04:01.626866 containerd[1483]: 2025-11-08 00:04:01.605 [WARNING][4627] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" HandleID="k8s-pod-network.ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" Workload="ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-eth0" Nov 8 00:04:01.626866 containerd[1483]: 2025-11-08 00:04:01.605 [INFO][4627] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" HandleID="k8s-pod-network.ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" Workload="ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-eth0" Nov 8 00:04:01.626866 containerd[1483]: 2025-11-08 00:04:01.609 [INFO][4627] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:04:01.626866 containerd[1483]: 2025-11-08 00:04:01.614 [INFO][4595] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" Nov 8 00:04:01.632802 systemd[1]: run-netns-cni\x2db57d7a98\x2d8b10\x2d8026\x2d8098\x2dfe2bcb18cd8d.mount: Deactivated successfully. Nov 8 00:04:01.633237 containerd[1483]: time="2025-11-08T00:04:01.626857890Z" level=info msg="TearDown network for sandbox \"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd\" successfully" Nov 8 00:04:01.633362 containerd[1483]: time="2025-11-08T00:04:01.633237282Z" level=info msg="StopPodSandbox for \"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd\" returns successfully" Nov 8 00:04:01.636760 containerd[1483]: time="2025-11-08T00:04:01.636685699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f6hbs,Uid:6a33abd5-ae6f-4042-bbab-6affce6535d7,Namespace:calico-system,Attempt:1,}" Nov 8 00:04:01.646019 containerd[1483]: 2025-11-08 00:04:01.525 [INFO][4604] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" Nov 8 00:04:01.646019 containerd[1483]: 2025-11-08 00:04:01.525 [INFO][4604] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" iface="eth0" netns="/var/run/netns/cni-c1c0d1a6-8c33-c2cb-d8d8-18dd72b90514" Nov 8 00:04:01.646019 containerd[1483]: 2025-11-08 00:04:01.525 [INFO][4604] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" iface="eth0" netns="/var/run/netns/cni-c1c0d1a6-8c33-c2cb-d8d8-18dd72b90514" Nov 8 00:04:01.646019 containerd[1483]: 2025-11-08 00:04:01.526 [INFO][4604] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" iface="eth0" netns="/var/run/netns/cni-c1c0d1a6-8c33-c2cb-d8d8-18dd72b90514" Nov 8 00:04:01.646019 containerd[1483]: 2025-11-08 00:04:01.527 [INFO][4604] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" Nov 8 00:04:01.646019 containerd[1483]: 2025-11-08 00:04:01.527 [INFO][4604] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" Nov 8 00:04:01.646019 containerd[1483]: 2025-11-08 00:04:01.588 [INFO][4622] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" HandleID="k8s-pod-network.a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" Workload="ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-eth0" Nov 8 00:04:01.646019 containerd[1483]: 2025-11-08 00:04:01.588 [INFO][4622] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:01.646019 containerd[1483]: 2025-11-08 00:04:01.610 [INFO][4622] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:04:01.646019 containerd[1483]: 2025-11-08 00:04:01.629 [WARNING][4622] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" HandleID="k8s-pod-network.a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" Workload="ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-eth0" Nov 8 00:04:01.646019 containerd[1483]: 2025-11-08 00:04:01.629 [INFO][4622] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" HandleID="k8s-pod-network.a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" Workload="ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-eth0" Nov 8 00:04:01.646019 containerd[1483]: 2025-11-08 00:04:01.637 [INFO][4622] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:04:01.646019 containerd[1483]: 2025-11-08 00:04:01.642 [INFO][4604] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" Nov 8 00:04:01.650237 systemd[1]: run-netns-cni\x2dc1c0d1a6\x2d8c33\x2dc2cb\x2dd8d8\x2d18dd72b90514.mount: Deactivated successfully. 
Nov 8 00:04:01.652312 containerd[1483]: time="2025-11-08T00:04:01.651180131Z" level=info msg="TearDown network for sandbox \"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8\" successfully" Nov 8 00:04:01.652511 containerd[1483]: time="2025-11-08T00:04:01.652314616Z" level=info msg="StopPodSandbox for \"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8\" returns successfully" Nov 8 00:04:01.654030 containerd[1483]: time="2025-11-08T00:04:01.653921424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cxpqj,Uid:8027ad8b-f646-4861-aed8-35b2e3d85698,Namespace:calico-system,Attempt:1,}" Nov 8 00:04:01.702790 kubelet[2577]: E1108 00:04:01.702154 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c87cb4cfb-m9pm4" podUID="d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9" Nov 8 00:04:01.821018 containerd[1483]: time="2025-11-08T00:04:01.820453969Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:04:01.824374 containerd[1483]: time="2025-11-08T00:04:01.824319148Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:04:01.826069 containerd[1483]: time="2025-11-08T00:04:01.824643030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:04:01.826201 kubelet[2577]: E1108 00:04:01.824796 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:04:01.826201 kubelet[2577]: E1108 00:04:01.824844 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:04:01.826201 kubelet[2577]: E1108 00:04:01.824990 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8wbvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bbbbfdffc-6m8tj_calico-apiserver(e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:04:01.827085 kubelet[2577]: E1108 00:04:01.826882 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-6m8tj" podUID="e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d" Nov 8 00:04:01.910907 systemd-networkd[1378]: calie12dc25f3c9: Link UP Nov 8 00:04:01.912617 systemd-networkd[1378]: calie12dc25f3c9: Gained carrier Nov 8 00:04:01.940647 containerd[1483]: 2025-11-08 00:04:01.706 [INFO][4636] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-eth0 coredns-674b8bbfcf- kube-system 879786e9-e895-409c-b334-437a5736f56f 992 0 2025-11-08 00:03:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-8957f209ae coredns-674b8bbfcf-tzfvr eth0 coredns [] [] [kns.kube-system 
ksa.kube-system.coredns] calie12dc25f3c9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05" Namespace="kube-system" Pod="coredns-674b8bbfcf-tzfvr" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-" Nov 8 00:04:01.940647 containerd[1483]: 2025-11-08 00:04:01.707 [INFO][4636] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05" Namespace="kube-system" Pod="coredns-674b8bbfcf-tzfvr" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-eth0" Nov 8 00:04:01.940647 containerd[1483]: 2025-11-08 00:04:01.804 [INFO][4671] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05" HandleID="k8s-pod-network.563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05" Workload="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-eth0" Nov 8 00:04:01.940647 containerd[1483]: 2025-11-08 00:04:01.805 [INFO][4671] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05" HandleID="k8s-pod-network.563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05" Workload="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-8957f209ae", "pod":"coredns-674b8bbfcf-tzfvr", "timestamp":"2025-11-08 00:04:01.804988852 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8957f209ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:04:01.940647 containerd[1483]: 2025-11-08 00:04:01.805 [INFO][4671] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:01.940647 containerd[1483]: 2025-11-08 00:04:01.805 [INFO][4671] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:04:01.940647 containerd[1483]: 2025-11-08 00:04:01.805 [INFO][4671] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8957f209ae' Nov 8 00:04:01.940647 containerd[1483]: 2025-11-08 00:04:01.835 [INFO][4671] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:01.940647 containerd[1483]: 2025-11-08 00:04:01.848 [INFO][4671] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:01.940647 containerd[1483]: 2025-11-08 00:04:01.859 [INFO][4671] ipam/ipam.go 511: Trying affinity for 192.168.34.0/26 host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:01.940647 containerd[1483]: 2025-11-08 00:04:01.863 [INFO][4671] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.0/26 host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:01.940647 containerd[1483]: 2025-11-08 00:04:01.868 [INFO][4671] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:01.940647 containerd[1483]: 2025-11-08 00:04:01.869 [INFO][4671] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:01.940647 containerd[1483]: 2025-11-08 00:04:01.871 [INFO][4671] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05 Nov 8 00:04:01.940647 containerd[1483]: 2025-11-08 00:04:01.881 [INFO][4671] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:01.940647 containerd[1483]: 2025-11-08 00:04:01.893 [INFO][4671] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.6/26] block=192.168.34.0/26 handle="k8s-pod-network.563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:01.940647 containerd[1483]: 2025-11-08 00:04:01.894 [INFO][4671] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.6/26] handle="k8s-pod-network.563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:01.940647 containerd[1483]: 2025-11-08 00:04:01.894 [INFO][4671] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:04:01.940647 containerd[1483]: 2025-11-08 00:04:01.894 [INFO][4671] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.6/26] IPv6=[] ContainerID="563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05" HandleID="k8s-pod-network.563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05" Workload="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-eth0" Nov 8 00:04:01.941649 containerd[1483]: 2025-11-08 00:04:01.899 [INFO][4636] cni-plugin/k8s.go 418: Populated endpoint ContainerID="563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05" Namespace="kube-system" Pod="coredns-674b8bbfcf-tzfvr" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"879786e9-e895-409c-b334-437a5736f56f", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"", Pod:"coredns-674b8bbfcf-tzfvr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie12dc25f3c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:04:01.941649 containerd[1483]: 2025-11-08 00:04:01.899 [INFO][4636] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.6/32] ContainerID="563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05" Namespace="kube-system" Pod="coredns-674b8bbfcf-tzfvr" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-eth0" Nov 8 00:04:01.941649 containerd[1483]: 2025-11-08 00:04:01.899 [INFO][4636] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie12dc25f3c9 ContainerID="563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05" Namespace="kube-system" Pod="coredns-674b8bbfcf-tzfvr" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-eth0" Nov 8 00:04:01.941649 containerd[1483]: 2025-11-08 00:04:01.914 [INFO][4636] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-tzfvr" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-eth0" Nov 8 00:04:01.941649 containerd[1483]: 2025-11-08 00:04:01.914 [INFO][4636] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05" Namespace="kube-system" Pod="coredns-674b8bbfcf-tzfvr" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"879786e9-e895-409c-b334-437a5736f56f", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05", Pod:"coredns-674b8bbfcf-tzfvr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie12dc25f3c9", MAC:"c6:1f:8d:85:34:28", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:04:01.941649 containerd[1483]: 2025-11-08 00:04:01.930 [INFO][4636] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05" Namespace="kube-system" Pod="coredns-674b8bbfcf-tzfvr" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-eth0" Nov 8 00:04:01.983830 containerd[1483]: time="2025-11-08T00:04:01.983223295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:04:01.983830 containerd[1483]: time="2025-11-08T00:04:01.983726017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:04:01.984298 containerd[1483]: time="2025-11-08T00:04:01.983886338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:04:01.984901 containerd[1483]: time="2025-11-08T00:04:01.984553341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:04:02.018316 systemd[1]: Started cri-containerd-563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05.scope - libcontainer container 563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05. Nov 8 00:04:02.019730 systemd-networkd[1378]: calicdd7fe47296: Link UP Nov 8 00:04:02.021365 systemd-networkd[1378]: calicdd7fe47296: Gained carrier Nov 8 00:04:02.054979 containerd[1483]: 2025-11-08 00:04:01.783 [INFO][4648] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-eth0 csi-node-driver- calico-system 6a33abd5-ae6f-4042-bbab-6affce6535d7 994 0 2025-11-08 00:03:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-8957f209ae csi-node-driver-f6hbs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicdd7fe47296 [] [] }} ContainerID="834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13" Namespace="calico-system" Pod="csi-node-driver-f6hbs" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-" Nov 8 00:04:02.054979 containerd[1483]: 2025-11-08 00:04:01.783 [INFO][4648] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13" Namespace="calico-system" Pod="csi-node-driver-f6hbs" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-eth0" Nov 8 00:04:02.054979 containerd[1483]: 2025-11-08 00:04:01.860 [INFO][4685] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13" HandleID="k8s-pod-network.834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13" Workload="ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-eth0" Nov 8 00:04:02.054979 containerd[1483]: 2025-11-08 00:04:01.860 [INFO][4685] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13" HandleID="k8s-pod-network.834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13" Workload="ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000285a20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-8957f209ae", "pod":"csi-node-driver-f6hbs", "timestamp":"2025-11-08 00:04:01.860333246 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8957f209ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:04:02.054979 containerd[1483]: 2025-11-08 00:04:01.860 [INFO][4685] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:02.054979 containerd[1483]: 2025-11-08 00:04:01.895 [INFO][4685] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:04:02.054979 containerd[1483]: 2025-11-08 00:04:01.895 [INFO][4685] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8957f209ae' Nov 8 00:04:02.054979 containerd[1483]: 2025-11-08 00:04:01.938 [INFO][4685] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:02.054979 containerd[1483]: 2025-11-08 00:04:01.951 [INFO][4685] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:02.054979 containerd[1483]: 2025-11-08 00:04:01.960 [INFO][4685] ipam/ipam.go 511: Trying affinity for 192.168.34.0/26 host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:02.054979 containerd[1483]: 2025-11-08 00:04:01.965 [INFO][4685] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.0/26 host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:02.054979 containerd[1483]: 2025-11-08 00:04:01.972 [INFO][4685] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:02.054979 containerd[1483]: 2025-11-08 00:04:01.972 [INFO][4685] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:02.054979 containerd[1483]: 2025-11-08 00:04:01.975 [INFO][4685] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13 Nov 8 00:04:02.054979 containerd[1483]: 2025-11-08 00:04:01.989 [INFO][4685] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:02.054979 containerd[1483]: 2025-11-08 00:04:02.001 [INFO][4685] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.7/26] block=192.168.34.0/26 handle="k8s-pod-network.834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:02.054979 containerd[1483]: 2025-11-08 00:04:02.001 [INFO][4685] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.7/26] handle="k8s-pod-network.834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:02.054979 containerd[1483]: 2025-11-08 00:04:02.001 [INFO][4685] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:04:02.054979 containerd[1483]: 2025-11-08 00:04:02.001 [INFO][4685] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.7/26] IPv6=[] ContainerID="834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13" HandleID="k8s-pod-network.834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13" Workload="ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-eth0" Nov 8 00:04:02.056605 containerd[1483]: 2025-11-08 00:04:02.013 [INFO][4648] cni-plugin/k8s.go 418: Populated endpoint ContainerID="834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13" Namespace="calico-system" Pod="csi-node-driver-f6hbs" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6a33abd5-ae6f-4042-bbab-6affce6535d7", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"", Pod:"csi-node-driver-f6hbs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicdd7fe47296", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:04:02.056605 containerd[1483]: 2025-11-08 00:04:02.013 [INFO][4648] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.7/32] ContainerID="834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13" Namespace="calico-system" Pod="csi-node-driver-f6hbs" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-eth0" Nov 8 00:04:02.056605 containerd[1483]: 2025-11-08 00:04:02.013 [INFO][4648] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicdd7fe47296 ContainerID="834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13" Namespace="calico-system" Pod="csi-node-driver-f6hbs" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-eth0" Nov 8 00:04:02.056605 containerd[1483]: 2025-11-08 00:04:02.022 [INFO][4648] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13" Namespace="calico-system" Pod="csi-node-driver-f6hbs" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-eth0" Nov 8 00:04:02.056605 containerd[1483]: 2025-11-08 00:04:02.022 [INFO][4648] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13" Namespace="calico-system" Pod="csi-node-driver-f6hbs" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6a33abd5-ae6f-4042-bbab-6affce6535d7", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13", Pod:"csi-node-driver-f6hbs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicdd7fe47296", MAC:"d6:9c:6c:51:28:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:04:02.056605 containerd[1483]: 2025-11-08 00:04:02.049 [INFO][4648] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13" Namespace="calico-system" Pod="csi-node-driver-f6hbs" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-eth0" Nov 8 00:04:02.118959 containerd[1483]: time="2025-11-08T00:04:02.118133768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:04:02.118959 containerd[1483]: time="2025-11-08T00:04:02.118536250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:04:02.118959 containerd[1483]: time="2025-11-08T00:04:02.118563570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:04:02.118959 containerd[1483]: time="2025-11-08T00:04:02.118724052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:04:02.123885 containerd[1483]: time="2025-11-08T00:04:02.122902918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tzfvr,Uid:879786e9-e895-409c-b334-437a5736f56f,Namespace:kube-system,Attempt:1,} returns sandbox id \"563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05\"" Nov 8 00:04:02.134897 containerd[1483]: time="2025-11-08T00:04:02.134744313Z" level=info msg="CreateContainer within sandbox \"563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:04:02.146884 systemd-networkd[1378]: cali7a11a6a9a3d: Link UP Nov 8 00:04:02.147866 systemd-networkd[1378]: cali7a11a6a9a3d: Gained carrier Nov 8 00:04:02.175584 systemd[1]: Started cri-containerd-834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13.scope - libcontainer container 834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13. Nov 8 00:04:02.183573 containerd[1483]: time="2025-11-08T00:04:02.183527344Z" level=info msg="CreateContainer within sandbox \"563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7601c7bad051ed80ee36c0bedd85c2f64f0dcff081a5b5cd0e979b330b44f588\"" Nov 8 00:04:02.184879 containerd[1483]: time="2025-11-08T00:04:02.184831952Z" level=info msg="StartContainer for \"7601c7bad051ed80ee36c0bedd85c2f64f0dcff081a5b5cd0e979b330b44f588\"" Nov 8 00:04:02.191340 containerd[1483]: 2025-11-08 00:04:01.788 [INFO][4659] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-eth0 goldmane-666569f655- calico-system 8027ad8b-f646-4861-aed8-35b2e3d85698 993 0 2025-11-08 00:03:32 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-8957f209ae goldmane-666569f655-cxpqj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7a11a6a9a3d [] [] }} ContainerID="3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222" Namespace="calico-system" Pod="goldmane-666569f655-cxpqj" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-" Nov 8 00:04:02.191340 containerd[1483]: 2025-11-08 00:04:01.789 [INFO][4659] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222" Namespace="calico-system" Pod="goldmane-666569f655-cxpqj" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-eth0" Nov 8 00:04:02.191340 containerd[1483]: 2025-11-08 00:04:01.861 [INFO][4683] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222" HandleID="k8s-pod-network.3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222" Workload="ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-eth0" Nov 8 00:04:02.191340 containerd[1483]: 2025-11-08 00:04:01.861 [INFO][4683] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222" HandleID="k8s-pod-network.3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222" 
Workload="ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ba0c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-8957f209ae", "pod":"goldmane-666569f655-cxpqj", "timestamp":"2025-11-08 00:04:01.861327251 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8957f209ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:04:02.191340 containerd[1483]: 2025-11-08 00:04:01.861 [INFO][4683] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:02.191340 containerd[1483]: 2025-11-08 00:04:02.001 [INFO][4683] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:04:02.191340 containerd[1483]: 2025-11-08 00:04:02.002 [INFO][4683] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8957f209ae' Nov 8 00:04:02.191340 containerd[1483]: 2025-11-08 00:04:02.046 [INFO][4683] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:02.191340 containerd[1483]: 2025-11-08 00:04:02.065 [INFO][4683] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:02.191340 containerd[1483]: 2025-11-08 00:04:02.080 [INFO][4683] ipam/ipam.go 511: Trying affinity for 192.168.34.0/26 host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:02.191340 containerd[1483]: 2025-11-08 00:04:02.087 [INFO][4683] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.0/26 host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:02.191340 containerd[1483]: 2025-11-08 00:04:02.094 [INFO][4683] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:02.191340 containerd[1483]: 2025-11-08 00:04:02.095 [INFO][4683] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:02.191340 containerd[1483]: 2025-11-08 00:04:02.102 [INFO][4683] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222 Nov 8 00:04:02.191340 containerd[1483]: 2025-11-08 00:04:02.118 [INFO][4683] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:02.191340 containerd[1483]: 2025-11-08 00:04:02.137 [INFO][4683] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.8/26] block=192.168.34.0/26 handle="k8s-pod-network.3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:02.191340 containerd[1483]: 2025-11-08 00:04:02.137 [INFO][4683] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.8/26] handle="k8s-pod-network.3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222" host="ci-4081-3-6-n-8957f209ae" Nov 8 00:04:02.191340 containerd[1483]: 2025-11-08 00:04:02.137 [INFO][4683] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:04:02.191340 containerd[1483]: 2025-11-08 00:04:02.137 [INFO][4683] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.8/26] IPv6=[] ContainerID="3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222" HandleID="k8s-pod-network.3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222" Workload="ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-eth0" Nov 8 00:04:02.192791 containerd[1483]: 2025-11-08 00:04:02.143 [INFO][4659] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222" Namespace="calico-system" Pod="goldmane-666569f655-cxpqj" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8027ad8b-f646-4861-aed8-35b2e3d85698", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"", Pod:"goldmane-666569f655-cxpqj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.34.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7a11a6a9a3d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:04:02.192791 containerd[1483]: 2025-11-08 00:04:02.144 [INFO][4659] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.8/32] ContainerID="3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222" Namespace="calico-system" Pod="goldmane-666569f655-cxpqj" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-eth0" Nov 8 00:04:02.192791 containerd[1483]: 2025-11-08 00:04:02.144 [INFO][4659] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a11a6a9a3d ContainerID="3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222" Namespace="calico-system" Pod="goldmane-666569f655-cxpqj" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-eth0" Nov 8 00:04:02.192791 containerd[1483]: 2025-11-08 00:04:02.150 [INFO][4659] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222" Namespace="calico-system" Pod="goldmane-666569f655-cxpqj" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-eth0" Nov 8 00:04:02.192791 containerd[1483]: 2025-11-08 00:04:02.151 [INFO][4659] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222" 
Namespace="calico-system" Pod="goldmane-666569f655-cxpqj" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8027ad8b-f646-4861-aed8-35b2e3d85698", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222", Pod:"goldmane-666569f655-cxpqj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.34.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7a11a6a9a3d", MAC:"ca:c2:21:32:2e:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:04:02.192791 containerd[1483]: 2025-11-08 00:04:02.185 [INFO][4659] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222" Namespace="calico-system" Pod="goldmane-666569f655-cxpqj" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-eth0" Nov 8 00:04:02.222160 containerd[1483]: time="2025-11-08T00:04:02.220991622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:04:02.222491 containerd[1483]: time="2025-11-08T00:04:02.222232510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:04:02.223786 containerd[1483]: time="2025-11-08T00:04:02.223673079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:04:02.225306 containerd[1483]: time="2025-11-08T00:04:02.225205289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:04:02.228364 systemd[1]: Started cri-containerd-7601c7bad051ed80ee36c0bedd85c2f64f0dcff081a5b5cd0e979b330b44f588.scope - libcontainer container 7601c7bad051ed80ee36c0bedd85c2f64f0dcff081a5b5cd0e979b330b44f588. Nov 8 00:04:02.278790 systemd[1]: Started cri-containerd-3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222.scope - libcontainer container 3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222. 
Nov 8 00:04:02.284129 containerd[1483]: time="2025-11-08T00:04:02.284070543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f6hbs,Uid:6a33abd5-ae6f-4042-bbab-6affce6535d7,Namespace:calico-system,Attempt:1,} returns sandbox id \"834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13\"" Nov 8 00:04:02.289024 containerd[1483]: time="2025-11-08T00:04:02.288367610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:04:02.306783 containerd[1483]: time="2025-11-08T00:04:02.306722327Z" level=info msg="StartContainer for \"7601c7bad051ed80ee36c0bedd85c2f64f0dcff081a5b5cd0e979b330b44f588\" returns successfully" Nov 8 00:04:02.373015 containerd[1483]: time="2025-11-08T00:04:02.372901188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cxpqj,Uid:8027ad8b-f646-4861-aed8-35b2e3d85698,Namespace:calico-system,Attempt:1,} returns sandbox id \"3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222\"" Nov 8 00:04:02.661634 containerd[1483]: time="2025-11-08T00:04:02.661465982Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:04:02.664193 containerd[1483]: time="2025-11-08T00:04:02.664108359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:04:02.664362 containerd[1483]: time="2025-11-08T00:04:02.664300680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:04:02.666429 kubelet[2577]: E1108 00:04:02.665131 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:04:02.666429 kubelet[2577]: E1108 00:04:02.665224 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:04:02.666429 kubelet[2577]: E1108 00:04:02.665551 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v25gh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-f6hbs_calico-system(6a33abd5-ae6f-4042-bbab-6affce6535d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:04:02.668980 containerd[1483]: time="2025-11-08T00:04:02.667594941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:04:02.704323 systemd-networkd[1378]: califd14a8c7bb4: Gained IPv6LL Nov 8 00:04:02.713858 kubelet[2577]: E1108 00:04:02.713771 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-6m8tj" podUID="e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d" Nov 8 00:04:02.714305 kubelet[2577]: E1108 00:04:02.713904 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c87cb4cfb-m9pm4" 
podUID="d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9" Nov 8 00:04:02.762001 kubelet[2577]: I1108 00:04:02.761920 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tzfvr" podStartSLOduration=49.761901021 podStartE2EDuration="49.761901021s" podCreationTimestamp="2025-11-08 00:03:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:04:02.735485893 +0000 UTC m=+57.521063866" watchObservedRunningTime="2025-11-08 00:04:02.761901021 +0000 UTC m=+57.547478914" Nov 8 00:04:02.958773 systemd-networkd[1378]: calie12dc25f3c9: Gained IPv6LL Nov 8 00:04:03.013183 containerd[1483]: time="2025-11-08T00:04:03.013101994Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:04:03.017376 containerd[1483]: time="2025-11-08T00:04:03.016727182Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:04:03.017376 containerd[1483]: time="2025-11-08T00:04:03.017057704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:04:03.019755 kubelet[2577]: E1108 00:04:03.018649 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:04:03.019755 kubelet[2577]: E1108 00:04:03.018703 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:04:03.019755 kubelet[2577]: E1108 00:04:03.018927 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnlj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cxpqj_calico-system(8027ad8b-f646-4861-aed8-35b2e3d85698): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:04:03.020331 kubelet[2577]: E1108 00:04:03.020195 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cxpqj" podUID="8027ad8b-f646-4861-aed8-35b2e3d85698" Nov 8 00:04:03.024433 containerd[1483]: 
time="2025-11-08T00:04:03.021408858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:04:03.032054 systemd-networkd[1378]: cali65c1cfb4bde: Gained IPv6LL Nov 8 00:04:03.471679 containerd[1483]: time="2025-11-08T00:04:03.471595173Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:04:03.473139 containerd[1483]: time="2025-11-08T00:04:03.473017424Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:04:03.473360 containerd[1483]: time="2025-11-08T00:04:03.473066385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:04:03.473464 kubelet[2577]: E1108 00:04:03.473387 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:04:03.473464 kubelet[2577]: E1108 00:04:03.473457 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:04:03.473689 kubelet[2577]: E1108 00:04:03.473636 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v25gh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-f6hbs_calico-system(6a33abd5-ae6f-4042-bbab-6affce6535d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:04:03.476007 kubelet[2577]: E1108 00:04:03.475952 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f6hbs" podUID="6a33abd5-ae6f-4042-bbab-6affce6535d7" Nov 8 00:04:03.714877 kubelet[2577]: E1108 00:04:03.714681 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-cxpqj" podUID="8027ad8b-f646-4861-aed8-35b2e3d85698" Nov 8 00:04:03.715530 kubelet[2577]: E1108 00:04:03.715176 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f6hbs" podUID="6a33abd5-ae6f-4042-bbab-6affce6535d7" Nov 8 00:04:03.983632 systemd-networkd[1378]: calicdd7fe47296: Gained IPv6LL Nov 8 00:04:04.110148 systemd-networkd[1378]: cali7a11a6a9a3d: Gained IPv6LL Nov 8 00:04:05.354805 containerd[1483]: time="2025-11-08T00:04:05.354755230Z" level=info msg="StopPodSandbox for \"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd\"" Nov 8 00:04:05.523666 containerd[1483]: 2025-11-08 00:04:05.422 [WARNING][4903] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6a33abd5-ae6f-4042-bbab-6affce6535d7", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13", Pod:"csi-node-driver-f6hbs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicdd7fe47296", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:04:05.523666 containerd[1483]: 2025-11-08 00:04:05.422 [INFO][4903] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" Nov 8 
00:04:05.523666 containerd[1483]: 2025-11-08 00:04:05.422 [INFO][4903] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" iface="eth0" netns="" Nov 8 00:04:05.523666 containerd[1483]: 2025-11-08 00:04:05.422 [INFO][4903] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" Nov 8 00:04:05.523666 containerd[1483]: 2025-11-08 00:04:05.422 [INFO][4903] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" Nov 8 00:04:05.523666 containerd[1483]: 2025-11-08 00:04:05.487 [INFO][4912] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" HandleID="k8s-pod-network.ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" Workload="ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-eth0" Nov 8 00:04:05.523666 containerd[1483]: 2025-11-08 00:04:05.488 [INFO][4912] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:05.523666 containerd[1483]: 2025-11-08 00:04:05.488 [INFO][4912] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:04:05.523666 containerd[1483]: 2025-11-08 00:04:05.513 [WARNING][4912] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" HandleID="k8s-pod-network.ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" Workload="ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-eth0" Nov 8 00:04:05.523666 containerd[1483]: 2025-11-08 00:04:05.514 [INFO][4912] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" HandleID="k8s-pod-network.ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" Workload="ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-eth0" Nov 8 00:04:05.523666 containerd[1483]: 2025-11-08 00:04:05.517 [INFO][4912] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:04:05.523666 containerd[1483]: 2025-11-08 00:04:05.520 [INFO][4903] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" Nov 8 00:04:05.523666 containerd[1483]: time="2025-11-08T00:04:05.523189568Z" level=info msg="TearDown network for sandbox \"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd\" successfully" Nov 8 00:04:05.523666 containerd[1483]: time="2025-11-08T00:04:05.523215968Z" level=info msg="StopPodSandbox for \"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd\" returns successfully" Nov 8 00:04:05.524588 containerd[1483]: time="2025-11-08T00:04:05.524564502Z" level=info msg="RemovePodSandbox for \"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd\"" Nov 8 00:04:05.524631 containerd[1483]: time="2025-11-08T00:04:05.524598223Z" level=info msg="Forcibly stopping sandbox \"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd\"" Nov 8 00:04:05.643543 containerd[1483]: 2025-11-08 00:04:05.578 [WARNING][4926] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6a33abd5-ae6f-4042-bbab-6affce6535d7", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"834fdeb2f905906aea17c7927d4bdbfcbfd299ad085508184ddb007f30e0ff13", Pod:"csi-node-driver-f6hbs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicdd7fe47296", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:04:05.643543 containerd[1483]: 2025-11-08 00:04:05.580 [INFO][4926] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" Nov 8 00:04:05.643543 containerd[1483]: 2025-11-08 00:04:05.580 [INFO][4926] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" iface="eth0" netns="" Nov 8 00:04:05.643543 containerd[1483]: 2025-11-08 00:04:05.580 [INFO][4926] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" Nov 8 00:04:05.643543 containerd[1483]: 2025-11-08 00:04:05.580 [INFO][4926] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" Nov 8 00:04:05.643543 containerd[1483]: 2025-11-08 00:04:05.609 [INFO][4933] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" HandleID="k8s-pod-network.ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" Workload="ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-eth0" Nov 8 00:04:05.643543 containerd[1483]: 2025-11-08 00:04:05.609 [INFO][4933] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:05.643543 containerd[1483]: 2025-11-08 00:04:05.609 [INFO][4933] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:04:05.643543 containerd[1483]: 2025-11-08 00:04:05.626 [WARNING][4933] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" HandleID="k8s-pod-network.ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" Workload="ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-eth0" Nov 8 00:04:05.643543 containerd[1483]: 2025-11-08 00:04:05.626 [INFO][4933] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" HandleID="k8s-pod-network.ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" Workload="ci--4081--3--6--n--8957f209ae-k8s-csi--node--driver--f6hbs-eth0" Nov 8 00:04:05.643543 containerd[1483]: 2025-11-08 00:04:05.632 [INFO][4933] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:04:05.643543 containerd[1483]: 2025-11-08 00:04:05.638 [INFO][4926] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd" Nov 8 00:04:05.643543 containerd[1483]: time="2025-11-08T00:04:05.643237687Z" level=info msg="TearDown network for sandbox \"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd\" successfully" Nov 8 00:04:05.651782 containerd[1483]: time="2025-11-08T00:04:05.651329490Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:04:05.651782 containerd[1483]: time="2025-11-08T00:04:05.651423371Z" level=info msg="RemovePodSandbox \"ad049ba2fa039184024375aa90077cf0de3bc88b76663b662ec5d8f9c36941dd\" returns successfully" Nov 8 00:04:05.652467 containerd[1483]: time="2025-11-08T00:04:05.652434982Z" level=info msg="StopPodSandbox for \"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8\"" Nov 8 00:04:05.762077 containerd[1483]: 2025-11-08 00:04:05.703 [WARNING][4951] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8027ad8b-f646-4861-aed8-35b2e3d85698", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222", Pod:"goldmane-666569f655-cxpqj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.34.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7a11a6a9a3d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:04:05.762077 containerd[1483]: 2025-11-08 00:04:05.703 [INFO][4951] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" Nov 8 00:04:05.762077 containerd[1483]: 2025-11-08 00:04:05.703 [INFO][4951] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" iface="eth0" netns="" Nov 8 00:04:05.762077 containerd[1483]: 2025-11-08 00:04:05.703 [INFO][4951] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" Nov 8 00:04:05.762077 containerd[1483]: 2025-11-08 00:04:05.703 [INFO][4951] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" Nov 8 00:04:05.762077 containerd[1483]: 2025-11-08 00:04:05.742 [INFO][4958] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" HandleID="k8s-pod-network.a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" Workload="ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-eth0" Nov 8 00:04:05.762077 containerd[1483]: 2025-11-08 00:04:05.742 [INFO][4958] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:05.762077 containerd[1483]: 2025-11-08 00:04:05.742 [INFO][4958] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:04:05.762077 containerd[1483]: 2025-11-08 00:04:05.754 [WARNING][4958] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" HandleID="k8s-pod-network.a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" Workload="ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-eth0" Nov 8 00:04:05.762077 containerd[1483]: 2025-11-08 00:04:05.754 [INFO][4958] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" HandleID="k8s-pod-network.a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" Workload="ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-eth0" Nov 8 00:04:05.762077 containerd[1483]: 2025-11-08 00:04:05.756 [INFO][4958] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:04:05.762077 containerd[1483]: 2025-11-08 00:04:05.760 [INFO][4951] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" Nov 8 00:04:05.763505 containerd[1483]: time="2025-11-08T00:04:05.762794720Z" level=info msg="TearDown network for sandbox \"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8\" successfully" Nov 8 00:04:05.763505 containerd[1483]: time="2025-11-08T00:04:05.762835761Z" level=info msg="StopPodSandbox for \"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8\" returns successfully" Nov 8 00:04:05.764314 containerd[1483]: time="2025-11-08T00:04:05.764231295Z" level=info msg="RemovePodSandbox for \"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8\"" Nov 8 00:04:05.764415 containerd[1483]: time="2025-11-08T00:04:05.764320856Z" level=info msg="Forcibly stopping sandbox \"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8\"" Nov 8 00:04:05.862370 containerd[1483]: 2025-11-08 00:04:05.814 [WARNING][4972] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8027ad8b-f646-4861-aed8-35b2e3d85698", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"3c862798816f65c88ce61fb8a5d5839a0fbd7fb2ce425e45a80575305813e222", Pod:"goldmane-666569f655-cxpqj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.34.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7a11a6a9a3d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:04:05.862370 containerd[1483]: 2025-11-08 00:04:05.814 [INFO][4972] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" Nov 8 00:04:05.862370 containerd[1483]: 2025-11-08 00:04:05.814 [INFO][4972] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" iface="eth0" netns="" Nov 8 00:04:05.862370 containerd[1483]: 2025-11-08 00:04:05.814 [INFO][4972] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" Nov 8 00:04:05.862370 containerd[1483]: 2025-11-08 00:04:05.814 [INFO][4972] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" Nov 8 00:04:05.862370 containerd[1483]: 2025-11-08 00:04:05.841 [INFO][4979] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" HandleID="k8s-pod-network.a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" Workload="ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-eth0" Nov 8 00:04:05.862370 containerd[1483]: 2025-11-08 00:04:05.842 [INFO][4979] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:05.862370 containerd[1483]: 2025-11-08 00:04:05.842 [INFO][4979] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:04:05.862370 containerd[1483]: 2025-11-08 00:04:05.854 [WARNING][4979] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" HandleID="k8s-pod-network.a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" Workload="ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-eth0" Nov 8 00:04:05.862370 containerd[1483]: 2025-11-08 00:04:05.854 [INFO][4979] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" HandleID="k8s-pod-network.a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" Workload="ci--4081--3--6--n--8957f209ae-k8s-goldmane--666569f655--cxpqj-eth0" Nov 8 00:04:05.862370 containerd[1483]: 2025-11-08 00:04:05.856 [INFO][4979] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:04:05.862370 containerd[1483]: 2025-11-08 00:04:05.860 [INFO][4972] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8" Nov 8 00:04:05.862370 containerd[1483]: time="2025-11-08T00:04:05.862207786Z" level=info msg="TearDown network for sandbox \"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8\" successfully" Nov 8 00:04:05.867046 containerd[1483]: time="2025-11-08T00:04:05.866860594Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:04:05.867046 containerd[1483]: time="2025-11-08T00:04:05.866944355Z" level=info msg="RemovePodSandbox \"a948b50699847df9bed0596f82985afd24f1880ac29e4bc04e707a40149f5fc8\" returns successfully" Nov 8 00:04:05.867851 containerd[1483]: time="2025-11-08T00:04:05.867565961Z" level=info msg="StopPodSandbox for \"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469\"" Nov 8 00:04:05.959640 containerd[1483]: 2025-11-08 00:04:05.913 [WARNING][4993] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6b199b53-44ba-445d-8690-b906dab10cbb", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f", Pod:"coredns-674b8bbfcf-q8wbq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali03cc715cf58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:04:05.959640 containerd[1483]: 2025-11-08 00:04:05.914 [INFO][4993] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" Nov 8 00:04:05.959640 containerd[1483]: 2025-11-08 00:04:05.914 [INFO][4993] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" iface="eth0" netns="" Nov 8 00:04:05.959640 containerd[1483]: 2025-11-08 00:04:05.914 [INFO][4993] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" Nov 8 00:04:05.959640 containerd[1483]: 2025-11-08 00:04:05.914 [INFO][4993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" Nov 8 00:04:05.959640 containerd[1483]: 2025-11-08 00:04:05.937 [INFO][5000] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" HandleID="k8s-pod-network.c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" Workload="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-eth0" Nov 8 00:04:05.959640 containerd[1483]: 2025-11-08 00:04:05.938 [INFO][5000] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:05.959640 containerd[1483]: 2025-11-08 00:04:05.938 [INFO][5000] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:04:05.959640 containerd[1483]: 2025-11-08 00:04:05.949 [WARNING][5000] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" HandleID="k8s-pod-network.c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" Workload="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-eth0" Nov 8 00:04:05.959640 containerd[1483]: 2025-11-08 00:04:05.949 [INFO][5000] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" HandleID="k8s-pod-network.c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" Workload="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-eth0" Nov 8 00:04:05.959640 containerd[1483]: 2025-11-08 00:04:05.951 [INFO][5000] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:04:05.959640 containerd[1483]: 2025-11-08 00:04:05.955 [INFO][4993] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" Nov 8 00:04:05.961530 containerd[1483]: time="2025-11-08T00:04:05.959672592Z" level=info msg="TearDown network for sandbox \"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469\" successfully" Nov 8 00:04:05.961530 containerd[1483]: time="2025-11-08T00:04:05.959714112Z" level=info msg="StopPodSandbox for \"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469\" returns successfully" Nov 8 00:04:05.961530 containerd[1483]: time="2025-11-08T00:04:05.960488240Z" level=info msg="RemovePodSandbox for \"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469\"" Nov 8 00:04:05.961530 containerd[1483]: time="2025-11-08T00:04:05.960559561Z" level=info msg="Forcibly stopping sandbox \"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469\"" Nov 8 00:04:06.073485 containerd[1483]: 2025-11-08 00:04:06.020 [WARNING][5015] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6b199b53-44ba-445d-8690-b906dab10cbb", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"a73b0ad8ab1d2e66070aa634803e5a21c903f995ddba448a5efb306f3ba10c7f", Pod:"coredns-674b8bbfcf-q8wbq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali03cc715cf58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:04:06.073485 containerd[1483]: 2025-11-08 00:04:06.020 [INFO][5015] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" Nov 8 00:04:06.073485 containerd[1483]: 2025-11-08 00:04:06.020 [INFO][5015] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" iface="eth0" netns="" Nov 8 00:04:06.073485 containerd[1483]: 2025-11-08 00:04:06.020 [INFO][5015] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" Nov 8 00:04:06.073485 containerd[1483]: 2025-11-08 00:04:06.020 [INFO][5015] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" Nov 8 00:04:06.073485 containerd[1483]: 2025-11-08 00:04:06.055 [INFO][5022] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" HandleID="k8s-pod-network.c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" Workload="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-eth0" Nov 8 00:04:06.073485 containerd[1483]: 2025-11-08 00:04:06.055 [INFO][5022] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:06.073485 containerd[1483]: 2025-11-08 00:04:06.055 [INFO][5022] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:04:06.073485 containerd[1483]: 2025-11-08 00:04:06.066 [WARNING][5022] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" HandleID="k8s-pod-network.c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" Workload="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-eth0" Nov 8 00:04:06.073485 containerd[1483]: 2025-11-08 00:04:06.066 [INFO][5022] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" HandleID="k8s-pod-network.c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" Workload="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--q8wbq-eth0" Nov 8 00:04:06.073485 containerd[1483]: 2025-11-08 00:04:06.069 [INFO][5022] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:04:06.073485 containerd[1483]: 2025-11-08 00:04:06.071 [INFO][5015] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469" Nov 8 00:04:06.074120 containerd[1483]: time="2025-11-08T00:04:06.073533776Z" level=info msg="TearDown network for sandbox \"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469\" successfully" Nov 8 00:04:06.080619 containerd[1483]: time="2025-11-08T00:04:06.080445576Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:04:06.080619 containerd[1483]: time="2025-11-08T00:04:06.080527417Z" level=info msg="RemovePodSandbox \"c8b06a5121e29513d42be4171564f61fd6e6776a3087a876565d6384a09ca469\" returns successfully" Nov 8 00:04:06.081455 containerd[1483]: time="2025-11-08T00:04:06.081375507Z" level=info msg="StopPodSandbox for \"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3\"" Nov 8 00:04:06.174568 containerd[1483]: 2025-11-08 00:04:06.126 [WARNING][5044] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"879786e9-e895-409c-b334-437a5736f56f", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05", Pod:"coredns-674b8bbfcf-tzfvr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie12dc25f3c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:04:06.174568 containerd[1483]: 2025-11-08 00:04:06.127 [INFO][5044] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" Nov 8 00:04:06.174568 containerd[1483]: 2025-11-08 00:04:06.127 [INFO][5044] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" iface="eth0" netns="" Nov 8 00:04:06.174568 containerd[1483]: 2025-11-08 00:04:06.127 [INFO][5044] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" Nov 8 00:04:06.174568 containerd[1483]: 2025-11-08 00:04:06.127 [INFO][5044] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" Nov 8 00:04:06.174568 containerd[1483]: 2025-11-08 00:04:06.154 [INFO][5051] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" HandleID="k8s-pod-network.8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" Workload="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-eth0" Nov 8 00:04:06.174568 containerd[1483]: 2025-11-08 00:04:06.154 [INFO][5051] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:06.174568 containerd[1483]: 2025-11-08 00:04:06.155 [INFO][5051] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:04:06.174568 containerd[1483]: 2025-11-08 00:04:06.167 [WARNING][5051] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" HandleID="k8s-pod-network.8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" Workload="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-eth0" Nov 8 00:04:06.174568 containerd[1483]: 2025-11-08 00:04:06.167 [INFO][5051] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" HandleID="k8s-pod-network.8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" Workload="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-eth0" Nov 8 00:04:06.174568 containerd[1483]: 2025-11-08 00:04:06.170 [INFO][5051] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:04:06.174568 containerd[1483]: 2025-11-08 00:04:06.172 [INFO][5044] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" Nov 8 00:04:06.175275 containerd[1483]: time="2025-11-08T00:04:06.174620185Z" level=info msg="TearDown network for sandbox \"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3\" successfully" Nov 8 00:04:06.175275 containerd[1483]: time="2025-11-08T00:04:06.174649705Z" level=info msg="StopPodSandbox for \"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3\" returns successfully" Nov 8 00:04:06.176056 containerd[1483]: time="2025-11-08T00:04:06.175542275Z" level=info msg="RemovePodSandbox for \"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3\"" Nov 8 00:04:06.176056 containerd[1483]: time="2025-11-08T00:04:06.175588396Z" level=info msg="Forcibly stopping sandbox \"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3\"" Nov 8 00:04:06.271639 containerd[1483]: 2025-11-08 00:04:06.226 [WARNING][5065] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"879786e9-e895-409c-b334-437a5736f56f", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"563a3d5df5b40b77cebff86a43847422d9bafab15234971214d2a47452959b05", Pod:"coredns-674b8bbfcf-tzfvr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie12dc25f3c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:04:06.271639 containerd[1483]: 2025-11-08 00:04:06.229 [INFO][5065] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" Nov 8 00:04:06.271639 containerd[1483]: 2025-11-08 00:04:06.229 [INFO][5065] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" iface="eth0" netns="" Nov 8 00:04:06.271639 containerd[1483]: 2025-11-08 00:04:06.229 [INFO][5065] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" Nov 8 00:04:06.271639 containerd[1483]: 2025-11-08 00:04:06.229 [INFO][5065] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" Nov 8 00:04:06.271639 containerd[1483]: 2025-11-08 00:04:06.251 [INFO][5072] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" HandleID="k8s-pod-network.8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" Workload="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-eth0" Nov 8 00:04:06.271639 containerd[1483]: 2025-11-08 00:04:06.251 [INFO][5072] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:06.271639 containerd[1483]: 2025-11-08 00:04:06.251 [INFO][5072] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:04:06.271639 containerd[1483]: 2025-11-08 00:04:06.265 [WARNING][5072] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" HandleID="k8s-pod-network.8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" Workload="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-eth0" Nov 8 00:04:06.271639 containerd[1483]: 2025-11-08 00:04:06.265 [INFO][5072] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" HandleID="k8s-pod-network.8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" Workload="ci--4081--3--6--n--8957f209ae-k8s-coredns--674b8bbfcf--tzfvr-eth0" Nov 8 00:04:06.271639 containerd[1483]: 2025-11-08 00:04:06.267 [INFO][5072] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:04:06.271639 containerd[1483]: 2025-11-08 00:04:06.268 [INFO][5065] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3" Nov 8 00:04:06.271639 containerd[1483]: time="2025-11-08T00:04:06.270925378Z" level=info msg="TearDown network for sandbox \"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3\" successfully" Nov 8 00:04:06.296630 containerd[1483]: time="2025-11-08T00:04:06.296570314Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:04:06.296630 containerd[1483]: time="2025-11-08T00:04:06.296639115Z" level=info msg="RemovePodSandbox \"8ac16411ba1a8aa0e0b2072ceddb90aa86eddb52f54f879866f70ef4c29ddcc3\" returns successfully" Nov 8 00:04:06.298146 containerd[1483]: time="2025-11-08T00:04:06.297544925Z" level=info msg="StopPodSandbox for \"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed\"" Nov 8 00:04:06.382998 containerd[1483]: 2025-11-08 00:04:06.341 [WARNING][5086] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-eth0", GenerateName:"calico-apiserver-5bbbbfdffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bbbbfdffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b", Pod:"calico-apiserver-5bbbbfdffc-6m8tj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65c1cfb4bde", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:04:06.382998 containerd[1483]: 2025-11-08 00:04:06.341 [INFO][5086] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" Nov 8 00:04:06.382998 containerd[1483]: 2025-11-08 00:04:06.341 [INFO][5086] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" iface="eth0" netns="" Nov 8 00:04:06.382998 containerd[1483]: 2025-11-08 00:04:06.341 [INFO][5086] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" Nov 8 00:04:06.382998 containerd[1483]: 2025-11-08 00:04:06.341 [INFO][5086] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" Nov 8 00:04:06.382998 containerd[1483]: 2025-11-08 00:04:06.364 [INFO][5093] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" HandleID="k8s-pod-network.46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-eth0" Nov 8 00:04:06.382998 containerd[1483]: 2025-11-08 00:04:06.364 [INFO][5093] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:06.382998 containerd[1483]: 2025-11-08 00:04:06.364 [INFO][5093] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:04:06.382998 containerd[1483]: 2025-11-08 00:04:06.375 [WARNING][5093] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" HandleID="k8s-pod-network.46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-eth0" Nov 8 00:04:06.382998 containerd[1483]: 2025-11-08 00:04:06.375 [INFO][5093] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" HandleID="k8s-pod-network.46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-eth0" Nov 8 00:04:06.382998 containerd[1483]: 2025-11-08 00:04:06.377 [INFO][5093] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:04:06.382998 containerd[1483]: 2025-11-08 00:04:06.379 [INFO][5086] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" Nov 8 00:04:06.385063 containerd[1483]: time="2025-11-08T00:04:06.383074194Z" level=info msg="TearDown network for sandbox \"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed\" successfully" Nov 8 00:04:06.385063 containerd[1483]: time="2025-11-08T00:04:06.383177875Z" level=info msg="StopPodSandbox for \"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed\" returns successfully" Nov 8 00:04:06.385063 containerd[1483]: time="2025-11-08T00:04:06.383886043Z" level=info msg="RemovePodSandbox for \"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed\"" Nov 8 00:04:06.385063 containerd[1483]: time="2025-11-08T00:04:06.383918643Z" level=info msg="Forcibly stopping sandbox \"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed\"" Nov 8 00:04:06.492617 containerd[1483]: 2025-11-08 00:04:06.444 [WARNING][5107] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-eth0", GenerateName:"calico-apiserver-5bbbbfdffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bbbbfdffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"33c6cc2864f23a5411e730b45590fe97b74e52543c1c3808575135fe0560a20b", Pod:"calico-apiserver-5bbbbfdffc-6m8tj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65c1cfb4bde", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:04:06.492617 containerd[1483]: 2025-11-08 00:04:06.445 [INFO][5107] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" Nov 8 00:04:06.492617 containerd[1483]: 2025-11-08 00:04:06.446 [INFO][5107] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" iface="eth0" netns="" Nov 8 00:04:06.492617 containerd[1483]: 2025-11-08 00:04:06.446 [INFO][5107] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" Nov 8 00:04:06.492617 containerd[1483]: 2025-11-08 00:04:06.446 [INFO][5107] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" Nov 8 00:04:06.492617 containerd[1483]: 2025-11-08 00:04:06.473 [INFO][5115] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" HandleID="k8s-pod-network.46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-eth0" Nov 8 00:04:06.492617 containerd[1483]: 2025-11-08 00:04:06.473 [INFO][5115] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:06.492617 containerd[1483]: 2025-11-08 00:04:06.473 [INFO][5115] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:04:06.492617 containerd[1483]: 2025-11-08 00:04:06.486 [WARNING][5115] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" HandleID="k8s-pod-network.46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-eth0" Nov 8 00:04:06.492617 containerd[1483]: 2025-11-08 00:04:06.486 [INFO][5115] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" HandleID="k8s-pod-network.46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--6m8tj-eth0" Nov 8 00:04:06.492617 containerd[1483]: 2025-11-08 00:04:06.488 [INFO][5115] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:04:06.492617 containerd[1483]: 2025-11-08 00:04:06.490 [INFO][5107] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed" Nov 8 00:04:06.493099 containerd[1483]: time="2025-11-08T00:04:06.492666740Z" level=info msg="TearDown network for sandbox \"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed\" successfully" Nov 8 00:04:06.496052 containerd[1483]: time="2025-11-08T00:04:06.495969538Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:04:06.496052 containerd[1483]: time="2025-11-08T00:04:06.496044539Z" level=info msg="RemovePodSandbox \"46938422de2f091c7160d5181b46d3dcb60280264d87f8e8606429a7d72c00ed\" returns successfully" Nov 8 00:04:06.496977 containerd[1483]: time="2025-11-08T00:04:06.496918469Z" level=info msg="StopPodSandbox for \"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006\"" Nov 8 00:04:06.583755 containerd[1483]: 2025-11-08 00:04:06.539 [WARNING][5129] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-whisker--57bfc8bc85--4x5vn-eth0" Nov 8 00:04:06.583755 containerd[1483]: 2025-11-08 00:04:06.540 [INFO][5129] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" Nov 8 00:04:06.583755 containerd[1483]: 2025-11-08 00:04:06.540 [INFO][5129] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" iface="eth0" netns="" Nov 8 00:04:06.583755 containerd[1483]: 2025-11-08 00:04:06.540 [INFO][5129] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" Nov 8 00:04:06.583755 containerd[1483]: 2025-11-08 00:04:06.540 [INFO][5129] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" Nov 8 00:04:06.583755 containerd[1483]: 2025-11-08 00:04:06.565 [INFO][5136] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" HandleID="k8s-pod-network.599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" Workload="ci--4081--3--6--n--8957f209ae-k8s-whisker--57bfc8bc85--4x5vn-eth0" Nov 8 00:04:06.583755 containerd[1483]: 2025-11-08 00:04:06.565 [INFO][5136] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:06.583755 containerd[1483]: 2025-11-08 00:04:06.565 [INFO][5136] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:04:06.583755 containerd[1483]: 2025-11-08 00:04:06.576 [WARNING][5136] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" HandleID="k8s-pod-network.599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" Workload="ci--4081--3--6--n--8957f209ae-k8s-whisker--57bfc8bc85--4x5vn-eth0" Nov 8 00:04:06.583755 containerd[1483]: 2025-11-08 00:04:06.576 [INFO][5136] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" HandleID="k8s-pod-network.599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" Workload="ci--4081--3--6--n--8957f209ae-k8s-whisker--57bfc8bc85--4x5vn-eth0" Nov 8 00:04:06.583755 containerd[1483]: 2025-11-08 00:04:06.578 [INFO][5136] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:04:06.583755 containerd[1483]: 2025-11-08 00:04:06.580 [INFO][5129] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" Nov 8 00:04:06.583755 containerd[1483]: time="2025-11-08T00:04:06.583422669Z" level=info msg="TearDown network for sandbox \"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006\" successfully" Nov 8 00:04:06.583755 containerd[1483]: time="2025-11-08T00:04:06.583451309Z" level=info msg="StopPodSandbox for \"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006\" returns successfully" Nov 8 00:04:06.585579 containerd[1483]: time="2025-11-08T00:04:06.584587082Z" level=info msg="RemovePodSandbox for \"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006\"" Nov 8 00:04:06.585579 containerd[1483]: time="2025-11-08T00:04:06.584623283Z" level=info msg="Forcibly stopping sandbox \"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006\"" Nov 8 00:04:06.674020 containerd[1483]: 2025-11-08 00:04:06.633 [WARNING][5151] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" WorkloadEndpoint="ci--4081--3--6--n--8957f209ae-k8s-whisker--57bfc8bc85--4x5vn-eth0" Nov 8 00:04:06.674020 containerd[1483]: 2025-11-08 00:04:06.633 [INFO][5151] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" Nov 8 00:04:06.674020 containerd[1483]: 2025-11-08 00:04:06.633 [INFO][5151] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" iface="eth0" netns="" Nov 8 00:04:06.674020 containerd[1483]: 2025-11-08 00:04:06.633 [INFO][5151] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" Nov 8 00:04:06.674020 containerd[1483]: 2025-11-08 00:04:06.633 [INFO][5151] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" Nov 8 00:04:06.674020 containerd[1483]: 2025-11-08 00:04:06.656 [INFO][5158] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" HandleID="k8s-pod-network.599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" Workload="ci--4081--3--6--n--8957f209ae-k8s-whisker--57bfc8bc85--4x5vn-eth0" Nov 8 00:04:06.674020 containerd[1483]: 2025-11-08 00:04:06.656 [INFO][5158] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:06.674020 containerd[1483]: 2025-11-08 00:04:06.656 [INFO][5158] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:04:06.674020 containerd[1483]: 2025-11-08 00:04:06.668 [WARNING][5158] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" HandleID="k8s-pod-network.599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" Workload="ci--4081--3--6--n--8957f209ae-k8s-whisker--57bfc8bc85--4x5vn-eth0" Nov 8 00:04:06.674020 containerd[1483]: 2025-11-08 00:04:06.668 [INFO][5158] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" HandleID="k8s-pod-network.599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" Workload="ci--4081--3--6--n--8957f209ae-k8s-whisker--57bfc8bc85--4x5vn-eth0" Nov 8 00:04:06.674020 containerd[1483]: 2025-11-08 00:04:06.670 [INFO][5158] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:04:06.674020 containerd[1483]: 2025-11-08 00:04:06.672 [INFO][5151] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006" Nov 8 00:04:06.674985 containerd[1483]: time="2025-11-08T00:04:06.674516242Z" level=info msg="TearDown network for sandbox \"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006\" successfully" Nov 8 00:04:06.699364 containerd[1483]: time="2025-11-08T00:04:06.699308728Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:04:06.699621 containerd[1483]: time="2025-11-08T00:04:06.699600371Z" level=info msg="RemovePodSandbox \"599573a8b7bd625426aa4c9c57a65069076cc38814ae48209a44aeac791bb006\" returns successfully" Nov 8 00:04:06.700548 containerd[1483]: time="2025-11-08T00:04:06.700460741Z" level=info msg="StopPodSandbox for \"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a\"" Nov 8 00:04:06.806679 containerd[1483]: 2025-11-08 00:04:06.758 [WARNING][5173] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-eth0", GenerateName:"calico-apiserver-5bbbbfdffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"36248f5d-e7be-4c9e-8bf1-2e53872f633b", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bbbbfdffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33", Pod:"calico-apiserver-5bbbbfdffc-b22vr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0929bfd3111", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:04:06.806679 containerd[1483]: 2025-11-08 00:04:06.758 [INFO][5173] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" Nov 8 00:04:06.806679 containerd[1483]: 2025-11-08 00:04:06.758 [INFO][5173] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" iface="eth0" netns="" Nov 8 00:04:06.806679 containerd[1483]: 2025-11-08 00:04:06.758 [INFO][5173] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" Nov 8 00:04:06.806679 containerd[1483]: 2025-11-08 00:04:06.759 [INFO][5173] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" Nov 8 00:04:06.806679 containerd[1483]: 2025-11-08 00:04:06.787 [INFO][5180] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" HandleID="k8s-pod-network.bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-eth0" Nov 8 00:04:06.806679 containerd[1483]: 2025-11-08 00:04:06.788 [INFO][5180] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:06.806679 containerd[1483]: 2025-11-08 00:04:06.788 [INFO][5180] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:04:06.806679 containerd[1483]: 2025-11-08 00:04:06.799 [WARNING][5180] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" HandleID="k8s-pod-network.bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-eth0" Nov 8 00:04:06.806679 containerd[1483]: 2025-11-08 00:04:06.800 [INFO][5180] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" HandleID="k8s-pod-network.bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-eth0" Nov 8 00:04:06.806679 containerd[1483]: 2025-11-08 00:04:06.802 [INFO][5180] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:04:06.806679 containerd[1483]: 2025-11-08 00:04:06.804 [INFO][5173] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" Nov 8 00:04:06.807883 containerd[1483]: time="2025-11-08T00:04:06.806735209Z" level=info msg="TearDown network for sandbox \"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a\" successfully" Nov 8 00:04:06.807883 containerd[1483]: time="2025-11-08T00:04:06.806771250Z" level=info msg="StopPodSandbox for \"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a\" returns successfully" Nov 8 00:04:06.807883 containerd[1483]: time="2025-11-08T00:04:06.807450698Z" level=info msg="RemovePodSandbox for \"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a\"" Nov 8 00:04:06.807883 containerd[1483]: time="2025-11-08T00:04:06.807494538Z" level=info msg="Forcibly stopping sandbox \"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a\"" Nov 8 00:04:06.900974 containerd[1483]: 2025-11-08 00:04:06.852 [WARNING][5194] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-eth0", GenerateName:"calico-apiserver-5bbbbfdffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"36248f5d-e7be-4c9e-8bf1-2e53872f633b", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bbbbfdffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"f88b93a663c494232eacf4b5503f70fa17f13ec54c869abe01cfe43f53a7ce33", Pod:"calico-apiserver-5bbbbfdffc-b22vr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0929bfd3111", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:04:06.900974 containerd[1483]: 2025-11-08 00:04:06.852 [INFO][5194] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" Nov 8 00:04:06.900974 containerd[1483]: 2025-11-08 00:04:06.852 [INFO][5194] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" iface="eth0" netns="" Nov 8 00:04:06.900974 containerd[1483]: 2025-11-08 00:04:06.852 [INFO][5194] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" Nov 8 00:04:06.900974 containerd[1483]: 2025-11-08 00:04:06.852 [INFO][5194] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" Nov 8 00:04:06.900974 containerd[1483]: 2025-11-08 00:04:06.874 [INFO][5202] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" HandleID="k8s-pod-network.bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-eth0" Nov 8 00:04:06.900974 containerd[1483]: 2025-11-08 00:04:06.875 [INFO][5202] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:06.900974 containerd[1483]: 2025-11-08 00:04:06.875 [INFO][5202] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:04:06.900974 containerd[1483]: 2025-11-08 00:04:06.891 [WARNING][5202] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" HandleID="k8s-pod-network.bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-eth0" Nov 8 00:04:06.900974 containerd[1483]: 2025-11-08 00:04:06.891 [INFO][5202] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" HandleID="k8s-pod-network.bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--apiserver--5bbbbfdffc--b22vr-eth0" Nov 8 00:04:06.900974 containerd[1483]: 2025-11-08 00:04:06.894 [INFO][5202] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:04:06.900974 containerd[1483]: 2025-11-08 00:04:06.899 [INFO][5194] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a" Nov 8 00:04:06.901481 containerd[1483]: time="2025-11-08T00:04:06.901031139Z" level=info msg="TearDown network for sandbox \"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a\" successfully" Nov 8 00:04:06.906091 containerd[1483]: time="2025-11-08T00:04:06.906018157Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:04:06.906262 containerd[1483]: time="2025-11-08T00:04:06.906110878Z" level=info msg="RemovePodSandbox \"bd21e85fe269ad782d3ced0b85d5def45a148030a083dc69dcec4792be31e05a\" returns successfully" Nov 8 00:04:06.907350 containerd[1483]: time="2025-11-08T00:04:06.906866487Z" level=info msg="StopPodSandbox for \"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c\"" Nov 8 00:04:07.008630 containerd[1483]: 2025-11-08 00:04:06.957 [WARNING][5216] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-eth0", GenerateName:"calico-kube-controllers-6c87cb4cfb-", Namespace:"calico-system", SelfLink:"", UID:"d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c87cb4cfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526", Pod:"calico-kube-controllers-6c87cb4cfb-m9pm4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califd14a8c7bb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:04:07.008630 containerd[1483]: 2025-11-08 00:04:06.957 [INFO][5216] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" Nov 8 00:04:07.008630 containerd[1483]: 2025-11-08 00:04:06.957 [INFO][5216] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" iface="eth0" netns="" Nov 8 00:04:07.008630 containerd[1483]: 2025-11-08 00:04:06.957 [INFO][5216] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" Nov 8 00:04:07.008630 containerd[1483]: 2025-11-08 00:04:06.957 [INFO][5216] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" Nov 8 00:04:07.008630 containerd[1483]: 2025-11-08 00:04:06.989 [INFO][5223] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" HandleID="k8s-pod-network.9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-eth0" Nov 8 00:04:07.008630 containerd[1483]: 2025-11-08 00:04:06.989 [INFO][5223] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:07.008630 containerd[1483]: 2025-11-08 00:04:06.989 [INFO][5223] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:04:07.008630 containerd[1483]: 2025-11-08 00:04:07.000 [WARNING][5223] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" HandleID="k8s-pod-network.9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-eth0" Nov 8 00:04:07.008630 containerd[1483]: 2025-11-08 00:04:07.000 [INFO][5223] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" HandleID="k8s-pod-network.9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-eth0" Nov 8 00:04:07.008630 containerd[1483]: 2025-11-08 00:04:07.003 [INFO][5223] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:04:07.008630 containerd[1483]: 2025-11-08 00:04:07.006 [INFO][5216] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" Nov 8 00:04:07.009476 containerd[1483]: time="2025-11-08T00:04:07.009211999Z" level=info msg="TearDown network for sandbox \"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c\" successfully" Nov 8 00:04:07.009476 containerd[1483]: time="2025-11-08T00:04:07.009246720Z" level=info msg="StopPodSandbox for \"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c\" returns successfully" Nov 8 00:04:07.010199 containerd[1483]: time="2025-11-08T00:04:07.010106171Z" level=info msg="RemovePodSandbox for \"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c\"" Nov 8 00:04:07.010622 containerd[1483]: time="2025-11-08T00:04:07.010349734Z" level=info msg="Forcibly stopping sandbox \"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c\"" Nov 8 00:04:07.104084 containerd[1483]: 2025-11-08 00:04:07.054 [WARNING][5237] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-eth0", GenerateName:"calico-kube-controllers-6c87cb4cfb-", Namespace:"calico-system", SelfLink:"", UID:"d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 3, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c87cb4cfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8957f209ae", ContainerID:"fabacd6476741a5221c29f179d6a0dfb4d11de60e78caa4ed3180d12cf9ad526", Pod:"calico-kube-controllers-6c87cb4cfb-m9pm4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califd14a8c7bb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:04:07.104084 containerd[1483]: 2025-11-08 00:04:07.055 [INFO][5237] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" Nov 8 00:04:07.104084 containerd[1483]: 2025-11-08 00:04:07.055 [INFO][5237] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" iface="eth0" netns="" Nov 8 00:04:07.104084 containerd[1483]: 2025-11-08 00:04:07.055 [INFO][5237] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" Nov 8 00:04:07.104084 containerd[1483]: 2025-11-08 00:04:07.055 [INFO][5237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" Nov 8 00:04:07.104084 containerd[1483]: 2025-11-08 00:04:07.086 [INFO][5244] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" HandleID="k8s-pod-network.9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-eth0" Nov 8 00:04:07.104084 containerd[1483]: 2025-11-08 00:04:07.086 [INFO][5244] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:04:07.104084 containerd[1483]: 2025-11-08 00:04:07.086 [INFO][5244] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:04:07.104084 containerd[1483]: 2025-11-08 00:04:07.096 [WARNING][5244] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" HandleID="k8s-pod-network.9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-eth0" Nov 8 00:04:07.104084 containerd[1483]: 2025-11-08 00:04:07.097 [INFO][5244] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" HandleID="k8s-pod-network.9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" Workload="ci--4081--3--6--n--8957f209ae-k8s-calico--kube--controllers--6c87cb4cfb--m9pm4-eth0" Nov 8 00:04:07.104084 containerd[1483]: 2025-11-08 00:04:07.099 [INFO][5244] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:04:07.104084 containerd[1483]: 2025-11-08 00:04:07.102 [INFO][5237] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c" Nov 8 00:04:07.104630 containerd[1483]: time="2025-11-08T00:04:07.104134130Z" level=info msg="TearDown network for sandbox \"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c\" successfully" Nov 8 00:04:07.108231 containerd[1483]: time="2025-11-08T00:04:07.108110461Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:04:07.108231 containerd[1483]: time="2025-11-08T00:04:07.108193502Z" level=info msg="RemovePodSandbox \"9deffb07d39de8bbe4cbdf136b30b504a052b3f3723e58e93ef4ec743ef3b01c\" returns successfully" Nov 8 00:04:09.369731 containerd[1483]: time="2025-11-08T00:04:09.369678435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:04:09.709608 containerd[1483]: time="2025-11-08T00:04:09.709499787Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:04:09.711460 containerd[1483]: time="2025-11-08T00:04:09.711387575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:04:09.711596 containerd[1483]: time="2025-11-08T00:04:09.711504537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:04:09.711811 kubelet[2577]: E1108 00:04:09.711692 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:04:09.711811 kubelet[2577]: E1108 00:04:09.711761 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:04:09.712305 kubelet[2577]: E1108 00:04:09.711882 2577 kuberuntime_manager.go:1358] "Unhandled 
Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6eccc3e646eb4756b217ff171cbc1340,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w6lfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7546c5f69c-s8fw9_calico-system(1f74b08a-68be-4d64-8b67-dfbe823cdd4c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:04:09.715210 containerd[1483]: time="2025-11-08T00:04:09.715156192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:04:10.046517 containerd[1483]: time="2025-11-08T00:04:10.045080444Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:04:10.047532 containerd[1483]: time="2025-11-08T00:04:10.047310080Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:04:10.047532 containerd[1483]: time="2025-11-08T00:04:10.047439722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:04:10.047749 kubelet[2577]: E1108 00:04:10.047598 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:04:10.047749 kubelet[2577]: E1108 00:04:10.047644 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:04:10.048135 kubelet[2577]: E1108 00:04:10.047757 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6lfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7546c5f69c-s8fw9_calico-system(1f74b08a-68be-4d64-8b67-dfbe823cdd4c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:04:10.049157 kubelet[2577]: E1108 00:04:10.049014 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546c5f69c-s8fw9" podUID="1f74b08a-68be-4d64-8b67-dfbe823cdd4c"
Nov 8 00:04:13.364442 containerd[1483]: time="2025-11-08T00:04:13.364260048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:04:13.705875 containerd[1483]: time="2025-11-08T00:04:13.705736246Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:04:13.708159 containerd[1483]: time="2025-11-08T00:04:13.708079051Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:04:13.708301 containerd[1483]: time="2025-11-08T00:04:13.708246295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:04:13.709307 kubelet[2577]: E1108 00:04:13.708540 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:04:13.709307 kubelet[2577]: E1108 00:04:13.708607 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:04:13.709307 kubelet[2577]: E1108 00:04:13.708777 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzlmn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bbbbfdffc-b22vr_calico-apiserver(36248f5d-e7be-4c9e-8bf1-2e53872f633b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:04:13.710154 kubelet[2577]: E1108 00:04:13.710060 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-b22vr" podUID="36248f5d-e7be-4c9e-8bf1-2e53872f633b"
Nov 8 00:04:15.370468 containerd[1483]: time="2025-11-08T00:04:15.370364876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:04:15.725799 containerd[1483]: time="2025-11-08T00:04:15.725554290Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:04:15.728076 containerd[1483]: time="2025-11-08T00:04:15.727439650Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:04:15.728076 containerd[1483]: time="2025-11-08T00:04:15.727508131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:04:15.730760 kubelet[2577]: E1108 00:04:15.730092 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:04:15.730760 kubelet[2577]: E1108 00:04:15.730135 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
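Every pull above dies at reference resolution with http.StatusNotFound from ghcr.io, i.e. the v3.30.4 tags the kubelet is asking for do not resolve under ghcr.io/flatcar/calico/. One way to confirm that from any machine is to query the registry's OCI distribution API for the manifests directly. The sketch below is not part of the journal; it is a minimal illustration in Python, and it assumes ghcr.io's anonymous token endpoint and a HEAD-able /v2/<name>/manifests/<tag> route (standard for Docker-registry-compatible services, but an assumption here):

    import json
    import urllib.error
    import urllib.request

    def tag_exists(repo, tag):
        # Fetch an anonymous pull token for the repository (assumed ghcr.io token endpoint).
        with urllib.request.urlopen(
                "https://ghcr.io/token?scope=repository:%s:pull" % repo) as resp:
            token = json.load(resp)["token"]
        # HEAD the manifest; a 404 here corresponds to the http.StatusNotFound in the log.
        req = urllib.request.Request(
            "https://ghcr.io/v2/%s/manifests/%s" % (repo, tag),
            method="HEAD",
            headers={"Authorization": "Bearer " + token,
                     "Accept": "application/vnd.oci.image.index.v1+json"})
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    for name in ("whisker", "whisker-backend", "apiserver", "csi",
                 "node-driver-registrar", "kube-controllers", "goldmane"):
        print(name, tag_exists("flatcar/calico/" + name, "v3.30.4"))

If the tags were merely gated behind authentication the registry would answer 401/403 rather than 404, so a clean 404 per image matches the "not found" resolution failures recorded here.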
Nov 8 00:04:15.734015 kubelet[2577]: E1108 00:04:15.730335 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8wbvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bbbbfdffc-6m8tj_calico-apiserver(e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:04:15.734673 containerd[1483]: time="2025-11-08T00:04:15.733964867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 8 00:04:15.735327 kubelet[2577]: E1108 00:04:15.735274 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-6m8tj" podUID="e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d"
Nov 8 00:04:16.087579 containerd[1483]: time="2025-11-08T00:04:16.087119676Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:04:16.089551 containerd[1483]: time="2025-11-08T00:04:16.089418246Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 8 00:04:16.089551 containerd[1483]: time="2025-11-08T00:04:16.089507088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 8 00:04:16.089821 kubelet[2577]: E1108 00:04:16.089693 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:04:16.089821 kubelet[2577]: E1108 00:04:16.089746 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:04:16.090670 kubelet[2577]: E1108 00:04:16.090152 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v25gh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-f6hbs_calico-system(6a33abd5-ae6f-4042-bbab-6affce6535d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:04:16.093707 containerd[1483]: time="2025-11-08T00:04:16.093622059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 8 00:04:16.463487 containerd[1483]: time="2025-11-08T00:04:16.463363953Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:04:16.464990 containerd[1483]: time="2025-11-08T00:04:16.464901906Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 8 00:04:16.465141 containerd[1483]: time="2025-11-08T00:04:16.465033149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 8 00:04:16.465519 kubelet[2577]: E1108 00:04:16.465376 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:04:16.465519 kubelet[2577]: E1108 00:04:16.465464 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:04:16.465738 kubelet[2577]: E1108 00:04:16.465610 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v25gh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-f6hbs_calico-system(6a33abd5-ae6f-4042-bbab-6affce6535d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:04:16.466993 kubelet[2577]: E1108 00:04:16.466878 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f6hbs" podUID="6a33abd5-ae6f-4042-bbab-6affce6535d7"
Nov 8 00:04:17.371126 containerd[1483]: time="2025-11-08T00:04:17.370911960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 8 00:04:17.729211 containerd[1483]: time="2025-11-08T00:04:17.729038431Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:04:17.734441 containerd[1483]: time="2025-11-08T00:04:17.733547894Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 8 00:04:17.734441 containerd[1483]: time="2025-11-08T00:04:17.733665297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:04:17.734615 kubelet[2577]: E1108 00:04:17.733814 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:04:17.734615 kubelet[2577]: E1108 00:04:17.733861 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:04:17.737804 kubelet[2577]: E1108 00:04:17.735168 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgh2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c87cb4cfb-m9pm4_calico-system(d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:04:17.737804 kubelet[2577]: E1108 00:04:17.737124 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c87cb4cfb-m9pm4" podUID="d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9"
Nov 8 00:04:17.738066 containerd[1483]: time="2025-11-08T00:04:17.737697589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 8 00:04:18.093020 containerd[1483]: time="2025-11-08T00:04:18.092529742Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:04:18.095639 containerd[1483]: time="2025-11-08T00:04:18.095564134Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 8 00:04:18.096655 containerd[1483]: time="2025-11-08T00:04:18.095698697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:04:18.097424 kubelet[2577]: E1108 00:04:18.095904 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:04:18.097424 kubelet[2577]: E1108 00:04:18.095988 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:04:18.097424 kubelet[2577]: E1108 00:04:18.096188 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnlj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cxpqj_calico-system(8027ad8b-f646-4861-aed8-35b2e3d85698): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:04:18.097879 kubelet[2577]: E1108 00:04:18.097445 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cxpqj" podUID="8027ad8b-f646-4861-aed8-35b2e3d85698"
Nov 8 00:04:23.370857 kubelet[2577]: E1108 00:04:23.370681 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546c5f69c-s8fw9" podUID="1f74b08a-68be-4d64-8b67-dfbe823cdd4c"
Nov 8 00:04:27.364379 kubelet[2577]: E1108 00:04:27.364213 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-6m8tj" podUID="e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d"
Nov 8 00:04:27.366507 kubelet[2577]: E1108 00:04:27.366449 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-b22vr" podUID="36248f5d-e7be-4c9e-8bf1-2e53872f633b"
Nov 8 00:04:29.369857 kubelet[2577]: E1108 00:04:29.369794 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f6hbs" podUID="6a33abd5-ae6f-4042-bbab-6affce6535d7"
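From 00:04:23 onward the kubelet reports ImagePullBackOff rather than ErrImagePull: after each failed pull, the image manager backs off before retrying, which is why the next PullImage attempts for the same references only reappear roughly twenty seconds later (00:04:36 through 00:04:45 below). A toy model of that retry schedule follows; it is not from the journal, and the 10-second base doubling to a 300-second cap are the commonly cited kubelet image-pull defaults, assumed here rather than read from this log:

    import itertools

    def backoff_delays(base=10.0, cap=300.0):
        # Exponential backoff: 10s, 20s, 40s, ... capped at 300s (assumed defaults).
        delay = base
        while True:
            yield min(delay, cap)
            delay *= 2

    print(list(itertools.islice(backoff_delays(), 6)))
    # -> [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]

Because the tags are missing from the registry rather than transiently unreachable, every retry in this journal fails the same way and the delays keep growing toward the cap.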
Nov 8 00:04:30.366468 kubelet[2577]: E1108 00:04:30.364815 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cxpqj" podUID="8027ad8b-f646-4861-aed8-35b2e3d85698"
Nov 8 00:04:32.369039 kubelet[2577]: E1108 00:04:32.368972 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c87cb4cfb-m9pm4" podUID="d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9"
Nov 8 00:04:36.364527 containerd[1483]: time="2025-11-08T00:04:36.364472680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 8 00:04:36.742233 containerd[1483]: time="2025-11-08T00:04:36.742015224Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:04:36.744427 containerd[1483]: time="2025-11-08T00:04:36.744353426Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 8 00:04:36.744570 containerd[1483]: time="2025-11-08T00:04:36.744475911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 8 00:04:36.744689 kubelet[2577]: E1108 00:04:36.744640 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:04:36.745473 kubelet[2577]: E1108 00:04:36.744688 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:04:36.745473 kubelet[2577]: E1108 00:04:36.744808 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6eccc3e646eb4756b217ff171cbc1340,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w6lfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7546c5f69c-s8fw9_calico-system(1f74b08a-68be-4d64-8b67-dfbe823cdd4c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:04:36.746965 containerd[1483]: time="2025-11-08T00:04:36.746912076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 8 00:04:37.096551 containerd[1483]: time="2025-11-08T00:04:37.096403759Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:04:37.098922 containerd[1483]: time="2025-11-08T00:04:37.098716522Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 8 00:04:37.098922 containerd[1483]: time="2025-11-08T00:04:37.098856327Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:04:37.099215 kubelet[2577]: E1108 00:04:37.099109 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:04:37.099215 kubelet[2577]: E1108 00:04:37.099156 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:04:37.099337 kubelet[2577]: E1108 00:04:37.099277 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6lfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7546c5f69c-s8fw9_calico-system(1f74b08a-68be-4d64-8b67-dfbe823cdd4c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:04:37.100966 kubelet[2577]: E1108 00:04:37.100870 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546c5f69c-s8fw9" podUID="1f74b08a-68be-4d64-8b67-dfbe823cdd4c"
Nov 8 00:04:38.366359 containerd[1483]: time="2025-11-08T00:04:38.366309687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:04:38.704968 containerd[1483]: time="2025-11-08T00:04:38.704859810Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:04:38.706261 containerd[1483]: time="2025-11-08T00:04:38.706106055Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:04:38.706261 containerd[1483]: time="2025-11-08T00:04:38.706223460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:04:38.706613 kubelet[2577]: E1108 00:04:38.706410 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:04:38.706613 kubelet[2577]: E1108 00:04:38.706473 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:04:38.707512 kubelet[2577]: E1108 00:04:38.706638 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8wbvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bbbbfdffc-6m8tj_calico-apiserver(e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:04:38.708375 kubelet[2577]: E1108 00:04:38.708319 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-6m8tj" podUID="e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d"
Nov 8 00:04:41.371172 containerd[1483]: time="2025-11-08T00:04:41.370464608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:04:41.772776 containerd[1483]: time="2025-11-08T00:04:41.772697974Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:04:41.774192 containerd[1483]: time="2025-11-08T00:04:41.774131707Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:04:41.774338 containerd[1483]: time="2025-11-08T00:04:41.774241392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:04:41.774474 kubelet[2577]: E1108 00:04:41.774428 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:04:41.774761 kubelet[2577]: E1108 00:04:41.774486 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:04:41.774761 kubelet[2577]: E1108 00:04:41.774621 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzlmn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bbbbfdffc-b22vr_calico-apiserver(36248f5d-e7be-4c9e-8bf1-2e53872f633b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:04:41.776225 kubelet[2577]: E1108 00:04:41.776156 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-b22vr" podUID="36248f5d-e7be-4c9e-8bf1-2e53872f633b"
Nov 8 00:04:43.368058 containerd[1483]: time="2025-11-08T00:04:43.367504552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 8 00:04:43.727327 containerd[1483]: time="2025-11-08T00:04:43.726927481Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:04:43.728601 containerd[1483]: time="2025-11-08T00:04:43.728557183Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 8 00:04:43.728980 containerd[1483]: time="2025-11-08T00:04:43.728659546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:04:43.729033 kubelet[2577]: E1108 00:04:43.728845 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:04:43.729033 kubelet[2577]: E1108 00:04:43.728896 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:04:43.731963 kubelet[2577]: E1108 00:04:43.729153 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgh2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c87cb4cfb-m9pm4_calico-system(d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:04:43.731963 kubelet[2577]: E1108 00:04:43.731005 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c87cb4cfb-m9pm4" podUID="d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9"
Nov 8 00:04:44.365946 containerd[1483]: time="2025-11-08T00:04:44.365881550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 8 00:04:44.732886 containerd[1483]: time="2025-11-08T00:04:44.732688456Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:04:44.737972 containerd[1483]: time="2025-11-08T00:04:44.736332956Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 8 00:04:44.737972 containerd[1483]: time="2025-11-08T00:04:44.736509203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 8 00:04:44.739256 kubelet[2577]: E1108 00:04:44.736668 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:04:44.739256 kubelet[2577]: E1108 00:04:44.736788 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v25gh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-f6hbs_calico-system(6a33abd5-ae6f-4042-bbab-6affce6535d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:04:44.741878 containerd[1483]: time="2025-11-08T00:04:44.741826687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:04:45.103233 containerd[1483]: time="2025-11-08T00:04:45.101981093Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:04:45.104987 containerd[1483]: time="2025-11-08T00:04:45.104818564Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:04:45.104987 containerd[1483]: time="2025-11-08T00:04:45.104889406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:04:45.105196 kubelet[2577]: E1108 00:04:45.105101 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:04:45.105196 kubelet[2577]: E1108 00:04:45.105163 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:04:45.105342 kubelet[2577]: E1108 00:04:45.105289 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v25gh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-f6hbs_calico-system(6a33abd5-ae6f-4042-bbab-6affce6535d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:04:45.107990 kubelet[2577]: E1108 00:04:45.106783 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f6hbs" podUID="6a33abd5-ae6f-4042-bbab-6affce6535d7" Nov 8 00:04:45.369877 containerd[1483]: time="2025-11-08T00:04:45.369507517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:04:45.745396 containerd[1483]: time="2025-11-08T00:04:45.745214460Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:04:45.747695 containerd[1483]: time="2025-11-08T00:04:45.747543391Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:04:45.748032 containerd[1483]: time="2025-11-08T00:04:45.747619554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:04:45.748720 kubelet[2577]: E1108 00:04:45.748463 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:04:45.748720 kubelet[2577]: E1108 00:04:45.748549 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:04:45.748720 kubelet[2577]: E1108 00:04:45.748721 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnlj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cxpqj_calico-system(8027ad8b-f646-4861-aed8-35b2e3d85698): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:04:45.751063 kubelet[2577]: E1108 00:04:45.750415 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cxpqj" podUID="8027ad8b-f646-4861-aed8-35b2e3d85698" Nov 8 00:04:49.367094 kubelet[2577]: E1108 
00:04:49.367024 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-6m8tj" podUID="e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d" Nov 8 00:04:49.369779 kubelet[2577]: E1108 00:04:49.369696 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546c5f69c-s8fw9" podUID="1f74b08a-68be-4d64-8b67-dfbe823cdd4c" Nov 8 00:04:55.371859 kubelet[2577]: E1108 00:04:55.371585 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-b22vr" podUID="36248f5d-e7be-4c9e-8bf1-2e53872f633b" Nov 8 00:04:55.674839 systemd[1]: run-containerd-runc-k8s.io-8735be735467d8680ad0881b73f0a2cac488a91b128edc37521280af3bf43f24-runc.H9oJqx.mount: Deactivated successfully. 
Nov 8 00:04:57.365769 kubelet[2577]: E1108 00:04:57.365561 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cxpqj" podUID="8027ad8b-f646-4861-aed8-35b2e3d85698" Nov 8 00:04:58.365861 kubelet[2577]: E1108 00:04:58.365801 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f6hbs" podUID="6a33abd5-ae6f-4042-bbab-6affce6535d7" Nov 8 00:04:59.365986 kubelet[2577]: E1108 00:04:59.364884 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c87cb4cfb-m9pm4" podUID="d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9" Nov 8 00:05:04.365502 kubelet[2577]: E1108 00:05:04.363103 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-6m8tj" podUID="e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d" Nov 8 00:05:04.369783 kubelet[2577]: E1108 00:05:04.369739 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546c5f69c-s8fw9" podUID="1f74b08a-68be-4d64-8b67-dfbe823cdd4c" Nov 8 00:05:08.364512 kubelet[2577]: E1108 00:05:08.364220 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-b22vr" podUID="36248f5d-e7be-4c9e-8bf1-2e53872f633b" Nov 8 00:05:09.365886 kubelet[2577]: E1108 00:05:09.365690 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f6hbs" podUID="6a33abd5-ae6f-4042-bbab-6affce6535d7" Nov 8 00:05:10.366601 kubelet[2577]: E1108 00:05:10.366262 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c87cb4cfb-m9pm4" podUID="d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9" Nov 8 00:05:10.366601 kubelet[2577]: E1108 00:05:10.366533 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cxpqj" podUID="8027ad8b-f646-4861-aed8-35b2e3d85698" Nov 8 00:05:16.366228 kubelet[2577]: E1108 00:05:16.366166 
2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546c5f69c-s8fw9" podUID="1f74b08a-68be-4d64-8b67-dfbe823cdd4c" Nov 8 00:05:19.370215 containerd[1483]: time="2025-11-08T00:05:19.370168917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:05:19.709781 containerd[1483]: time="2025-11-08T00:05:19.709714871Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:05:19.711266 containerd[1483]: time="2025-11-08T00:05:19.711157177Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:05:19.712063 containerd[1483]: time="2025-11-08T00:05:19.711497233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:05:19.712153 kubelet[2577]: E1108 00:05:19.711769 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:05:19.712153 kubelet[2577]: E1108 00:05:19.711815 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:05:19.715049 kubelet[2577]: E1108 00:05:19.711949 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8wbvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bbbbfdffc-6m8tj_calico-apiserver(e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:05:19.716189 kubelet[2577]: E1108 00:05:19.716132 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-6m8tj" podUID="e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d" Nov 8 00:05:21.366497 kubelet[2577]: E1108 00:05:21.366444 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-b22vr" podUID="36248f5d-e7be-4c9e-8bf1-2e53872f633b" Nov 8 00:05:22.365687 kubelet[2577]: E1108 00:05:22.365559 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f6hbs" podUID="6a33abd5-ae6f-4042-bbab-6affce6535d7" Nov 8 00:05:25.369662 containerd[1483]: time="2025-11-08T00:05:25.369342109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:05:25.371830 kubelet[2577]: E1108 00:05:25.370467 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cxpqj" podUID="8027ad8b-f646-4861-aed8-35b2e3d85698" Nov 8 00:05:25.681600 systemd[1]: run-containerd-runc-k8s.io-8735be735467d8680ad0881b73f0a2cac488a91b128edc37521280af3bf43f24-runc.ZzG3NO.mount: Deactivated successfully. 
Nov 8 00:05:25.708968 containerd[1483]: time="2025-11-08T00:05:25.707418176Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:05:25.711107 containerd[1483]: time="2025-11-08T00:05:25.711032346Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:05:25.711393 containerd[1483]: time="2025-11-08T00:05:25.711102269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:05:25.712268 kubelet[2577]: E1108 00:05:25.711632 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:05:25.712268 kubelet[2577]: E1108 00:05:25.711689 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:05:25.712268 kubelet[2577]: E1108 00:05:25.711838 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgh2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c87cb4cfb-m9pm4_calico-system(d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:05:25.713393 kubelet[2577]: E1108 00:05:25.713332 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c87cb4cfb-m9pm4" podUID="d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9" Nov 8 00:05:31.367500 containerd[1483]: time="2025-11-08T00:05:31.367084510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:05:31.717447 containerd[1483]: time="2025-11-08T00:05:31.717340938Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:05:31.719909 containerd[1483]: time="2025-11-08T00:05:31.719600365Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:05:31.719909 containerd[1483]: time="2025-11-08T00:05:31.719674008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:05:31.720125 kubelet[2577]: E1108 00:05:31.719850 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:05:31.720125 kubelet[2577]: E1108 00:05:31.719899 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:05:31.720125 kubelet[2577]: E1108 00:05:31.720081 
2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6eccc3e646eb4756b217ff171cbc1340,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w6lfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7546c5f69c-s8fw9_calico-system(1f74b08a-68be-4d64-8b67-dfbe823cdd4c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:05:31.723726 containerd[1483]: time="2025-11-08T00:05:31.723244377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:05:32.101204 containerd[1483]: time="2025-11-08T00:05:32.100378726Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:05:32.104227 containerd[1483]: time="2025-11-08T00:05:32.104127784Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:05:32.104700 containerd[1483]: time="2025-11-08T00:05:32.104198708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:05:32.105114 kubelet[2577]: E1108 00:05:32.104889 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:05:32.105114 kubelet[2577]: E1108 00:05:32.104977 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:05:32.105858 kubelet[2577]: E1108 00:05:32.105782 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6lfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7546c5f69c-s8fw9_calico-system(1f74b08a-68be-4d64-8b67-dfbe823cdd4c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:05:32.107858 kubelet[2577]: E1108 00:05:32.107791 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546c5f69c-s8fw9" podUID="1f74b08a-68be-4d64-8b67-dfbe823cdd4c" Nov 8 00:05:32.366963 kubelet[2577]: E1108 00:05:32.366567 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-6m8tj" podUID="e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d" Nov 8 00:05:32.367462 containerd[1483]: time="2025-11-08T00:05:32.366623157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:05:32.688843 containerd[1483]: time="2025-11-08T00:05:32.688789801Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:05:32.690861 containerd[1483]: time="2025-11-08T00:05:32.690730613Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:05:32.690861 containerd[1483]: time="2025-11-08T00:05:32.690794496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:05:32.691994 kubelet[2577]: E1108 00:05:32.691597 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:05:32.691994 kubelet[2577]: E1108 00:05:32.691663 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:05:32.691994 kubelet[2577]: E1108 00:05:32.691838 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzlmn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bbbbfdffc-b22vr_calico-apiserver(36248f5d-e7be-4c9e-8bf1-2e53872f633b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:05:32.693214 kubelet[2577]: E1108 00:05:32.693115 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-b22vr" podUID="36248f5d-e7be-4c9e-8bf1-2e53872f633b" Nov 8 00:05:36.369296 kubelet[2577]: E1108 00:05:36.365300 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c87cb4cfb-m9pm4" podUID="d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9" Nov 8 00:05:37.366038 containerd[1483]: time="2025-11-08T00:05:37.365931456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:05:37.701212 containerd[1483]: time="2025-11-08T00:05:37.701093959Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:05:37.703881 containerd[1483]: time="2025-11-08T00:05:37.703756447Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:05:37.703881 containerd[1483]: time="2025-11-08T00:05:37.703809009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:05:37.705526 kubelet[2577]: E1108 00:05:37.705337 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:05:37.705526 kubelet[2577]: E1108 00:05:37.705393 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:05:37.705907 kubelet[2577]: E1108 00:05:37.705589 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v25gh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-f6hbs_calico-system(6a33abd5-ae6f-4042-bbab-6affce6535d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:05:37.706626 containerd[1483]: time="2025-11-08T00:05:37.706277527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:05:38.049313 containerd[1483]: time="2025-11-08T00:05:38.048725262Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:05:38.050184 containerd[1483]: time="2025-11-08T00:05:38.050083127Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 8 00:05:38.050574 containerd[1483]: time="2025-11-08T00:05:38.050207933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:05:38.050671 kubelet[2577]: E1108 00:05:38.050429 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:05:38.050671 kubelet[2577]: E1108 00:05:38.050484 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:05:38.051041 containerd[1483]: time="2025-11-08T00:05:38.050812202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 8 00:05:38.051884 kubelet[2577]: E1108 00:05:38.051535 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnlj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cxpqj_calico-system(8027ad8b-f646-4861-aed8-35b2e3d85698): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:05:38.053447 kubelet[2577]: E1108 00:05:38.053051 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cxpqj" podUID="8027ad8b-f646-4861-aed8-35b2e3d85698" Nov 8 00:05:38.394388 containerd[1483]: time="2025-11-08T00:05:38.394225802Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:05:38.395763 containerd[1483]: time="2025-11-08T00:05:38.395611069Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:05:38.395763 containerd[1483]: time="2025-11-08T00:05:38.395739955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:05:38.396542 kubelet[2577]: E1108 00:05:38.396004 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:05:38.396542 kubelet[2577]: E1108 00:05:38.396086 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:05:38.396542 kubelet[2577]: E1108 00:05:38.396251 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v25gh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-f6hbs_calico-system(6a33abd5-ae6f-4042-bbab-6affce6535d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:05:38.397606 kubelet[2577]: E1108 00:05:38.397547 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f6hbs" podUID="6a33abd5-ae6f-4042-bbab-6affce6535d7"
Nov 8 00:05:43.367676 kubelet[2577]: E1108 00:05:43.367070 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-6m8tj" podUID="e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d" Nov 8 00:05:44.366179 kubelet[2577]: E1108 00:05:44.366051 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546c5f69c-s8fw9" podUID="1f74b08a-68be-4d64-8b67-dfbe823cdd4c" Nov 8 00:05:47.366089 kubelet[2577]: E1108 00:05:47.365885 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-b22vr" podUID="36248f5d-e7be-4c9e-8bf1-2e53872f633b" Nov 8 00:05:48.015579 systemd[1]: Started sshd@7-46.224.42.7:22-139.178.68.195:50576.service - OpenSSH per-connection server daemon (139.178.68.195:50576). Nov 8 00:05:48.965969 sshd[5380]: Accepted publickey for core from 139.178.68.195 port 50576 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:05:48.969441 sshd[5380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:05:48.977125 systemd-logind[1454]: New session 8 of user core. Nov 8 00:05:48.985321 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:05:49.776581 sshd[5380]: pam_unix(sshd:session): session closed for user core Nov 8 00:05:49.781323 systemd-logind[1454]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:05:49.783687 systemd[1]: sshd@7-46.224.42.7:22-139.178.68.195:50576.service: Deactivated successfully. Nov 8 00:05:49.791521 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:05:49.793529 systemd-logind[1454]: Removed session 8.
Nov 8 00:05:51.366662 kubelet[2577]: E1108 00:05:51.366548 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c87cb4cfb-m9pm4" podUID="d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9" Nov 8 00:05:52.366166 kubelet[2577]: E1108 00:05:52.365642 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cxpqj" podUID="8027ad8b-f646-4861-aed8-35b2e3d85698" Nov 8 00:05:53.371005 kubelet[2577]: E1108 00:05:53.370873 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f6hbs" podUID="6a33abd5-ae6f-4042-bbab-6affce6535d7" Nov 8 00:05:54.363534 kubelet[2577]: E1108 00:05:54.363176 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-6m8tj" podUID="e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d" Nov 8 00:05:54.944416 systemd[1]: Started sshd@8-46.224.42.7:22-139.178.68.195:50978.service - OpenSSH per-connection server daemon (139.178.68.195:50978). Nov 8 00:05:55.893996 sshd[5394]: Accepted publickey for core from 139.178.68.195 port 50978 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:05:55.898135 sshd[5394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:05:55.905095 systemd-logind[1454]: New session 9 of user core. 
Nov 8 00:05:55.910329 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:05:56.660214 sshd[5394]: pam_unix(sshd:session): session closed for user core Nov 8 00:05:56.667839 systemd[1]: sshd@8-46.224.42.7:22-139.178.68.195:50978.service: Deactivated successfully. Nov 8 00:05:56.672709 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:05:56.677016 systemd-logind[1454]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:05:56.679314 systemd-logind[1454]: Removed session 9. Nov 8 00:05:57.367154 kubelet[2577]: E1108 00:05:57.367102 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546c5f69c-s8fw9" podUID="1f74b08a-68be-4d64-8b67-dfbe823cdd4c" Nov 8 00:05:58.366330 kubelet[2577]: E1108 00:05:58.365755 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-b22vr" podUID="36248f5d-e7be-4c9e-8bf1-2e53872f633b" Nov 8 00:06:01.837815 systemd[1]: Started sshd@9-46.224.42.7:22-139.178.68.195:50992.service - OpenSSH per-connection server daemon (139.178.68.195:50992). Nov 8 00:06:02.365370 kubelet[2577]: E1108 00:06:02.365135 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c87cb4cfb-m9pm4" podUID="d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9" Nov 8 00:06:02.800122 sshd[5430]: Accepted publickey for core from 139.178.68.195 port 50992 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:06:02.803513 sshd[5430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:02.811226 systemd-logind[1454]: New session 10 of user core. Nov 8 00:06:02.824354 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 8 00:06:03.365870 kubelet[2577]: E1108 00:06:03.365791 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cxpqj" podUID="8027ad8b-f646-4861-aed8-35b2e3d85698" Nov 8 00:06:03.601576 sshd[5430]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:03.608828 systemd[1]: sshd@9-46.224.42.7:22-139.178.68.195:50992.service: Deactivated successfully. Nov 8 00:06:03.612294 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:06:03.618158 systemd-logind[1454]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:06:03.620488 systemd-logind[1454]: Removed session 10. Nov 8 00:06:03.771343 systemd[1]: Started sshd@10-46.224.42.7:22-139.178.68.195:55512.service - OpenSSH per-connection server daemon (139.178.68.195:55512). Nov 8 00:06:04.704715 sshd[5444]: Accepted publickey for core from 139.178.68.195 port 55512 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:06:04.708056 sshd[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:04.717366 systemd-logind[1454]: New session 11 of user core. Nov 8 00:06:04.722604 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:06:05.508045 sshd[5444]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:05.514160 systemd-logind[1454]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:06:05.515768 systemd[1]: sshd@10-46.224.42.7:22-139.178.68.195:55512.service: Deactivated successfully. Nov 8 00:06:05.521854 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:06:05.525895 systemd-logind[1454]: Removed session 11. Nov 8 00:06:05.681304 systemd[1]: Started sshd@11-46.224.42.7:22-139.178.68.195:55524.service - OpenSSH per-connection server daemon (139.178.68.195:55524). Nov 8 00:06:06.648739 sshd[5464]: Accepted publickey for core from 139.178.68.195 port 55524 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:06:06.651288 sshd[5464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:06.658837 systemd-logind[1454]: New session 12 of user core. Nov 8 00:06:06.662218 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:06:07.367989 kubelet[2577]: E1108 00:06:07.365140 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-6m8tj" podUID="e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d" Nov 8 00:06:07.425196 sshd[5464]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:07.431257 systemd[1]: sshd@11-46.224.42.7:22-139.178.68.195:55524.service: Deactivated successfully. 
Nov 8 00:06:07.435916 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:06:07.437389 systemd-logind[1454]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:06:07.438864 systemd-logind[1454]: Removed session 12. Nov 8 00:06:08.369145 kubelet[2577]: E1108 00:06:08.367375 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546c5f69c-s8fw9" podUID="1f74b08a-68be-4d64-8b67-dfbe823cdd4c" Nov 8 00:06:08.370168 kubelet[2577]: E1108 00:06:08.369886 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f6hbs" podUID="6a33abd5-ae6f-4042-bbab-6affce6535d7" Nov 8 00:06:12.601383 systemd[1]: Started sshd@12-46.224.42.7:22-139.178.68.195:55540.service - OpenSSH per-connection server daemon (139.178.68.195:55540). Nov 8 00:06:13.365875 kubelet[2577]: E1108 00:06:13.365770 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-b22vr" podUID="36248f5d-e7be-4c9e-8bf1-2e53872f633b" Nov 8 00:06:13.563235 sshd[5478]: Accepted publickey for core from 139.178.68.195 port 55540 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:06:13.565029 sshd[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:13.576097 systemd-logind[1454]: New session 13 of user core. 
Nov 8 00:06:13.580231 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:06:14.332499 sshd[5478]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:14.340231 systemd-logind[1454]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:06:14.341013 systemd[1]: sshd@12-46.224.42.7:22-139.178.68.195:55540.service: Deactivated successfully. Nov 8 00:06:14.346110 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:06:14.349893 systemd-logind[1454]: Removed session 13. Nov 8 00:06:14.370264 kubelet[2577]: E1108 00:06:14.369796 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cxpqj" podUID="8027ad8b-f646-4861-aed8-35b2e3d85698" Nov 8 00:06:14.503138 systemd[1]: Started sshd@13-46.224.42.7:22-139.178.68.195:51916.service - OpenSSH per-connection server daemon (139.178.68.195:51916). Nov 8 00:06:15.451995 sshd[5493]: Accepted publickey for core from 139.178.68.195 port 51916 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:06:15.455390 sshd[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:15.468045 systemd-logind[1454]: New session 14 of user core. Nov 8 00:06:15.475376 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 00:06:16.363904 kubelet[2577]: E1108 00:06:16.363817 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c87cb4cfb-m9pm4" podUID="d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9" Nov 8 00:06:16.378072 sshd[5493]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:16.385171 systemd[1]: sshd@13-46.224.42.7:22-139.178.68.195:51916.service: Deactivated successfully. Nov 8 00:06:16.387726 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:06:16.388836 systemd-logind[1454]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:06:16.391542 systemd-logind[1454]: Removed session 14. Nov 8 00:06:16.546338 systemd[1]: Started sshd@14-46.224.42.7:22-139.178.68.195:51928.service - OpenSSH per-connection server daemon (139.178.68.195:51928). Nov 8 00:06:17.496478 sshd[5504]: Accepted publickey for core from 139.178.68.195 port 51928 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:06:17.499342 sshd[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:17.503977 systemd-logind[1454]: New session 15 of user core. Nov 8 00:06:17.511550 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 8 00:06:18.991406 sshd[5504]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:18.998729 systemd[1]: sshd@14-46.224.42.7:22-139.178.68.195:51928.service: Deactivated successfully. Nov 8 00:06:19.002508 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:06:19.003815 systemd-logind[1454]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:06:19.006449 systemd-logind[1454]: Removed session 15. Nov 8 00:06:19.166352 systemd[1]: Started sshd@15-46.224.42.7:22-139.178.68.195:51930.service - OpenSSH per-connection server daemon (139.178.68.195:51930). Nov 8 00:06:19.364852 kubelet[2577]: E1108 00:06:19.364720 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-6m8tj" podUID="e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d" Nov 8 00:06:20.116497 sshd[5523]: Accepted publickey for core from 139.178.68.195 port 51930 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:06:20.118968 sshd[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:20.126631 systemd-logind[1454]: New session 16 of user core. Nov 8 00:06:20.130168 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 00:06:20.364960 kubelet[2577]: E1108 00:06:20.364886 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546c5f69c-s8fw9" podUID="1f74b08a-68be-4d64-8b67-dfbe823cdd4c" Nov 8 00:06:21.035119 sshd[5523]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:21.041086 systemd[1]: sshd@15-46.224.42.7:22-139.178.68.195:51930.service: Deactivated successfully. Nov 8 00:06:21.047657 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:06:21.049099 systemd-logind[1454]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:06:21.052374 systemd-logind[1454]: Removed session 16. Nov 8 00:06:21.204339 systemd[1]: Started sshd@16-46.224.42.7:22-139.178.68.195:51934.service - OpenSSH per-connection server daemon (139.178.68.195:51934). 
Nov 8 00:06:22.146075 sshd[5534]: Accepted publickey for core from 139.178.68.195 port 51934 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:06:22.147305 sshd[5534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:22.158059 systemd-logind[1454]: New session 17 of user core. Nov 8 00:06:22.162246 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:06:22.869669 sshd[5534]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:22.876016 systemd[1]: sshd@16-46.224.42.7:22-139.178.68.195:51934.service: Deactivated successfully. Nov 8 00:06:22.883385 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:06:22.884771 systemd-logind[1454]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:06:22.886133 systemd-logind[1454]: Removed session 17. Nov 8 00:06:23.367707 kubelet[2577]: E1108 00:06:23.367626 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f6hbs" podUID="6a33abd5-ae6f-4042-bbab-6affce6535d7" Nov 8 00:06:27.367905 kubelet[2577]: E1108 00:06:27.367685 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-b22vr" podUID="36248f5d-e7be-4c9e-8bf1-2e53872f633b" Nov 8 00:06:28.044145 systemd[1]: Started sshd@17-46.224.42.7:22-139.178.68.195:56190.service - OpenSSH per-connection server daemon (139.178.68.195:56190). 
Nov 8 00:06:28.363317 kubelet[2577]: E1108 00:06:28.363187 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c87cb4cfb-m9pm4" podUID="d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9" Nov 8 00:06:28.996053 sshd[5571]: Accepted publickey for core from 139.178.68.195 port 56190 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:06:28.998405 sshd[5571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:29.005813 systemd-logind[1454]: New session 18 of user core. Nov 8 00:06:29.013146 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:06:29.366417 kubelet[2577]: E1108 00:06:29.366127 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cxpqj" podUID="8027ad8b-f646-4861-aed8-35b2e3d85698" Nov 8 00:06:29.771804 sshd[5571]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:29.778694 systemd[1]: sshd@17-46.224.42.7:22-139.178.68.195:56190.service: Deactivated successfully. Nov 8 00:06:29.779562 systemd-logind[1454]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:06:29.783584 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:06:29.785680 systemd-logind[1454]: Removed session 18. 
Nov 8 00:06:32.364254 kubelet[2577]: E1108 00:06:32.363921 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-6m8tj" podUID="e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d" Nov 8 00:06:33.369217 kubelet[2577]: E1108 00:06:33.369070 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546c5f69c-s8fw9" podUID="1f74b08a-68be-4d64-8b67-dfbe823cdd4c" Nov 8 00:06:34.943496 systemd[1]: Started sshd@18-46.224.42.7:22-139.178.68.195:56880.service - OpenSSH per-connection server daemon (139.178.68.195:56880). Nov 8 00:06:35.364890 kubelet[2577]: E1108 00:06:35.364429 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f6hbs" podUID="6a33abd5-ae6f-4042-bbab-6affce6535d7" Nov 8 00:06:35.889997 sshd[5583]: Accepted publickey for core from 139.178.68.195 port 56880 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:06:35.891527 sshd[5583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:35.901732 systemd-logind[1454]: New session 19 of user core. Nov 8 00:06:35.906192 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:06:36.629117 sshd[5583]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:36.634507 systemd-logind[1454]: Session 19 logged out. Waiting for processes to exit. 
Nov 8 00:06:36.637037 systemd[1]: sshd@18-46.224.42.7:22-139.178.68.195:56880.service: Deactivated successfully. Nov 8 00:06:36.641036 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:06:36.643649 systemd-logind[1454]: Removed session 19. Nov 8 00:06:40.363435 kubelet[2577]: E1108 00:06:40.363272 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-b22vr" podUID="36248f5d-e7be-4c9e-8bf1-2e53872f633b" Nov 8 00:06:41.365903 kubelet[2577]: E1108 00:06:41.364421 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c87cb4cfb-m9pm4" podUID="d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9" Nov 8 00:06:43.365818 kubelet[2577]: E1108 00:06:43.365738 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cxpqj" podUID="8027ad8b-f646-4861-aed8-35b2e3d85698" Nov 8 00:06:45.367108 kubelet[2577]: E1108 00:06:45.366893 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7546c5f69c-s8fw9" podUID="1f74b08a-68be-4d64-8b67-dfbe823cdd4c" Nov 8 00:06:46.366765 containerd[1483]: time="2025-11-08T00:06:46.366412651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:06:46.707218 containerd[1483]: time="2025-11-08T00:06:46.707116473Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:06:46.709493 containerd[1483]: time="2025-11-08T00:06:46.709357765Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:06:46.709493 containerd[1483]: time="2025-11-08T00:06:46.709442046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:06:46.709828 kubelet[2577]: E1108 00:06:46.709637 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:46.709828 kubelet[2577]: E1108 00:06:46.709702 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:06:46.710391 kubelet[2577]: E1108 00:06:46.709860 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8wbvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bbbbfdffc-6m8tj_calico-apiserver(e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:46.711828 kubelet[2577]: E1108 00:06:46.711745 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-6m8tj" podUID="e4fb1541-9aa6-48b8-aaf8-151e44fc4a0d" Nov 8 00:06:47.366018 kubelet[2577]: E1108 00:06:47.365715 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f6hbs" podUID="6a33abd5-ae6f-4042-bbab-6affce6535d7" Nov 8 00:06:51.600850 kubelet[2577]: E1108 00:06:51.600628 2577 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:48238->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{calico-kube-controllers-6c87cb4cfb-m9pm4.1875df338de1520f calico-system 1737 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:calico-kube-controllers-6c87cb4cfb-m9pm4,UID:d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9,APIVersion:v1,ResourceVersion:814,FieldPath:spec.containers{calico-kube-controllers},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-8957f209ae,},FirstTimestamp:2025-11-08 00:04:01 +0000 UTC,LastTimestamp:2025-11-08 00:06:41.364337736 +0000 UTC m=+216.149915669,Count:11,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-8957f209ae,}" Nov 8 00:06:52.326008 systemd[1]: cri-containerd-8864c35f616e823ef08f88635e33e352a20c5bea7f99223416c4f803d438144e.scope: Deactivated successfully. Nov 8 00:06:52.326297 systemd[1]: cri-containerd-8864c35f616e823ef08f88635e33e352a20c5bea7f99223416c4f803d438144e.scope: Consumed 38.702s CPU time.
Nov 8 00:06:52.357327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8864c35f616e823ef08f88635e33e352a20c5bea7f99223416c4f803d438144e-rootfs.mount: Deactivated successfully. Nov 8 00:06:52.361250 containerd[1483]: time="2025-11-08T00:06:52.361153142Z" level=info msg="shim disconnected" id=8864c35f616e823ef08f88635e33e352a20c5bea7f99223416c4f803d438144e namespace=k8s.io Nov 8 00:06:52.361250 containerd[1483]: time="2025-11-08T00:06:52.361222302Z" level=warning msg="cleaning up after shim disconnected" id=8864c35f616e823ef08f88635e33e352a20c5bea7f99223416c4f803d438144e namespace=k8s.io Nov 8 00:06:52.361250 containerd[1483]: time="2025-11-08T00:06:52.361232663Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:06:52.435384 kubelet[2577]: E1108 00:06:52.434835 2577 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:48424->10.0.0.2:2379: read: connection timed out" Nov 8 00:06:52.440572 systemd[1]: cri-containerd-b9317bec38c72b6fdbe412b7c8ff0704e4437cf8da1a806ae03bc2481f8096d0.scope: Deactivated successfully. Nov 8 00:06:52.441169 systemd[1]: cri-containerd-b9317bec38c72b6fdbe412b7c8ff0704e4437cf8da1a806ae03bc2481f8096d0.scope: Consumed 3.614s CPU time, 16.1M memory peak, 0B memory swap peak. Nov 8 00:06:52.466283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9317bec38c72b6fdbe412b7c8ff0704e4437cf8da1a806ae03bc2481f8096d0-rootfs.mount: Deactivated successfully. Nov 8 00:06:52.472531 containerd[1483]: time="2025-11-08T00:06:52.472424707Z" level=info msg="shim disconnected" id=b9317bec38c72b6fdbe412b7c8ff0704e4437cf8da1a806ae03bc2481f8096d0 namespace=k8s.io Nov 8 00:06:52.472531 containerd[1483]: time="2025-11-08T00:06:52.472479708Z" level=warning msg="cleaning up after shim disconnected" id=b9317bec38c72b6fdbe412b7c8ff0704e4437cf8da1a806ae03bc2481f8096d0 namespace=k8s.io Nov 8 00:06:52.472531 containerd[1483]: time="2025-11-08T00:06:52.472488588Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:06:52.548153 systemd[1]: cri-containerd-94cc9148e41befea489173a3c6dbee711839d43efab857f525ac6adc642c6f28.scope: Deactivated successfully. Nov 8 00:06:52.548893 systemd[1]: cri-containerd-94cc9148e41befea489173a3c6dbee711839d43efab857f525ac6adc642c6f28.scope: Consumed 5.659s CPU time, 18.1M memory peak, 0B memory swap peak. Nov 8 00:06:52.575448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94cc9148e41befea489173a3c6dbee711839d43efab857f525ac6adc642c6f28-rootfs.mount: Deactivated successfully. 
Nov 8 00:06:52.583568 containerd[1483]: time="2025-11-08T00:06:52.582377302Z" level=info msg="shim disconnected" id=94cc9148e41befea489173a3c6dbee711839d43efab857f525ac6adc642c6f28 namespace=k8s.io Nov 8 00:06:52.583568 containerd[1483]: time="2025-11-08T00:06:52.582442783Z" level=warning msg="cleaning up after shim disconnected" id=94cc9148e41befea489173a3c6dbee711839d43efab857f525ac6adc642c6f28 namespace=k8s.io Nov 8 00:06:52.583568 containerd[1483]: time="2025-11-08T00:06:52.582454583Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:06:53.243318 kubelet[2577]: I1108 00:06:53.242463 2577 scope.go:117] "RemoveContainer" containerID="8864c35f616e823ef08f88635e33e352a20c5bea7f99223416c4f803d438144e" Nov 8 00:06:53.247950 containerd[1483]: time="2025-11-08T00:06:53.246563462Z" level=info msg="CreateContainer within sandbox \"2190621e2ed62d98dc8388720c814a0a5fb3d25adafde5b246c54c0f1193bc9f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 8 00:06:53.252139 kubelet[2577]: I1108 00:06:53.251559 2577 scope.go:117] "RemoveContainer" containerID="b9317bec38c72b6fdbe412b7c8ff0704e4437cf8da1a806ae03bc2481f8096d0" Nov 8 00:06:53.257814 kubelet[2577]: I1108 00:06:53.257138 2577 scope.go:117] "RemoveContainer" containerID="94cc9148e41befea489173a3c6dbee711839d43efab857f525ac6adc642c6f28" Nov 8 00:06:53.260520 containerd[1483]: time="2025-11-08T00:06:53.260382224Z" level=info msg="CreateContainer within sandbox \"faf9eb39aee186cc25f8eaf5bbe6a34d4e653a9eeda44d4ba025f7c74c366f86\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Nov 8 00:06:53.261433 containerd[1483]: time="2025-11-08T00:06:53.261236391Z" level=info msg="CreateContainer within sandbox \"db83a925240e96955153a789ba3d942d8397eba7c91ce5c6bb32acdcbc6f10f6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 8 00:06:53.297159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount546304579.mount: Deactivated successfully. 
Nov 8 00:06:53.305210 containerd[1483]: time="2025-11-08T00:06:53.305129977Z" level=info msg="CreateContainer within sandbox \"2190621e2ed62d98dc8388720c814a0a5fb3d25adafde5b246c54c0f1193bc9f\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"0bab287b372525d78b81801fe29e3deeb35b485f512a32ee5f4939ee348f3eb6\"" Nov 8 00:06:53.305986 containerd[1483]: time="2025-11-08T00:06:53.305930424Z" level=info msg="StartContainer for \"0bab287b372525d78b81801fe29e3deeb35b485f512a32ee5f4939ee348f3eb6\"" Nov 8 00:06:53.314455 containerd[1483]: time="2025-11-08T00:06:53.314359818Z" level=info msg="CreateContainer within sandbox \"db83a925240e96955153a789ba3d942d8397eba7c91ce5c6bb32acdcbc6f10f6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a44e8541452281d832ef5e516131d007af3375758b15466ffae0c90b6bc09a53\"" Nov 8 00:06:53.315582 containerd[1483]: time="2025-11-08T00:06:53.315402947Z" level=info msg="StartContainer for \"a44e8541452281d832ef5e516131d007af3375758b15466ffae0c90b6bc09a53\"" Nov 8 00:06:53.320503 containerd[1483]: time="2025-11-08T00:06:53.320441391Z" level=info msg="CreateContainer within sandbox \"faf9eb39aee186cc25f8eaf5bbe6a34d4e653a9eeda44d4ba025f7c74c366f86\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"aeee22165d282061a111ef642fecf7eea6065d1963f9c62bafee8fedc3f7d0cb\"" Nov 8 00:06:53.321982 containerd[1483]: time="2025-11-08T00:06:53.321826564Z" level=info msg="StartContainer for \"aeee22165d282061a111ef642fecf7eea6065d1963f9c62bafee8fedc3f7d0cb\"" Nov 8 00:06:53.373114 systemd[1]: run-containerd-runc-k8s.io-aeee22165d282061a111ef642fecf7eea6065d1963f9c62bafee8fedc3f7d0cb-runc.703TRc.mount: Deactivated successfully. Nov 8 00:06:53.381624 systemd[1]: Started cri-containerd-0bab287b372525d78b81801fe29e3deeb35b485f512a32ee5f4939ee348f3eb6.scope - libcontainer container 0bab287b372525d78b81801fe29e3deeb35b485f512a32ee5f4939ee348f3eb6. Nov 8 00:06:53.383493 systemd[1]: Started cri-containerd-aeee22165d282061a111ef642fecf7eea6065d1963f9c62bafee8fedc3f7d0cb.scope - libcontainer container aeee22165d282061a111ef642fecf7eea6065d1963f9c62bafee8fedc3f7d0cb. Nov 8 00:06:53.405827 systemd[1]: Started cri-containerd-a44e8541452281d832ef5e516131d007af3375758b15466ffae0c90b6bc09a53.scope - libcontainer container a44e8541452281d832ef5e516131d007af3375758b15466ffae0c90b6bc09a53. Nov 8 00:06:53.461236 containerd[1483]: time="2025-11-08T00:06:53.461059107Z" level=info msg="StartContainer for \"0bab287b372525d78b81801fe29e3deeb35b485f512a32ee5f4939ee348f3eb6\" returns successfully" Nov 8 00:06:53.471565 containerd[1483]: time="2025-11-08T00:06:53.471179996Z" level=info msg="StartContainer for \"aeee22165d282061a111ef642fecf7eea6065d1963f9c62bafee8fedc3f7d0cb\" returns successfully" Nov 8 00:06:53.486963 containerd[1483]: time="2025-11-08T00:06:53.486523931Z" level=info msg="StartContainer for \"a44e8541452281d832ef5e516131d007af3375758b15466ffae0c90b6bc09a53\" returns successfully" Nov 8 00:06:55.367211 containerd[1483]: time="2025-11-08T00:06:55.366399924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:06:55.674372 systemd[1]: run-containerd-runc-k8s.io-8735be735467d8680ad0881b73f0a2cac488a91b128edc37521280af3bf43f24-runc.6jOESJ.mount: Deactivated successfully. 
Nov 8 00:06:55.715314 containerd[1483]: time="2025-11-08T00:06:55.715228669Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:06:55.718249 containerd[1483]: time="2025-11-08T00:06:55.718187858Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:06:55.718659 containerd[1483]: time="2025-11-08T00:06:55.718262938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:06:55.718771 kubelet[2577]: E1108 00:06:55.718645 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:06:55.718771 kubelet[2577]: E1108 00:06:55.718691 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:06:55.719278 kubelet[2577]: E1108 00:06:55.718889 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzlmn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bbbbfdffc-b22vr_calico-apiserver(36248f5d-e7be-4c9e-8bf1-2e53872f633b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:06:55.724253 kubelet[2577]: E1108 00:06:55.724170 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bbbbfdffc-b22vr" podUID="36248f5d-e7be-4c9e-8bf1-2e53872f633b"
Nov 8 00:06:56.365296 containerd[1483]: time="2025-11-08T00:06:56.365259098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 8 00:06:56.705338 containerd[1483]: time="2025-11-08T00:06:56.704993266Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:06:56.707485 containerd[1483]: time="2025-11-08T00:06:56.706863845Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 8 00:06:56.707608 kubelet[2577]: E1108 00:06:56.707195 2577 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:06:56.707608 kubelet[2577]: E1108 00:06:56.707247 2577 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:06:56.707608 kubelet[2577]: E1108 00:06:56.707393 2577 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgh2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c87cb4cfb-m9pm4_calico-system(d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:06:56.708034 containerd[1483]: time="2025-11-08T00:06:56.707005926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:06:56.708675 kubelet[2577]: E1108 00:06:56.708622 2577 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c87cb4cfb-m9pm4" podUID="d7c8d02f-ab3d-4412-bfb2-5d9f4d613dd9"