Nov 5 23:39:09.809454 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Nov 5 23:39:09.809474 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Wed Nov 5 22:12:41 -00 2025
Nov 5 23:39:09.809484 kernel: KASLR enabled
Nov 5 23:39:09.809489 kernel: efi: EFI v2.7 by EDK II
Nov 5 23:39:09.809495 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Nov 5 23:39:09.809501 kernel: random: crng init done
Nov 5 23:39:09.809507 kernel: secureboot: Secure boot disabled
Nov 5 23:39:09.809513 kernel: ACPI: Early table checksum verification disabled
Nov 5 23:39:09.809519 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Nov 5 23:39:09.809526 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Nov 5 23:39:09.809532 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 23:39:09.809537 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 23:39:09.809543 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 23:39:09.809548 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 23:39:09.809555 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 23:39:09.809563 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 23:39:09.809569 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 23:39:09.809575 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 23:39:09.809581 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 23:39:09.809587 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Nov 5 23:39:09.809593 kernel: ACPI: Use ACPI SPCR as default console: No
Nov 5 23:39:09.809599 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Nov 5 23:39:09.809605 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Nov 5 23:39:09.809611 kernel: Zone ranges:
Nov 5 23:39:09.809625 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Nov 5 23:39:09.809634 kernel: DMA32 empty
Nov 5 23:39:09.809640 kernel: Normal empty
Nov 5 23:39:09.809646 kernel: Device empty
Nov 5 23:39:09.809651 kernel: Movable zone start for each node
Nov 5 23:39:09.809657 kernel: Early memory node ranges
Nov 5 23:39:09.809663 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Nov 5 23:39:09.809669 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Nov 5 23:39:09.809675 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Nov 5 23:39:09.809681 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Nov 5 23:39:09.809687 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Nov 5 23:39:09.809693 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Nov 5 23:39:09.809699 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Nov 5 23:39:09.809707 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Nov 5 23:39:09.809712 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Nov 5 23:39:09.809718 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Nov 5 23:39:09.809727 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Nov 5 23:39:09.809733 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Nov 5 23:39:09.809740 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Nov 5 23:39:09.809748 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Nov 5 23:39:09.809754 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Nov 5 23:39:09.809761 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Nov 5 23:39:09.809767 kernel: psci: probing for conduit method from ACPI.
Nov 5 23:39:09.809773 kernel: psci: PSCIv1.1 detected in firmware.
Nov 5 23:39:09.809780 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 5 23:39:09.809786 kernel: psci: Trusted OS migration not required
Nov 5 23:39:09.809793 kernel: psci: SMC Calling Convention v1.1
Nov 5 23:39:09.809800 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Nov 5 23:39:09.809807 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Nov 5 23:39:09.809815 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Nov 5 23:39:09.809821 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Nov 5 23:39:09.809827 kernel: Detected PIPT I-cache on CPU0
Nov 5 23:39:09.809834 kernel: CPU features: detected: GIC system register CPU interface
Nov 5 23:39:09.809840 kernel: CPU features: detected: Spectre-v4
Nov 5 23:39:09.809846 kernel: CPU features: detected: Spectre-BHB
Nov 5 23:39:09.809853 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 5 23:39:09.809859 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 5 23:39:09.809865 kernel: CPU features: detected: ARM erratum 1418040
Nov 5 23:39:09.809872 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 5 23:39:09.809878 kernel: alternatives: applying boot alternatives
Nov 5 23:39:09.809885 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=daaa5e51b65832b359eb98eae08cea627c611d87c128e20a83873de5c8d1aca5
Nov 5 23:39:09.809893 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 5 23:39:09.809900 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 5 23:39:09.809907 kernel: Fallback order for Node 0: 0
Nov 5 23:39:09.809913 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Nov 5 23:39:09.809919 kernel: Policy zone: DMA
Nov 5 23:39:09.809925 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 5 23:39:09.809932 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Nov 5 23:39:09.809939 kernel: software IO TLB: area num 4.
Nov 5 23:39:09.809945 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Nov 5 23:39:09.809952 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Nov 5 23:39:09.809958 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 5 23:39:09.809968 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 5 23:39:09.809975 kernel: rcu: RCU event tracing is enabled.
Nov 5 23:39:09.809982 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 5 23:39:09.809989 kernel: Trampoline variant of Tasks RCU enabled.
Nov 5 23:39:09.809995 kernel: Tracing variant of Tasks RCU enabled.
Nov 5 23:39:09.810002 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 5 23:39:09.810008 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 5 23:39:09.810015 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 23:39:09.810021 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 23:39:09.810028 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 5 23:39:09.810035 kernel: GICv3: 256 SPIs implemented
Nov 5 23:39:09.810042 kernel: GICv3: 0 Extended SPIs implemented
Nov 5 23:39:09.810049 kernel: Root IRQ handler: gic_handle_irq
Nov 5 23:39:09.810055 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Nov 5 23:39:09.810061 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Nov 5 23:39:09.810067 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Nov 5 23:39:09.810073 kernel: ITS [mem 0x08080000-0x0809ffff]
Nov 5 23:39:09.810080 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Nov 5 23:39:09.810086 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Nov 5 23:39:09.810093 kernel: GICv3: using LPI property table @0x0000000040130000
Nov 5 23:39:09.810099 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Nov 5 23:39:09.810105 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 5 23:39:09.810112 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 5 23:39:09.810119 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Nov 5 23:39:09.810126 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Nov 5 23:39:09.810132 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Nov 5 23:39:09.810138 kernel: arm-pv: using stolen time PV
Nov 5 23:39:09.810145 kernel: Console: colour dummy device 80x25
Nov 5 23:39:09.810152 kernel: ACPI: Core revision 20240827
Nov 5 23:39:09.810158 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Nov 5 23:39:09.810165 kernel: pid_max: default: 32768 minimum: 301
Nov 5 23:39:09.810172 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 5 23:39:09.810178 kernel: landlock: Up and running.
Nov 5 23:39:09.810186 kernel: SELinux: Initializing.
Nov 5 23:39:09.810193 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 23:39:09.810200 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 23:39:09.810207 kernel: rcu: Hierarchical SRCU implementation.
Nov 5 23:39:09.810214 kernel: rcu: Max phase no-delay instances is 400.
Nov 5 23:39:09.810221 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 5 23:39:09.810227 kernel: Remapping and enabling EFI services.
Nov 5 23:39:09.810248 kernel: smp: Bringing up secondary CPUs ...
Nov 5 23:39:09.810255 kernel: Detected PIPT I-cache on CPU1
Nov 5 23:39:09.810268 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Nov 5 23:39:09.810276 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Nov 5 23:39:09.810283 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 5 23:39:09.810291 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Nov 5 23:39:09.810298 kernel: Detected PIPT I-cache on CPU2
Nov 5 23:39:09.810305 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Nov 5 23:39:09.810312 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Nov 5 23:39:09.810319 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 5 23:39:09.810327 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Nov 5 23:39:09.810334 kernel: Detected PIPT I-cache on CPU3
Nov 5 23:39:09.810341 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Nov 5 23:39:09.810348 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Nov 5 23:39:09.810355 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 5 23:39:09.810362 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Nov 5 23:39:09.810369 kernel: smp: Brought up 1 node, 4 CPUs
Nov 5 23:39:09.810376 kernel: SMP: Total of 4 processors activated.
Nov 5 23:39:09.810383 kernel: CPU: All CPU(s) started at EL1
Nov 5 23:39:09.810399 kernel: CPU features: detected: 32-bit EL0 Support
Nov 5 23:39:09.810407 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 5 23:39:09.810414 kernel: CPU features: detected: Common not Private translations
Nov 5 23:39:09.810421 kernel: CPU features: detected: CRC32 instructions
Nov 5 23:39:09.810428 kernel: CPU features: detected: Enhanced Virtualization Traps
Nov 5 23:39:09.810435 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 5 23:39:09.810442 kernel: CPU features: detected: LSE atomic instructions
Nov 5 23:39:09.810449 kernel: CPU features: detected: Privileged Access Never
Nov 5 23:39:09.810455 kernel: CPU features: detected: RAS Extension Support
Nov 5 23:39:09.810464 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Nov 5 23:39:09.810471 kernel: alternatives: applying system-wide alternatives
Nov 5 23:39:09.810477 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Nov 5 23:39:09.810485 kernel: Memory: 2424416K/2572288K available (11136K kernel code, 2450K rwdata, 9076K rodata, 38976K init, 1038K bss, 125536K reserved, 16384K cma-reserved)
Nov 5 23:39:09.810491 kernel: devtmpfs: initialized
Nov 5 23:39:09.810498 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 5 23:39:09.810505 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 5 23:39:09.810512 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Nov 5 23:39:09.810519 kernel: 0 pages in range for non-PLT usage
Nov 5 23:39:09.810527 kernel: 508560 pages in range for PLT usage
Nov 5 23:39:09.810534 kernel: pinctrl core: initialized pinctrl subsystem
Nov 5 23:39:09.810542 kernel: SMBIOS 3.0.0 present.
Nov 5 23:39:09.810549 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Nov 5 23:39:09.810556 kernel: DMI: Memory slots populated: 1/1
Nov 5 23:39:09.810563 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 5 23:39:09.810570 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 5 23:39:09.810577 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 5 23:39:09.810584 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 5 23:39:09.810592 kernel: audit: initializing netlink subsys (disabled)
Nov 5 23:39:09.810599 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Nov 5 23:39:09.810606 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 5 23:39:09.810613 kernel: cpuidle: using governor menu
Nov 5 23:39:09.810625 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 5 23:39:09.810632 kernel: ASID allocator initialised with 32768 entries
Nov 5 23:39:09.810639 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 5 23:39:09.810646 kernel: Serial: AMBA PL011 UART driver
Nov 5 23:39:09.810653 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 5 23:39:09.810662 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 5 23:39:09.810669 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 5 23:39:09.810676 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 5 23:39:09.810688 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 5 23:39:09.810697 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 5 23:39:09.810704 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 5 23:39:09.810711 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 5 23:39:09.810718 kernel: ACPI: Added _OSI(Module Device)
Nov 5 23:39:09.810725 kernel: ACPI: Added _OSI(Processor Device)
Nov 5 23:39:09.810734 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 5 23:39:09.810743 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 5 23:39:09.810752 kernel: ACPI: Interpreter enabled
Nov 5 23:39:09.810759 kernel: ACPI: Using GIC for interrupt routing
Nov 5 23:39:09.810766 kernel: ACPI: MCFG table detected, 1 entries
Nov 5 23:39:09.810774 kernel: ACPI: CPU0 has been hot-added
Nov 5 23:39:09.810781 kernel: ACPI: CPU1 has been hot-added
Nov 5 23:39:09.810789 kernel: ACPI: CPU2 has been hot-added
Nov 5 23:39:09.810796 kernel: ACPI: CPU3 has been hot-added
Nov 5 23:39:09.810803 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Nov 5 23:39:09.810817 kernel: printk: legacy console [ttyAMA0] enabled
Nov 5 23:39:09.810824 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 5 23:39:09.810993 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 5 23:39:09.811068 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 5 23:39:09.811140 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 5 23:39:09.811214 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Nov 5 23:39:09.811271 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Nov 5 23:39:09.811283 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Nov 5 23:39:09.811290 kernel: PCI host bridge to bus 0000:00
Nov 5 23:39:09.811363 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Nov 5 23:39:09.811434 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 5 23:39:09.811488 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Nov 5 23:39:09.811538 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 5 23:39:09.811614 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Nov 5 23:39:09.811701 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 5 23:39:09.811761 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Nov 5 23:39:09.811819 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Nov 5 23:39:09.811877 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 5 23:39:09.811933 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Nov 5 23:39:09.812006 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Nov 5 23:39:09.812065 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Nov 5 23:39:09.812116 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Nov 5 23:39:09.812166 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 5 23:39:09.812218 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Nov 5 23:39:09.812227 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 5 23:39:09.812234 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 5 23:39:09.812241 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 5 23:39:09.812248 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 5 23:39:09.812256 kernel: iommu: Default domain type: Translated
Nov 5 23:39:09.812263 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 5 23:39:09.812270 kernel: efivars: Registered efivars operations
Nov 5 23:39:09.812277 kernel: vgaarb: loaded
Nov 5 23:39:09.812284 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 5 23:39:09.812291 kernel: VFS: Disk quotas dquot_6.6.0
Nov 5 23:39:09.812298 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 5 23:39:09.812305 kernel: pnp: PnP ACPI init
Nov 5 23:39:09.812373 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Nov 5 23:39:09.812385 kernel: pnp: PnP ACPI: found 1 devices
Nov 5 23:39:09.812423 kernel: NET: Registered PF_INET protocol family
Nov 5 23:39:09.812431 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 5 23:39:09.812438 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 5 23:39:09.812445 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 5 23:39:09.812452 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 5 23:39:09.812459 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 5 23:39:09.812466 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 5 23:39:09.812473 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 23:39:09.812481 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 23:39:09.812488 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 5 23:39:09.812495 kernel: PCI: CLS 0 bytes, default 64
Nov 5 23:39:09.812502 kernel: kvm [1]: HYP mode not available
Nov 5 23:39:09.812509 kernel: Initialise system trusted keyrings
Nov 5 23:39:09.812516 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 5 23:39:09.812523 kernel: Key type asymmetric registered
Nov 5 23:39:09.812531 kernel: Asymmetric key parser 'x509' registered
Nov 5 23:39:09.812538 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 5 23:39:09.812546 kernel: io scheduler mq-deadline registered
Nov 5 23:39:09.812553 kernel: io scheduler kyber registered
Nov 5 23:39:09.812561 kernel: io scheduler bfq registered
Nov 5 23:39:09.812569 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 5 23:39:09.812576 kernel: ACPI: button: Power Button [PWRB]
Nov 5 23:39:09.812584 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Nov 5 23:39:09.812656 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Nov 5 23:39:09.812666 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 5 23:39:09.812673 kernel: thunder_xcv, ver 1.0
Nov 5 23:39:09.812682 kernel: thunder_bgx, ver 1.0
Nov 5 23:39:09.812689 kernel: nicpf, ver 1.0
Nov 5 23:39:09.812696 kernel: nicvf, ver 1.0
Nov 5 23:39:09.812763 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 5 23:39:09.812819 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-05T23:39:09 UTC (1762385949)
Nov 5 23:39:09.812828 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 5 23:39:09.812835 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Nov 5 23:39:09.812842 kernel: watchdog: NMI not fully supported
Nov 5 23:39:09.812851 kernel: watchdog: Hard watchdog permanently disabled
Nov 5 23:39:09.812858 kernel: NET: Registered PF_INET6 protocol family
Nov 5 23:39:09.812865 kernel: Segment Routing with IPv6
Nov 5 23:39:09.812872 kernel: In-situ OAM (IOAM) with IPv6
Nov 5 23:39:09.812879 kernel: NET: Registered PF_PACKET protocol family
Nov 5 23:39:09.812886 kernel: Key type dns_resolver registered
Nov 5 23:39:09.812893 kernel: registered taskstats version 1
Nov 5 23:39:09.812900 kernel: Loading compiled-in X.509 certificates
Nov 5 23:39:09.812907 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 9d5732f5af196e4cfd06fc38e62e061c2a702dfd'
Nov 5 23:39:09.812915 kernel: Demotion targets for Node 0: null
Nov 5 23:39:09.812922 kernel: Key type .fscrypt registered
Nov 5 23:39:09.812929 kernel: Key type fscrypt-provisioning registered
Nov 5 23:39:09.812936 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 5 23:39:09.812942 kernel: ima: Allocated hash algorithm: sha1
Nov 5 23:39:09.812950 kernel: ima: No architecture policies found
Nov 5 23:39:09.812956 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 5 23:39:09.812963 kernel: clk: Disabling unused clocks
Nov 5 23:39:09.812970 kernel: PM: genpd: Disabling unused power domains
Nov 5 23:39:09.812978 kernel: Warning: unable to open an initial console.
Nov 5 23:39:09.812985 kernel: Freeing unused kernel memory: 38976K
Nov 5 23:39:09.812997 kernel: Run /init as init process
Nov 5 23:39:09.813004 kernel: with arguments:
Nov 5 23:39:09.813011 kernel: /init
Nov 5 23:39:09.813018 kernel: with environment:
Nov 5 23:39:09.813024 kernel: HOME=/
Nov 5 23:39:09.813031 kernel: TERM=linux
Nov 5 23:39:09.813039 systemd[1]: Successfully made /usr/ read-only.
Nov 5 23:39:09.813051 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 23:39:09.813059 systemd[1]: Detected virtualization kvm.
Nov 5 23:39:09.813067 systemd[1]: Detected architecture arm64.
Nov 5 23:39:09.813074 systemd[1]: Running in initrd.
Nov 5 23:39:09.813081 systemd[1]: No hostname configured, using default hostname.
Nov 5 23:39:09.813089 systemd[1]: Hostname set to .
Nov 5 23:39:09.813097 systemd[1]: Initializing machine ID from VM UUID.
Nov 5 23:39:09.813106 systemd[1]: Queued start job for default target initrd.target.
Nov 5 23:39:09.813114 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 23:39:09.813122 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 23:39:09.813130 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 5 23:39:09.813137 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 23:39:09.813145 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 5 23:39:09.813154 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 5 23:39:09.813164 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 5 23:39:09.813172 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 5 23:39:09.813180 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 23:39:09.813188 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 23:39:09.813195 systemd[1]: Reached target paths.target - Path Units.
Nov 5 23:39:09.813203 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 23:39:09.813210 systemd[1]: Reached target swap.target - Swaps.
Nov 5 23:39:09.813218 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 23:39:09.813226 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 23:39:09.813234 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 23:39:09.813241 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 5 23:39:09.813249 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 5 23:39:09.813256 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 23:39:09.813264 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 23:39:09.813271 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 23:39:09.813279 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 23:39:09.813286 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 5 23:39:09.813295 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 23:39:09.813303 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 5 23:39:09.813311 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 5 23:39:09.813318 systemd[1]: Starting systemd-fsck-usr.service...
Nov 5 23:39:09.813325 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 23:39:09.813333 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 23:39:09.813340 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 23:39:09.813348 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 5 23:39:09.813358 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 23:39:09.813365 systemd[1]: Finished systemd-fsck-usr.service.
Nov 5 23:39:09.813373 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 5 23:39:09.813408 systemd-journald[244]: Collecting audit messages is disabled.
Nov 5 23:39:09.813432 systemd-journald[244]: Journal started
Nov 5 23:39:09.813453 systemd-journald[244]: Runtime Journal (/run/log/journal/e9da9a2a7b70471b9db1e200b41114fe) is 6M, max 48.5M, 42.4M free.
Nov 5 23:39:09.811092 systemd-modules-load[246]: Inserted module 'overlay'
Nov 5 23:39:09.826100 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 23:39:09.829993 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 23:39:09.831217 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 23:39:09.835212 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 5 23:39:09.835236 kernel: Bridge firewalling registered
Nov 5 23:39:09.834886 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 5 23:39:09.836810 systemd-modules-load[246]: Inserted module 'br_netfilter'
Nov 5 23:39:09.837475 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 23:39:09.839933 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 23:39:09.855625 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 23:39:09.858636 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 23:39:09.862932 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 23:39:09.863987 systemd-tmpfiles[267]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 5 23:39:09.867052 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 23:39:09.871284 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 23:39:09.874021 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 23:39:09.876595 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 5 23:39:09.879342 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 23:39:09.903457 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=daaa5e51b65832b359eb98eae08cea627c611d87c128e20a83873de5c8d1aca5
Nov 5 23:39:09.920811 systemd-resolved[292]: Positive Trust Anchors:
Nov 5 23:39:09.920829 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 23:39:09.920860 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 23:39:09.926982 systemd-resolved[292]: Defaulting to hostname 'linux'.
Nov 5 23:39:09.928310 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 23:39:09.931544 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 23:39:09.981420 kernel: SCSI subsystem initialized
Nov 5 23:39:09.986408 kernel: Loading iSCSI transport class v2.0-870.
Nov 5 23:39:09.994434 kernel: iscsi: registered transport (tcp)
Nov 5 23:39:10.007434 kernel: iscsi: registered transport (qla4xxx)
Nov 5 23:39:10.007481 kernel: QLogic iSCSI HBA Driver
Nov 5 23:39:10.024841 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 23:39:10.049108 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 23:39:10.052340 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 23:39:10.098291 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 5 23:39:10.100847 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 5 23:39:10.165433 kernel: raid6: neonx8 gen() 15766 MB/s
Nov 5 23:39:10.182415 kernel: raid6: neonx4 gen() 15763 MB/s
Nov 5 23:39:10.199414 kernel: raid6: neonx2 gen() 13196 MB/s
Nov 5 23:39:10.216413 kernel: raid6: neonx1 gen() 10190 MB/s
Nov 5 23:39:10.233412 kernel: raid6: int64x8 gen() 6881 MB/s
Nov 5 23:39:10.250410 kernel: raid6: int64x4 gen() 7344 MB/s
Nov 5 23:39:10.267413 kernel: raid6: int64x2 gen() 6089 MB/s
Nov 5 23:39:10.284705 kernel: raid6: int64x1 gen() 5044 MB/s
Nov 5 23:39:10.284723 kernel: raid6: using algorithm neonx8 gen() 15766 MB/s
Nov 5 23:39:10.302594 kernel: raid6: .... xor() 12066 MB/s, rmw enabled
Nov 5 23:39:10.302614 kernel: raid6: using neon recovery algorithm
Nov 5 23:39:10.308954 kernel: xor: measuring software checksum speed
Nov 5 23:39:10.308993 kernel: 8regs : 20078 MB/sec
Nov 5 23:39:10.309002 kernel: 32regs : 21658 MB/sec
Nov 5 23:39:10.309627 kernel: arm64_neon : 27974 MB/sec
Nov 5 23:39:10.309641 kernel: xor: using function: arm64_neon (27974 MB/sec)
Nov 5 23:39:10.363424 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 5 23:39:10.369696 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 23:39:10.372591 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 23:39:10.397643 systemd-udevd[501]: Using default interface naming scheme 'v255'.
Nov 5 23:39:10.401636 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 23:39:10.404507 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 5 23:39:10.430927 dracut-pre-trigger[511]: rd.md=0: removing MD RAID activation
Nov 5 23:39:10.453687 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 23:39:10.456456 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 23:39:10.511539 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 23:39:10.514937 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 5 23:39:10.563410 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Nov 5 23:39:10.569245 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 23:39:10.574096 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 23:39:10.578365 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 23:39:10.580950 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 5 23:39:10.580501 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 23:39:10.587735 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 5 23:39:10.587775 kernel: GPT:9289727 != 19775487
Nov 5 23:39:10.587792 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 5 23:39:10.587801 kernel: GPT:9289727 != 19775487
Nov 5 23:39:10.588852 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 5 23:39:10.588871 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 5 23:39:10.614564 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 5 23:39:10.616230 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 23:39:10.628136 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 5 23:39:10.637100 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 5 23:39:10.645354 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 5 23:39:10.651901 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 5 23:39:10.653239 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 5 23:39:10.656789 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 23:39:10.659349 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 23:39:10.661695 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 23:39:10.664711 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 5 23:39:10.666732 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 5 23:39:10.690081 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 23:39:10.945459 disk-uuid[593]: Primary Header is updated.
Nov 5 23:39:10.945459 disk-uuid[593]: Secondary Entries is updated.
Nov 5 23:39:10.945459 disk-uuid[593]: Secondary Header is updated.
Nov 5 23:39:10.949291 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 5 23:39:11.956411 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 5 23:39:11.957318 disk-uuid[601]: The operation has completed successfully.
Nov 5 23:39:11.990617 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 5 23:39:11.990747 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 5 23:39:12.009031 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 5 23:39:12.030599 sh[613]: Success
Nov 5 23:39:12.043492 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 5 23:39:12.043550 kernel: device-mapper: uevent: version 1.0.3
Nov 5 23:39:12.044887 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 5 23:39:12.052412 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Nov 5 23:39:12.079862 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 23:39:12.082871 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 5 23:39:12.097880 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 5 23:39:12.106378 kernel: BTRFS: device fsid 223300c7-37a4-4131-896a-4d331c3aa134 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (625)
Nov 5 23:39:12.106431 kernel: BTRFS info (device dm-0): first mount of filesystem 223300c7-37a4-4131-896a-4d331c3aa134
Nov 5 23:39:12.106452 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 5 23:39:12.113417 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 5 23:39:12.113484 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 5 23:39:12.114720 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 5 23:39:12.116165 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 23:39:12.117761 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 5 23:39:12.118570 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 5 23:39:12.120454 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 5 23:39:12.150413 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (663)
Nov 5 23:39:12.153688 kernel: BTRFS info (device vda6): first mount of filesystem 7724fea6-57ae-4252-b021-4aac39807031
Nov 5 23:39:12.153747 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 5 23:39:12.156821 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 23:39:12.156887 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 23:39:12.162416 kernel: BTRFS info (device vda6): last unmount of filesystem 7724fea6-57ae-4252-b021-4aac39807031
Nov 5 23:39:12.163022 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 5 23:39:12.166303 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 5 23:39:12.235439 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 23:39:12.239200 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 23:39:12.276817 systemd-networkd[803]: lo: Link UP
Nov 5 23:39:12.276829 systemd-networkd[803]: lo: Gained carrier
Nov 5 23:39:12.277698 systemd-networkd[803]: Enumeration completed
Nov 5 23:39:12.280813 ignition[713]: Ignition 2.22.0
Nov 5 23:39:12.277833 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 23:39:12.280821 ignition[713]: Stage: fetch-offline
Nov 5 23:39:12.278155 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 5 23:39:12.280855 ignition[713]: no configs at "/usr/lib/ignition/base.d"
Nov 5 23:39:12.278159 systemd-networkd[803]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 23:39:12.280862 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 23:39:12.279152 systemd-networkd[803]: eth0: Link UP
Nov 5 23:39:12.280969 ignition[713]: parsed url from cmdline: ""
Nov 5 23:39:12.279239 systemd-networkd[803]: eth0: Gained carrier
Nov 5 23:39:12.280974 ignition[713]: no config URL provided
Nov 5 23:39:12.279248 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 5 23:39:12.280979 ignition[713]: reading system config file "/usr/lib/ignition/user.ign"
Nov 5 23:39:12.279486 systemd[1]: Reached target network.target - Network.
Nov 5 23:39:12.280985 ignition[713]: no config at "/usr/lib/ignition/user.ign"
Nov 5 23:39:12.281006 ignition[713]: op(1): [started] loading QEMU firmware config module
Nov 5 23:39:12.281018 ignition[713]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 5 23:39:12.288276 ignition[713]: op(1): [finished] loading QEMU firmware config module
Nov 5 23:39:12.309492 systemd-networkd[803]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 5 23:39:12.353708 ignition[713]: parsing config with SHA512: 6f97d02a1d73df88f17892a194685a193949535505c31a33797801510e47e8e41765d0ecd7e67ebdf401e5b96c8cca9f5c25e70bad81eca1a529ca5a03f0ad15
Nov 5 23:39:12.358317 unknown[713]: fetched base config from "system"
Nov 5 23:39:12.358334 unknown[713]: fetched user config from "qemu"
Nov 5 23:39:12.358723 ignition[713]: fetch-offline: fetch-offline passed
Nov 5 23:39:12.358780 ignition[713]: Ignition finished successfully
Nov 5 23:39:12.361429 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 23:39:12.364142 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 5 23:39:12.365337 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 5 23:39:12.405316 ignition[812]: Ignition 2.22.0
Nov 5 23:39:12.405337 ignition[812]: Stage: kargs
Nov 5 23:39:12.405705 ignition[812]: no configs at "/usr/lib/ignition/base.d"
Nov 5 23:39:12.405715 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 23:39:12.406588 ignition[812]: kargs: kargs passed
Nov 5 23:39:12.410108 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 5 23:39:12.406653 ignition[812]: Ignition finished successfully
Nov 5 23:39:12.412549 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 5 23:39:12.450764 ignition[821]: Ignition 2.22.0
Nov 5 23:39:12.450780 ignition[821]: Stage: disks
Nov 5 23:39:12.450940 ignition[821]: no configs at "/usr/lib/ignition/base.d"
Nov 5 23:39:12.454351 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 5 23:39:12.450948 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 23:39:12.455831 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 5 23:39:12.451856 ignition[821]: disks: disks passed
Nov 5 23:39:12.457185 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 5 23:39:12.451922 ignition[821]: Ignition finished successfully
Nov 5 23:39:12.458644 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 23:39:12.460705 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 23:39:12.462900 systemd[1]: Reached target basic.target - Basic System.
Nov 5 23:39:12.466164 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 5 23:39:12.506487 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Nov 5 23:39:12.512060 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 5 23:39:12.514711 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 5 23:39:12.584423 kernel: EXT4-fs (vda9): mounted filesystem de3d89fd-ab21-4d05-b3c1-f0d3e7ce9725 r/w with ordered data mode. Quota mode: none.
Nov 5 23:39:12.585386 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 5 23:39:12.586858 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 5 23:39:12.589753 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 23:39:12.591995 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 5 23:39:12.593169 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 5 23:39:12.593227 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 5 23:39:12.593255 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 23:39:12.604231 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 5 23:39:12.606686 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 5 23:39:12.614216 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (840)
Nov 5 23:39:12.614315 kernel: BTRFS info (device vda6): first mount of filesystem 7724fea6-57ae-4252-b021-4aac39807031
Nov 5 23:39:12.614330 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 5 23:39:12.618891 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 23:39:12.618927 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 23:39:12.620471 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 23:39:12.646607 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory
Nov 5 23:39:12.650974 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory
Nov 5 23:39:12.655792 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory
Nov 5 23:39:12.659835 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 5 23:39:12.741086 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 5 23:39:12.743490 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 5 23:39:12.745352 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 5 23:39:12.768428 kernel: BTRFS info (device vda6): last unmount of filesystem 7724fea6-57ae-4252-b021-4aac39807031
Nov 5 23:39:12.785589 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 5 23:39:12.804805 ignition[954]: INFO : Ignition 2.22.0
Nov 5 23:39:12.806513 ignition[954]: INFO : Stage: mount
Nov 5 23:39:12.806513 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 23:39:12.806513 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 23:39:12.806513 ignition[954]: INFO : mount: mount passed
Nov 5 23:39:12.812072 ignition[954]: INFO : Ignition finished successfully
Nov 5 23:39:12.809442 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 5 23:39:12.812863 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 5 23:39:13.104158 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 5 23:39:13.105895 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 23:39:13.136416 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (966)
Nov 5 23:39:13.139300 kernel: BTRFS info (device vda6): first mount of filesystem 7724fea6-57ae-4252-b021-4aac39807031
Nov 5 23:39:13.139349 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 5 23:39:13.142444 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 23:39:13.142482 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 23:39:13.144047 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 23:39:13.182408 ignition[983]: INFO : Ignition 2.22.0
Nov 5 23:39:13.182408 ignition[983]: INFO : Stage: files
Nov 5 23:39:13.184441 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 23:39:13.184441 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 23:39:13.184441 ignition[983]: DEBUG : files: compiled without relabeling support, skipping
Nov 5 23:39:13.187985 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 5 23:39:13.187985 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 5 23:39:13.191074 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 5 23:39:13.191074 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 5 23:39:13.191074 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 5 23:39:13.191074 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 5 23:39:13.191074 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Nov 5 23:39:13.188648 unknown[983]: wrote ssh authorized keys file for user: core
Nov 5 23:39:13.274157 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 5 23:39:13.602303 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 5 23:39:13.604450 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 5 23:39:13.606561 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 5 23:39:13.606561 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 23:39:13.606561 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 23:39:13.606561 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 23:39:13.606561 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 23:39:13.606561 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 23:39:13.606561 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 23:39:13.625255 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 23:39:13.627395 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 23:39:13.627395 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 5 23:39:13.645274 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 5 23:39:13.645274 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 5 23:39:13.650610 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Nov 5 23:39:13.670550 systemd-networkd[803]: eth0: Gained IPv6LL
Nov 5 23:39:13.973472 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 5 23:39:14.282490 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 5 23:39:14.282490 ignition[983]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 5 23:39:14.286796 ignition[983]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 23:39:14.286796 ignition[983]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 23:39:14.286796 ignition[983]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 5 23:39:14.286796 ignition[983]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 5 23:39:14.286796 ignition[983]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 5 23:39:14.286796 ignition[983]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 5 23:39:14.286796 ignition[983]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 5 23:39:14.286796 ignition[983]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 5 23:39:14.303071 ignition[983]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 5 23:39:14.306556 ignition[983]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 5 23:39:14.309494 ignition[983]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 5 23:39:14.309494 ignition[983]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 5 23:39:14.309494 ignition[983]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 5 23:39:14.309494 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 23:39:14.309494 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 23:39:14.309494 ignition[983]: INFO : files: files passed
Nov 5 23:39:14.309494 ignition[983]: INFO : Ignition finished successfully
Nov 5 23:39:14.309893 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 5 23:39:14.313274 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 5 23:39:14.315426 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 5 23:39:14.329649 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 5 23:39:14.329798 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 5 23:39:14.333827 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 5 23:39:14.335374 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 23:39:14.335374 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 23:39:14.338666 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 23:39:14.338456 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 23:39:14.340287 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 5 23:39:14.343539 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 5 23:39:14.391519 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 5 23:39:14.391674 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 5 23:39:14.394201 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 5 23:39:14.396598 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 5 23:39:14.398578 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 5 23:39:14.399509 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 5 23:39:14.430878 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 23:39:14.433865 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 5 23:39:14.452109 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 5 23:39:14.453692 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 23:39:14.455926 systemd[1]: Stopped target timers.target - Timer Units.
Nov 5 23:39:14.457826 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 5 23:39:14.457977 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 23:39:14.460598 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 5 23:39:14.462752 systemd[1]: Stopped target basic.target - Basic System.
Nov 5 23:39:14.464487 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 5 23:39:14.466424 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 23:39:14.468580 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 5 23:39:14.470620 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 23:39:14.472674 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 5 23:39:14.474596 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 23:39:14.476818 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 5 23:39:14.478994 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 5 23:39:14.480896 systemd[1]: Stopped target swap.target - Swaps.
Nov 5 23:39:14.482517 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 5 23:39:14.482664 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 23:39:14.485224 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 5 23:39:14.487417 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 23:39:14.489627 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 5 23:39:14.489752 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 23:39:14.491994 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 5 23:39:14.492131 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 5 23:39:14.495492 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 5 23:39:14.495825 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 23:39:14.497821 systemd[1]: Stopped target paths.target - Path Units.
Nov 5 23:39:14.499484 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 5 23:39:14.504475 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 23:39:14.505868 systemd[1]: Stopped target slices.target - Slice Units.
Nov 5 23:39:14.508111 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 5 23:39:14.509951 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 5 23:39:14.510046 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 23:39:14.511692 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 5 23:39:14.511777 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 23:39:14.513467 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 5 23:39:14.513599 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 23:39:14.515663 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 5 23:39:14.515781 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 5 23:39:14.518440 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 5 23:39:14.520516 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 5 23:39:14.520689 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 23:39:14.532055 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 5 23:39:14.533107 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 5 23:39:14.533277 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 23:39:14.535482 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 5 23:39:14.535608 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 23:39:14.542012 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 5 23:39:14.548580 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 5 23:39:14.551272 ignition[1039]: INFO : Ignition 2.22.0
Nov 5 23:39:14.551272 ignition[1039]: INFO : Stage: umount
Nov 5 23:39:14.551272 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 23:39:14.551272 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 23:39:14.551272 ignition[1039]: INFO : umount: umount passed
Nov 5 23:39:14.551272 ignition[1039]: INFO : Ignition finished successfully
Nov 5 23:39:14.552901 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 5 23:39:14.553047 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 5 23:39:14.555796 systemd[1]: Stopped target network.target - Network.
Nov 5 23:39:14.557774 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 5 23:39:14.557849 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 5 23:39:14.559509 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 5 23:39:14.559567 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 5 23:39:14.561475 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 5 23:39:14.561536 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 5 23:39:14.564059 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 5 23:39:14.564108 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 5 23:39:14.566356 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 5 23:39:14.568151 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 5 23:39:14.570970 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 5 23:39:14.575228 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 5 23:39:14.575377 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 5 23:39:14.578976 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Nov 5 23:39:14.579261 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 5 23:39:14.579301 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 23:39:14.584945 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 5 23:39:14.585195 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 5 23:39:14.585329 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 5 23:39:14.590280 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Nov 5 23:39:14.590776 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 5 23:39:14.592101 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 5 23:39:14.592147 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 23:39:14.595232 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 5 23:39:14.596685 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 5 23:39:14.596769 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 23:39:14.599208 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 5 23:39:14.599263 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 5 23:39:14.602337 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 5 23:39:14.602399 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 5 23:39:14.604937 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 23:39:14.610289 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 5 23:39:14.627416 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 5 23:39:14.627600 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 23:39:14.629523 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 5 23:39:14.629644 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 5 23:39:14.631763 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 5 23:39:14.631839 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 5 23:39:14.633716 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 5 23:39:14.633755 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 23:39:14.635705 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 5 23:39:14.635767 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 23:39:14.638548 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 5 23:39:14.638624 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 5 23:39:14.641601 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 5 23:39:14.641681 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 23:39:14.644869 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 5 23:39:14.644932 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 5 23:39:14.648064 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 5 23:39:14.650326 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 5 23:39:14.650422 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 23:39:14.653889 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 5 23:39:14.654021 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 23:39:14.657219 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 23:39:14.657284 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 23:39:14.661183 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 5 23:39:14.663523 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 5 23:39:14.669451 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 5 23:39:14.669575 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 5 23:39:14.671916 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 5 23:39:14.674668 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 5 23:39:14.706453 systemd[1]: Switching root.
Nov 5 23:39:14.739175 systemd-journald[244]: Journal stopped
Nov 5 23:39:15.577191 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Nov 5 23:39:15.577245 kernel: SELinux: policy capability network_peer_controls=1
Nov 5 23:39:15.577256 kernel: SELinux: policy capability open_perms=1
Nov 5 23:39:15.577269 kernel: SELinux: policy capability extended_socket_class=1
Nov 5 23:39:15.577280 kernel: SELinux: policy capability always_check_network=0
Nov 5 23:39:15.577290 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 5 23:39:15.577300 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 5 23:39:15.577310 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 5 23:39:15.577319 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 5 23:39:15.577328 kernel: SELinux: policy capability userspace_initial_context=0
Nov 5 23:39:15.577337 kernel: audit: type=1403 audit(1762385954.918:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 5 23:39:15.577353 systemd[1]: Successfully loaded SELinux policy in 57.919ms.
Nov 5 23:39:15.577375 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.955ms.
Nov 5 23:39:15.577386 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 23:39:15.577492 systemd[1]: Detected virtualization kvm.
Nov 5 23:39:15.577504 systemd[1]: Detected architecture arm64.
Nov 5 23:39:15.577514 systemd[1]: Detected first boot.
Nov 5 23:39:15.577524 systemd[1]: Initializing machine ID from VM UUID.
Nov 5 23:39:15.577534 zram_generator::config[1085]: No configuration found.
Nov 5 23:39:15.577545 kernel: NET: Registered PF_VSOCK protocol family
Nov 5 23:39:15.577557 systemd[1]: Populated /etc with preset unit settings.
Nov 5 23:39:15.577568 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Nov 5 23:39:15.577583 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 5 23:39:15.577594 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 5 23:39:15.577605 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 5 23:39:15.577617 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 5 23:39:15.577628 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 5 23:39:15.577647 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 5 23:39:15.577659 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 5 23:39:15.577672 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 5 23:39:15.577683 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 5 23:39:15.577693 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 5 23:39:15.577703 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 5 23:39:15.577713 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 23:39:15.577724 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 23:39:15.577734 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 5 23:39:15.577745 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 5 23:39:15.577755 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 5 23:39:15.577767 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 23:39:15.577778 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Nov 5 23:39:15.577788 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 23:39:15.577880 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 23:39:15.577895 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 5 23:39:15.577905 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 5 23:39:15.577946 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 5 23:39:15.577965 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 5 23:39:15.577976 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 23:39:15.577987 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 23:39:15.577997 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 23:39:15.578007 systemd[1]: Reached target swap.target - Swaps.
Nov 5 23:39:15.578018 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 5 23:39:15.578028 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 5 23:39:15.578038 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 5 23:39:15.578048 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 23:39:15.578061 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 23:39:15.578071 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 23:39:15.578081 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 5 23:39:15.578092 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 5 23:39:15.578102 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 5 23:39:15.578112 systemd[1]: Mounting media.mount - External Media Directory...
Nov 5 23:39:15.578122 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 5 23:39:15.578132 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 5 23:39:15.578143 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 5 23:39:15.578155 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 5 23:39:15.578165 systemd[1]: Reached target machines.target - Containers.
Nov 5 23:39:15.578175 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 5 23:39:15.578185 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 23:39:15.578195 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 23:39:15.578207 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 5 23:39:15.578217 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 23:39:15.578227 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 23:39:15.578242 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 23:39:15.578253 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 5 23:39:15.578263 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 23:39:15.578274 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 5 23:39:15.578285 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 5 23:39:15.578295 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 5 23:39:15.578305 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 5 23:39:15.578314 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 5 23:39:15.578324 kernel: fuse: init (API version 7.41)
Nov 5 23:39:15.578335 kernel: loop: module loaded
Nov 5 23:39:15.578345 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 23:39:15.578355 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 23:39:15.578365 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 23:39:15.578376 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 23:39:15.578386 kernel: ACPI: bus type drm_connector registered
Nov 5 23:39:15.578410 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 5 23:39:15.578424 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 5 23:39:15.578434 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 23:39:15.578447 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 5 23:39:15.578457 systemd[1]: Stopped verity-setup.service.
Nov 5 23:39:15.578469 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 5 23:39:15.578479 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 5 23:39:15.578489 systemd[1]: Mounted media.mount - External Media Directory.
Nov 5 23:39:15.578535 systemd-journald[1160]: Collecting audit messages is disabled.
Nov 5 23:39:15.578615 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 5 23:39:15.578639 systemd-journald[1160]: Journal started
Nov 5 23:39:15.578663 systemd-journald[1160]: Runtime Journal (/run/log/journal/e9da9a2a7b70471b9db1e200b41114fe) is 6M, max 48.5M, 42.4M free.
Nov 5 23:39:15.306118 systemd[1]: Queued start job for default target multi-user.target.
Nov 5 23:39:15.329685 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 5 23:39:15.330162 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 5 23:39:15.581593 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 23:39:15.582292 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 5 23:39:15.583733 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 5 23:39:15.585165 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 5 23:39:15.587480 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 23:39:15.589151 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 5 23:39:15.590426 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 5 23:39:15.591976 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 23:39:15.592155 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 23:39:15.593812 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 23:39:15.593979 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 23:39:15.595468 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 23:39:15.595662 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 23:39:15.597229 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 5 23:39:15.597421 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 5 23:39:15.598877 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 23:39:15.599042 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 23:39:15.600587 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 23:39:15.602084 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 23:39:15.603772 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 5 23:39:15.605458 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 5 23:39:15.616428 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 23:39:15.621955 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 23:39:15.624567 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 5 23:39:15.626789 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 5 23:39:15.628131 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 5 23:39:15.628171 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 23:39:15.630192 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 5 23:39:15.639382 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 5 23:39:15.640959 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 23:39:15.642563 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 5 23:39:15.644963 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 5 23:39:15.646514 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 23:39:15.647779 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 5 23:39:15.649092 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 23:39:15.651367 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 23:39:15.655679 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 5 23:39:15.657063 systemd-journald[1160]: Time spent on flushing to /var/log/journal/e9da9a2a7b70471b9db1e200b41114fe is 21.394ms for 883 entries.
Nov 5 23:39:15.657063 systemd-journald[1160]: System Journal (/var/log/journal/e9da9a2a7b70471b9db1e200b41114fe) is 8M, max 195.6M, 187.6M free.
Nov 5 23:39:15.689885 systemd-journald[1160]: Received client request to flush runtime journal.
Nov 5 23:39:15.689938 kernel: loop0: detected capacity change from 0 to 100632
Nov 5 23:39:15.664660 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 5 23:39:15.667936 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 5 23:39:15.670235 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 5 23:39:15.672085 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 5 23:39:15.676800 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 5 23:39:15.679753 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 5 23:39:15.699452 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 5 23:39:15.702914 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 23:39:15.707588 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 5 23:39:15.713086 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 23:39:15.713596 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 5 23:39:15.722585 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 5 23:39:15.736408 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Nov 5 23:39:15.736428 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Nov 5 23:39:15.741458 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 23:39:15.745423 kernel: loop1: detected capacity change from 0 to 211168
Nov 5 23:39:15.774427 kernel: loop2: detected capacity change from 0 to 119368
Nov 5 23:39:15.801207 kernel: loop3: detected capacity change from 0 to 100632
Nov 5 23:39:15.808420 kernel: loop4: detected capacity change from 0 to 211168
Nov 5 23:39:15.817558 kernel: loop5: detected capacity change from 0 to 119368
Nov 5 23:39:15.822997 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Nov 5 23:39:15.823469 (sd-merge)[1223]: Merged extensions into '/usr'.
Nov 5 23:39:15.827991 systemd[1]: Reload requested from client PID 1202 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 5 23:39:15.828011 systemd[1]: Reloading...
Nov 5 23:39:15.873806 zram_generator::config[1249]: No configuration found.
Nov 5 23:39:15.951086 ldconfig[1197]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 5 23:39:16.020526 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 5 23:39:16.020967 systemd[1]: Reloading finished in 192 ms.
Nov 5 23:39:16.043434 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 5 23:39:16.045034 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 5 23:39:16.063659 systemd[1]: Starting ensure-sysext.service...
Nov 5 23:39:16.065684 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 23:39:16.075704 systemd[1]: Reload requested from client PID 1283 ('systemctl') (unit ensure-sysext.service)...
Nov 5 23:39:16.075723 systemd[1]: Reloading...
Nov 5 23:39:16.085087 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 5 23:39:16.085112 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 5 23:39:16.085358 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 5 23:39:16.086089 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 5 23:39:16.086812 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 5 23:39:16.087127 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
Nov 5 23:39:16.087240 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
Nov 5 23:39:16.090075 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 23:39:16.090087 systemd-tmpfiles[1284]: Skipping /boot
Nov 5 23:39:16.096454 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 23:39:16.096568 systemd-tmpfiles[1284]: Skipping /boot
Nov 5 23:39:16.125431 zram_generator::config[1311]: No configuration found.
Nov 5 23:39:16.259083 systemd[1]: Reloading finished in 183 ms.
Nov 5 23:39:16.284095 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 5 23:39:16.289883 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 23:39:16.301545 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 23:39:16.304237 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 5 23:39:16.317425 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 5 23:39:16.321146 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 23:39:16.326581 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 23:39:16.330603 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 5 23:39:16.339665 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 5 23:39:16.348624 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 5 23:39:16.353722 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 23:39:16.355015 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 23:39:16.357603 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 23:39:16.360527 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 23:39:16.362578 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 23:39:16.362720 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 23:39:16.373772 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 5 23:39:16.377957 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 23:39:16.378169 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 23:39:16.378289 systemd-udevd[1352]: Using default interface naming scheme 'v255'.
Nov 5 23:39:16.380504 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 5 23:39:16.382619 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 23:39:16.382808 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 23:39:16.384801 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 23:39:16.384963 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 23:39:16.389050 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 5 23:39:16.394437 augenrules[1382]: No rules
Nov 5 23:39:16.394649 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 5 23:39:16.396611 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 23:39:16.403594 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 23:39:16.405721 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 23:39:16.417283 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 5 23:39:16.425702 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 23:39:16.427154 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 23:39:16.430301 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 23:39:16.433677 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 23:39:16.438587 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 23:39:16.441872 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 23:39:16.443246 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 23:39:16.444941 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 23:39:16.449626 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 23:39:16.450746 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 5 23:39:16.453425 systemd[1]: Finished ensure-sysext.service.
Nov 5 23:39:16.454677 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 23:39:16.454882 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 23:39:16.457081 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 23:39:16.457255 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 23:39:16.467280 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 23:39:16.470807 augenrules[1422]: /sbin/augenrules: No change
Nov 5 23:39:16.472584 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 5 23:39:16.474127 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 23:39:16.474309 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 23:39:16.476878 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 23:39:16.477043 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 23:39:16.481780 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Nov 5 23:39:16.485122 augenrules[1449]: No rules
Nov 5 23:39:16.485993 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 23:39:16.486223 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 23:39:16.489771 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 23:39:16.523103 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 5 23:39:16.525803 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 5 23:39:16.560468 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 5 23:39:16.600423 systemd-resolved[1351]: Positive Trust Anchors:
Nov 5 23:39:16.600519 systemd-resolved[1351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 23:39:16.600552 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 23:39:16.606037 systemd-networkd[1427]: lo: Link UP
Nov 5 23:39:16.606047 systemd-networkd[1427]: lo: Gained carrier
Nov 5 23:39:16.607516 systemd-networkd[1427]: Enumeration completed
Nov 5 23:39:16.607761 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 23:39:16.608364 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 5 23:39:16.608483 systemd-networkd[1427]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 23:39:16.609187 systemd-networkd[1427]: eth0: Link UP
Nov 5 23:39:16.609410 systemd-networkd[1427]: eth0: Gained carrier
Nov 5 23:39:16.609500 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 5 23:39:16.610711 systemd-resolved[1351]: Defaulting to hostname 'linux'.
Nov 5 23:39:16.611864 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 5 23:39:16.614482 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 5 23:39:16.615867 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 23:39:16.617963 systemd[1]: Reached target network.target - Network. Nov 5 23:39:16.619313 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 23:39:16.621137 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 5 23:39:16.624796 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 23:39:16.626056 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 5 23:39:16.627472 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 5 23:39:16.628940 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 5 23:39:16.628980 systemd-networkd[1427]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 5 23:39:16.630495 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 5 23:39:16.630530 systemd[1]: Reached target paths.target - Path Units. Nov 5 23:39:16.631527 systemd[1]: Reached target time-set.target - System Time Set. Nov 5 23:39:16.632822 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 5 23:39:16.634179 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 5 23:39:16.636465 systemd[1]: Reached target timers.target - Timer Units. Nov 5 23:39:16.637910 systemd-timesyncd[1441]: Network configuration changed, trying to establish connection. 
Nov 5 23:39:16.638370 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 5 23:39:16.641166 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 5 23:39:16.644131 systemd-timesyncd[1441]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 5 23:39:16.645082 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 5 23:39:16.646435 systemd-timesyncd[1441]: Initial clock synchronization to Wed 2025-11-05 23:39:16.668971 UTC. Nov 5 23:39:16.647767 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 5 23:39:16.649198 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 5 23:39:16.660191 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 5 23:39:16.661850 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 5 23:39:16.666449 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 5 23:39:16.668088 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 5 23:39:16.670146 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 23:39:16.671448 systemd[1]: Reached target basic.target - Basic System. Nov 5 23:39:16.672739 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 5 23:39:16.672773 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 5 23:39:16.673921 systemd[1]: Starting containerd.service - containerd container runtime... Nov 5 23:39:16.677646 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 5 23:39:16.681710 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 5 23:39:16.684413 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Nov 5 23:39:16.693770 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 5 23:39:16.694981 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 5 23:39:16.696182 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 5 23:39:16.696716 jq[1492]: false Nov 5 23:39:16.698446 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 5 23:39:16.701669 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 5 23:39:16.706012 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 5 23:39:16.709841 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 5 23:39:16.712071 extend-filesystems[1493]: Found /dev/vda6 Nov 5 23:39:16.713546 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 5 23:39:16.714095 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 5 23:39:16.715701 systemd[1]: Starting update-engine.service - Update Engine... Nov 5 23:39:16.717235 extend-filesystems[1493]: Found /dev/vda9 Nov 5 23:39:16.717869 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 5 23:39:16.718679 extend-filesystems[1493]: Checking size of /dev/vda9 Nov 5 23:39:16.722419 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 5 23:39:16.724857 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 5 23:39:16.729090 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 5 23:39:16.729678 systemd[1]: motdgen.service: Deactivated successfully. 
Nov 5 23:39:16.731457 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 5 23:39:16.736823 jq[1510]: true Nov 5 23:39:16.737061 extend-filesystems[1493]: Resized partition /dev/vda9 Nov 5 23:39:16.734349 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 5 23:39:16.734564 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 5 23:39:16.744424 extend-filesystems[1522]: resize2fs 1.47.3 (8-Jul-2025) Nov 5 23:39:16.761024 (ntainerd)[1521]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 5 23:39:16.765840 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 5 23:39:16.769282 jq[1520]: true Nov 5 23:39:16.769687 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 23:39:16.770199 update_engine[1508]: I20251105 23:39:16.768195 1508 main.cc:92] Flatcar Update Engine starting Nov 5 23:39:16.792240 tar[1517]: linux-arm64/LICENSE Nov 5 23:39:16.792573 tar[1517]: linux-arm64/helm Nov 5 23:39:16.792535 dbus-daemon[1490]: [system] SELinux support is enabled Nov 5 23:39:16.792749 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 5 23:39:16.798248 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 5 23:39:16.798285 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 5 23:39:16.800298 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 5 23:39:16.800325 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Nov 5 23:39:16.803217 update_engine[1508]: I20251105 23:39:16.803043 1508 update_check_scheduler.cc:74] Next update check in 3m13s Nov 5 23:39:16.805977 systemd[1]: Started update-engine.service - Update Engine. Nov 5 23:39:16.808461 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 5 23:39:16.810694 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 5 23:39:16.823601 extend-filesystems[1522]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 5 23:39:16.823601 extend-filesystems[1522]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 5 23:39:16.823601 extend-filesystems[1522]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 5 23:39:16.836521 extend-filesystems[1493]: Resized filesystem in /dev/vda9 Nov 5 23:39:16.842766 bash[1551]: Updated "/home/core/.ssh/authorized_keys" Nov 5 23:39:16.826733 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 5 23:39:16.829587 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 5 23:39:16.841673 systemd-logind[1503]: Watching system buttons on /dev/input/event0 (Power Button) Nov 5 23:39:16.841952 systemd-logind[1503]: New seat seat0. Nov 5 23:39:16.867186 systemd[1]: Started systemd-logind.service - User Login Management. Nov 5 23:39:16.870429 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 23:39:16.872248 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 5 23:39:16.879212 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Nov 5 23:39:16.905091 locksmithd[1553]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 5 23:39:16.982789 containerd[1521]: time="2025-11-05T23:39:16Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 5 23:39:16.983767 containerd[1521]: time="2025-11-05T23:39:16.983708520Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 5 23:39:17.002226 containerd[1521]: time="2025-11-05T23:39:17.002163678Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.778µs" Nov 5 23:39:17.002330 containerd[1521]: time="2025-11-05T23:39:17.002211993Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 5 23:39:17.002330 containerd[1521]: time="2025-11-05T23:39:17.002280259Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 5 23:39:17.002563 containerd[1521]: time="2025-11-05T23:39:17.002521555Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 5 23:39:17.002628 containerd[1521]: time="2025-11-05T23:39:17.002607729Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 5 23:39:17.002683 containerd[1521]: time="2025-11-05T23:39:17.002667662Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 23:39:17.002826 containerd[1521]: time="2025-11-05T23:39:17.002804354Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 23:39:17.002846 containerd[1521]: time="2025-11-05T23:39:17.002826950Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 23:39:17.003347 containerd[1521]: time="2025-11-05T23:39:17.003298963Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 23:39:17.003347 containerd[1521]: time="2025-11-05T23:39:17.003331093Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 23:39:17.003402 containerd[1521]: time="2025-11-05T23:39:17.003346397Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 23:39:17.003402 containerd[1521]: time="2025-11-05T23:39:17.003355612Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 5 23:39:17.003583 containerd[1521]: time="2025-11-05T23:39:17.003517704Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 5 23:39:17.004028 containerd[1521]: time="2025-11-05T23:39:17.003933630Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 23:39:17.004061 containerd[1521]: time="2025-11-05T23:39:17.004043361Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 23:39:17.004088 containerd[1521]: time="2025-11-05T23:39:17.004061509Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 5 23:39:17.004108 containerd[1521]: time="2025-11-05T23:39:17.004094160Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 5 23:39:17.004544 containerd[1521]: time="2025-11-05T23:39:17.004512530Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 5 23:39:17.004718 containerd[1521]: time="2025-11-05T23:39:17.004696336Z" level=info msg="metadata content store policy set" policy=shared Nov 5 23:39:17.009794 containerd[1521]: time="2025-11-05T23:39:17.009733729Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 5 23:39:17.009892 containerd[1521]: time="2025-11-05T23:39:17.009812251Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 5 23:39:17.009892 containerd[1521]: time="2025-11-05T23:39:17.009829317Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 5 23:39:17.009892 containerd[1521]: time="2025-11-05T23:39:17.009842257Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 5 23:39:17.009892 containerd[1521]: time="2025-11-05T23:39:17.009855839Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 5 23:39:17.009892 containerd[1521]: time="2025-11-05T23:39:17.009870702Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 5 23:39:17.009892 containerd[1521]: time="2025-11-05T23:39:17.009883321Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 5 23:39:17.009892 containerd[1521]: time="2025-11-05T23:39:17.009894659Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 5 23:39:17.010004 containerd[1521]: time="2025-11-05T23:39:17.009915932Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service 
type=io.containerd.service.v1 Nov 5 23:39:17.010004 containerd[1521]: time="2025-11-05T23:39:17.009927550Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 5 23:39:17.010004 containerd[1521]: time="2025-11-05T23:39:17.009937485Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 5 23:39:17.010004 containerd[1521]: time="2025-11-05T23:39:17.009951107Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 5 23:39:17.010133 containerd[1521]: time="2025-11-05T23:39:17.010106989Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 5 23:39:17.010461 containerd[1521]: time="2025-11-05T23:39:17.010380214Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 5 23:39:17.010610 containerd[1521]: time="2025-11-05T23:39:17.010549517Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 5 23:39:17.010724 containerd[1521]: time="2025-11-05T23:39:17.010649633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 5 23:39:17.010869 containerd[1521]: time="2025-11-05T23:39:17.010816532Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 5 23:39:17.010923 containerd[1521]: time="2025-11-05T23:39:17.010908515Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 5 23:39:17.010983 containerd[1521]: time="2025-11-05T23:39:17.010968368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 5 23:39:17.011078 containerd[1521]: time="2025-11-05T23:39:17.011028221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 5 23:39:17.011310 containerd[1521]: 
time="2025-11-05T23:39:17.011290750Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 5 23:39:17.011335 containerd[1521]: time="2025-11-05T23:39:17.011322920Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 5 23:39:17.011354 containerd[1521]: time="2025-11-05T23:39:17.011340747Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 5 23:39:17.011583 containerd[1521]: time="2025-11-05T23:39:17.011568381Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 5 23:39:17.011610 containerd[1521]: time="2025-11-05T23:39:17.011591657Z" level=info msg="Start snapshots syncer" Nov 5 23:39:17.011645 containerd[1521]: time="2025-11-05T23:39:17.011622145Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 5 23:39:17.011891 containerd[1521]: time="2025-11-05T23:39:17.011846734Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 5 23:39:17.011999 containerd[1521]: time="2025-11-05T23:39:17.011910353Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 5 23:39:17.011999 containerd[1521]: time="2025-11-05T23:39:17.011986191Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 5 23:39:17.012157 containerd[1521]: time="2025-11-05T23:39:17.012134822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 5 23:39:17.012223 containerd[1521]: time="2025-11-05T23:39:17.012170958Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 5 23:39:17.012223 containerd[1521]: time="2025-11-05T23:39:17.012186783Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 5 23:39:17.012223 containerd[1521]: time="2025-11-05T23:39:17.012199322Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 5 23:39:17.012223 containerd[1521]: time="2025-11-05T23:39:17.012212543Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 5 23:39:17.012223 containerd[1521]: time="2025-11-05T23:39:17.012223600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 5 23:39:17.012307 containerd[1521]: time="2025-11-05T23:39:17.012239264Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 5 23:39:17.012307 containerd[1521]: time="2025-11-05T23:39:17.012267348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 5 23:39:17.012307 containerd[1521]: time="2025-11-05T23:39:17.012278606Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 5 23:39:17.012307 containerd[1521]: time="2025-11-05T23:39:17.012290304Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 5 23:39:17.012477 containerd[1521]: time="2025-11-05T23:39:17.012330086Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 23:39:17.012477 containerd[1521]: time="2025-11-05T23:39:17.012346431Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 23:39:17.012477 containerd[1521]: time="2025-11-05T23:39:17.012357408Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 23:39:17.012477 containerd[1521]: time="2025-11-05T23:39:17.012367664Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 23:39:17.012477 containerd[1521]: time="2025-11-05T23:39:17.012375556Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 5 23:39:17.012581 containerd[1521]: time="2025-11-05T23:39:17.012385452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 5 23:39:17.012581 containerd[1521]: time="2025-11-05T23:39:17.012549186Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 5 23:39:17.012666 containerd[1521]: time="2025-11-05T23:39:17.012641209Z" level=info msg="runtime interface created" Nov 5 23:39:17.012716 containerd[1521]: time="2025-11-05T23:39:17.012652467Z" level=info msg="created NRI interface" Nov 5 23:39:17.012835 containerd[1521]: time="2025-11-05T23:39:17.012778823Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 5 23:39:17.012934 containerd[1521]: time="2025-11-05T23:39:17.012885950Z" level=info msg="Connect containerd service" Nov 5 23:39:17.013024 containerd[1521]: time="2025-11-05T23:39:17.013009141Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 5 23:39:17.016365 containerd[1521]: 
time="2025-11-05T23:39:17.016320731Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 23:39:17.100600 tar[1517]: linux-arm64/README.md Nov 5 23:39:17.106322 containerd[1521]: time="2025-11-05T23:39:17.106266625Z" level=info msg="Start subscribing containerd event" Nov 5 23:39:17.106583 containerd[1521]: time="2025-11-05T23:39:17.106477192Z" level=info msg="Start recovering state" Nov 5 23:39:17.106696 containerd[1521]: time="2025-11-05T23:39:17.106648378Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 5 23:39:17.106758 containerd[1521]: time="2025-11-05T23:39:17.106723455Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 5 23:39:17.109044 containerd[1521]: time="2025-11-05T23:39:17.108621446Z" level=info msg="Start event monitor" Nov 5 23:39:17.109044 containerd[1521]: time="2025-11-05T23:39:17.108668720Z" level=info msg="Start cni network conf syncer for default" Nov 5 23:39:17.109044 containerd[1521]: time="2025-11-05T23:39:17.108678535Z" level=info msg="Start streaming server" Nov 5 23:39:17.109044 containerd[1521]: time="2025-11-05T23:39:17.108693118Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 5 23:39:17.109044 containerd[1521]: time="2025-11-05T23:39:17.108701651Z" level=info msg="runtime interface starting up..." Nov 5 23:39:17.109044 containerd[1521]: time="2025-11-05T23:39:17.108708742Z" level=info msg="starting plugins..." 
Nov 5 23:39:17.109044 containerd[1521]: time="2025-11-05T23:39:17.108728973Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 5 23:39:17.109044 containerd[1521]: time="2025-11-05T23:39:17.108884135Z" level=info msg="containerd successfully booted in 0.126508s" Nov 5 23:39:17.109455 systemd[1]: Started containerd.service - containerd container runtime. Nov 5 23:39:17.115453 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 5 23:39:17.723993 sshd_keygen[1515]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 5 23:39:17.742948 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 5 23:39:17.746520 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 5 23:39:17.761956 systemd[1]: issuegen.service: Deactivated successfully. Nov 5 23:39:17.762158 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 5 23:39:17.766554 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 5 23:39:17.784464 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 5 23:39:17.787534 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 5 23:39:17.789679 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 5 23:39:17.791156 systemd[1]: Reached target getty.target - Login Prompts. Nov 5 23:39:17.958576 systemd-networkd[1427]: eth0: Gained IPv6LL Nov 5 23:39:17.960906 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 23:39:17.962773 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 23:39:17.965249 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 5 23:39:17.967695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 23:39:17.985291 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 23:39:18.005623 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Nov 5 23:39:18.008058 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 5 23:39:18.008534 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 5 23:39:18.011665 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 5 23:39:18.586762 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 23:39:18.588497 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 5 23:39:18.589917 systemd[1]: Startup finished in 2.067s (kernel) + 5.298s (initrd) + 3.729s (userspace) = 11.095s. Nov 5 23:39:18.590587 (kubelet)[1628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 23:39:18.946891 kubelet[1628]: E1105 23:39:18.946783 1628 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 23:39:18.949427 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 23:39:18.949563 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 23:39:18.949887 systemd[1]: kubelet.service: Consumed 757ms CPU time, 257.5M memory peak. Nov 5 23:39:22.816985 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 5 23:39:22.818026 systemd[1]: Started sshd@0-10.0.0.43:22-10.0.0.1:32888.service - OpenSSH per-connection server daemon (10.0.0.1:32888). 
Nov 5 23:39:22.893184 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 32888 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:39:22.895076 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:39:22.901978 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 5 23:39:22.902855 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 5 23:39:22.909039 systemd-logind[1503]: New session 1 of user core. Nov 5 23:39:22.923518 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 5 23:39:22.925925 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 5 23:39:22.947476 (systemd)[1647]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 5 23:39:22.952819 systemd-logind[1503]: New session c1 of user core. Nov 5 23:39:23.063023 systemd[1647]: Queued start job for default target default.target. Nov 5 23:39:23.073438 systemd[1647]: Created slice app.slice - User Application Slice. Nov 5 23:39:23.073465 systemd[1647]: Reached target paths.target - Paths. Nov 5 23:39:23.073504 systemd[1647]: Reached target timers.target - Timers. Nov 5 23:39:23.074734 systemd[1647]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 5 23:39:23.085443 systemd[1647]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 5 23:39:23.085726 systemd[1647]: Reached target sockets.target - Sockets. Nov 5 23:39:23.085893 systemd[1647]: Reached target basic.target - Basic System. Nov 5 23:39:23.086002 systemd[1647]: Reached target default.target - Main User Target. Nov 5 23:39:23.086060 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 5 23:39:23.086152 systemd[1647]: Startup finished in 126ms. Nov 5 23:39:23.087407 systemd[1]: Started session-1.scope - Session 1 of User core. 
Nov 5 23:39:23.153621 systemd[1]: Started sshd@1-10.0.0.43:22-10.0.0.1:32898.service - OpenSSH per-connection server daemon (10.0.0.1:32898). Nov 5 23:39:23.216649 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 32898 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:39:23.218237 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:39:23.222406 systemd-logind[1503]: New session 2 of user core. Nov 5 23:39:23.242847 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 5 23:39:23.296166 sshd[1661]: Connection closed by 10.0.0.1 port 32898 Nov 5 23:39:23.296579 sshd-session[1658]: pam_unix(sshd:session): session closed for user core Nov 5 23:39:23.308408 systemd[1]: sshd@1-10.0.0.43:22-10.0.0.1:32898.service: Deactivated successfully. Nov 5 23:39:23.310727 systemd[1]: session-2.scope: Deactivated successfully. Nov 5 23:39:23.311417 systemd-logind[1503]: Session 2 logged out. Waiting for processes to exit. Nov 5 23:39:23.316682 systemd[1]: Started sshd@2-10.0.0.43:22-10.0.0.1:32906.service - OpenSSH per-connection server daemon (10.0.0.1:32906). Nov 5 23:39:23.317364 systemd-logind[1503]: Removed session 2. Nov 5 23:39:23.380661 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 32906 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:39:23.381944 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:39:23.386353 systemd-logind[1503]: New session 3 of user core. Nov 5 23:39:23.400844 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 5 23:39:23.449313 sshd[1670]: Connection closed by 10.0.0.1 port 32906 Nov 5 23:39:23.449168 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Nov 5 23:39:23.463862 systemd[1]: sshd@2-10.0.0.43:22-10.0.0.1:32906.service: Deactivated successfully. Nov 5 23:39:23.465694 systemd[1]: session-3.scope: Deactivated successfully. 
Nov 5 23:39:23.467076 systemd-logind[1503]: Session 3 logged out. Waiting for processes to exit.
Nov 5 23:39:23.469227 systemd[1]: Started sshd@3-10.0.0.43:22-10.0.0.1:32922.service - OpenSSH per-connection server daemon (10.0.0.1:32922).
Nov 5 23:39:23.470255 systemd-logind[1503]: Removed session 3.
Nov 5 23:39:23.534347 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 32922 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI
Nov 5 23:39:23.535867 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 23:39:23.540794 systemd-logind[1503]: New session 4 of user core.
Nov 5 23:39:23.556609 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 5 23:39:23.609852 sshd[1680]: Connection closed by 10.0.0.1 port 32922
Nov 5 23:39:23.610568 sshd-session[1676]: pam_unix(sshd:session): session closed for user core
Nov 5 23:39:23.624157 systemd[1]: sshd@3-10.0.0.43:22-10.0.0.1:32922.service: Deactivated successfully.
Nov 5 23:39:23.627908 systemd[1]: session-4.scope: Deactivated successfully.
Nov 5 23:39:23.628738 systemd-logind[1503]: Session 4 logged out. Waiting for processes to exit.
Nov 5 23:39:23.630894 systemd[1]: Started sshd@4-10.0.0.43:22-10.0.0.1:32928.service - OpenSSH per-connection server daemon (10.0.0.1:32928).
Nov 5 23:39:23.631805 systemd-logind[1503]: Removed session 4.
Nov 5 23:39:23.694381 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 32928 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI
Nov 5 23:39:23.695727 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 23:39:23.700492 systemd-logind[1503]: New session 5 of user core.
Nov 5 23:39:23.716716 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 5 23:39:23.773472 sudo[1690]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 5 23:39:23.773781 sudo[1690]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 23:39:23.795736 sudo[1690]: pam_unix(sudo:session): session closed for user root
Nov 5 23:39:23.798347 sshd[1689]: Connection closed by 10.0.0.1 port 32928
Nov 5 23:39:23.798741 sshd-session[1686]: pam_unix(sshd:session): session closed for user core
Nov 5 23:39:23.812411 systemd[1]: sshd@4-10.0.0.43:22-10.0.0.1:32928.service: Deactivated successfully.
Nov 5 23:39:23.814451 systemd[1]: session-5.scope: Deactivated successfully.
Nov 5 23:39:23.816277 systemd-logind[1503]: Session 5 logged out. Waiting for processes to exit.
Nov 5 23:39:23.819915 systemd-logind[1503]: Removed session 5.
Nov 5 23:39:23.821198 systemd[1]: Started sshd@5-10.0.0.43:22-10.0.0.1:32944.service - OpenSSH per-connection server daemon (10.0.0.1:32944).
Nov 5 23:39:23.873146 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 32944 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI
Nov 5 23:39:23.874921 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 23:39:23.879411 systemd-logind[1503]: New session 6 of user core.
Nov 5 23:39:23.890602 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 5 23:39:23.948182 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 5 23:39:23.948467 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 23:39:24.042034 sudo[1701]: pam_unix(sudo:session): session closed for user root
Nov 5 23:39:24.047428 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 5 23:39:24.047707 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 23:39:24.057659 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 23:39:24.103756 augenrules[1723]: No rules
Nov 5 23:39:24.105110 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 23:39:24.105321 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 23:39:24.106320 sudo[1700]: pam_unix(sudo:session): session closed for user root
Nov 5 23:39:24.107720 sshd[1699]: Connection closed by 10.0.0.1 port 32944
Nov 5 23:39:24.108597 sshd-session[1696]: pam_unix(sshd:session): session closed for user core
Nov 5 23:39:24.121979 systemd[1]: sshd@5-10.0.0.43:22-10.0.0.1:32944.service: Deactivated successfully.
Nov 5 23:39:24.124521 systemd[1]: session-6.scope: Deactivated successfully.
Nov 5 23:39:24.128561 systemd-logind[1503]: Session 6 logged out. Waiting for processes to exit.
Nov 5 23:39:24.128984 systemd[1]: Started sshd@6-10.0.0.43:22-10.0.0.1:32956.service - OpenSSH per-connection server daemon (10.0.0.1:32956).
Nov 5 23:39:24.133311 systemd-logind[1503]: Removed session 6.
Nov 5 23:39:24.201468 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 32956 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI
Nov 5 23:39:24.203427 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 23:39:24.209429 systemd-logind[1503]: New session 7 of user core.
Nov 5 23:39:24.222639 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 5 23:39:24.274561 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 5 23:39:24.274812 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 23:39:24.570785 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 5 23:39:24.589807 (dockerd)[1757]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 5 23:39:24.791682 dockerd[1757]: time="2025-11-05T23:39:24.791614977Z" level=info msg="Starting up"
Nov 5 23:39:24.792778 dockerd[1757]: time="2025-11-05T23:39:24.792751702Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 5 23:39:24.805139 dockerd[1757]: time="2025-11-05T23:39:24.804998990Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 5 23:39:24.840080 dockerd[1757]: time="2025-11-05T23:39:24.839936627Z" level=info msg="Loading containers: start."
Nov 5 23:39:24.851507 kernel: Initializing XFRM netlink socket
Nov 5 23:39:25.076507 systemd-networkd[1427]: docker0: Link UP
Nov 5 23:39:25.080454 dockerd[1757]: time="2025-11-05T23:39:25.080407081Z" level=info msg="Loading containers: done."
Nov 5 23:39:25.095439 dockerd[1757]: time="2025-11-05T23:39:25.095314922Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 5 23:39:25.095439 dockerd[1757]: time="2025-11-05T23:39:25.095424423Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 5 23:39:25.095608 dockerd[1757]: time="2025-11-05T23:39:25.095515027Z" level=info msg="Initializing buildkit"
Nov 5 23:39:25.127916 dockerd[1757]: time="2025-11-05T23:39:25.127859376Z" level=info msg="Completed buildkit initialization"
Nov 5 23:39:25.134896 dockerd[1757]: time="2025-11-05T23:39:25.134840217Z" level=info msg="Daemon has completed initialization"
Nov 5 23:39:25.135008 dockerd[1757]: time="2025-11-05T23:39:25.134915247Z" level=info msg="API listen on /run/docker.sock"
Nov 5 23:39:25.135105 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 5 23:39:25.666482 containerd[1521]: time="2025-11-05T23:39:25.666441397Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Nov 5 23:39:26.311420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1991197465.mount: Deactivated successfully.
Nov 5 23:39:27.402282 containerd[1521]: time="2025-11-05T23:39:27.402222588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 23:39:27.405582 containerd[1521]: time="2025-11-05T23:39:27.405533330Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390230"
Nov 5 23:39:27.406823 containerd[1521]: time="2025-11-05T23:39:27.406791517Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 23:39:27.410549 containerd[1521]: time="2025-11-05T23:39:27.410507589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 23:39:27.411928 containerd[1521]: time="2025-11-05T23:39:27.411884633Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.745399357s"
Nov 5 23:39:27.411928 containerd[1521]: time="2025-11-05T23:39:27.411929590Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\""
Nov 5 23:39:27.413167 containerd[1521]: time="2025-11-05T23:39:27.413127968Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Nov 5 23:39:28.691074 containerd[1521]: time="2025-11-05T23:39:28.691025734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 23:39:28.691854 containerd[1521]: time="2025-11-05T23:39:28.691482203Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547919"
Nov 5 23:39:28.692545 containerd[1521]: time="2025-11-05T23:39:28.692515634Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 23:39:28.695317 containerd[1521]: time="2025-11-05T23:39:28.695281911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 23:39:28.696404 containerd[1521]: time="2025-11-05T23:39:28.696353611Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.283188133s"
Nov 5 23:39:28.696555 containerd[1521]: time="2025-11-05T23:39:28.696386956Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\""
Nov 5 23:39:28.696917 containerd[1521]: time="2025-11-05T23:39:28.696896586Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Nov 5 23:39:29.011963 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 5 23:39:29.013338 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 23:39:29.167949 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 23:39:29.171984 (kubelet)[2045]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 23:39:29.206285 kubelet[2045]: E1105 23:39:29.206226 2045 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 23:39:29.209600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 23:39:29.209731 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 23:39:29.211543 systemd[1]: kubelet.service: Consumed 145ms CPU time, 107M memory peak.
Nov 5 23:39:30.010269 containerd[1521]: time="2025-11-05T23:39:30.010221551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 23:39:30.011052 containerd[1521]: time="2025-11-05T23:39:30.010986145Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295979"
Nov 5 23:39:30.011696 containerd[1521]: time="2025-11-05T23:39:30.011671646Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 23:39:30.014784 containerd[1521]: time="2025-11-05T23:39:30.014731584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 23:39:30.016011 containerd[1521]: time="2025-11-05T23:39:30.015772444Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.318768696s"
Nov 5 23:39:30.016011 containerd[1521]: time="2025-11-05T23:39:30.015804385Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\""
Nov 5 23:39:30.016313 containerd[1521]: time="2025-11-05T23:39:30.016290232Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Nov 5 23:39:30.966244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3804247044.mount: Deactivated successfully.
Nov 5 23:39:31.211938 containerd[1521]: time="2025-11-05T23:39:31.211870360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 23:39:31.212405 containerd[1521]: time="2025-11-05T23:39:31.212355946Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240108"
Nov 5 23:39:31.213434 containerd[1521]: time="2025-11-05T23:39:31.213380712Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 23:39:31.215427 containerd[1521]: time="2025-11-05T23:39:31.215375890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 23:39:31.216342 containerd[1521]: time="2025-11-05T23:39:31.216287945Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.199960889s"
Nov 5 23:39:31.216342 containerd[1521]: time="2025-11-05T23:39:31.216329491Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\""
Nov 5 23:39:31.217148 containerd[1521]: time="2025-11-05T23:39:31.216941277Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Nov 5 23:39:31.779572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2320709968.mount: Deactivated successfully.
Nov 5 23:39:33.359874 containerd[1521]: time="2025-11-05T23:39:33.359812965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 23:39:33.372585 containerd[1521]: time="2025-11-05T23:39:33.372539218Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Nov 5 23:39:33.386710 containerd[1521]: time="2025-11-05T23:39:33.386664486Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 23:39:33.398813 containerd[1521]: time="2025-11-05T23:39:33.398740739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 23:39:33.399916 containerd[1521]: time="2025-11-05T23:39:33.399882812Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.182874576s"
Nov 5 23:39:33.399987 containerd[1521]: time="2025-11-05T23:39:33.399920193Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Nov 5 23:39:33.400555 containerd[1521]: time="2025-11-05T23:39:33.400477221Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 5 23:39:33.832533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1344472614.mount: Deactivated successfully.
Nov 5 23:39:33.836946 containerd[1521]: time="2025-11-05T23:39:33.836896286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 23:39:33.837443 containerd[1521]: time="2025-11-05T23:39:33.837410051Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Nov 5 23:39:33.838490 containerd[1521]: time="2025-11-05T23:39:33.838454230Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 23:39:33.840617 containerd[1521]: time="2025-11-05T23:39:33.840580608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 23:39:33.841539 containerd[1521]: time="2025-11-05T23:39:33.841362802Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 440.855084ms"
Nov 5 23:39:33.841539 containerd[1521]: time="2025-11-05T23:39:33.841406186Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Nov 5 23:39:33.841912 containerd[1521]: time="2025-11-05T23:39:33.841875886Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Nov 5 23:39:34.317775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1818881138.mount: Deactivated successfully.
Nov 5 23:39:36.243432 containerd[1521]: time="2025-11-05T23:39:36.243043347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 23:39:36.244236 containerd[1521]: time="2025-11-05T23:39:36.243571908Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465859"
Nov 5 23:39:36.245011 containerd[1521]: time="2025-11-05T23:39:36.244967266Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 23:39:36.247860 containerd[1521]: time="2025-11-05T23:39:36.247821289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 23:39:36.249096 containerd[1521]: time="2025-11-05T23:39:36.249061816Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.407145788s"
Nov 5 23:39:36.249096 containerd[1521]: time="2025-11-05T23:39:36.249092510Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Nov 5 23:39:39.261986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 5 23:39:39.263633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 23:39:39.433970 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 23:39:39.439253 (kubelet)[2208]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 23:39:39.478174 kubelet[2208]: E1105 23:39:39.478099 2208 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 23:39:39.480835 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 23:39:39.481085 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 23:39:39.481471 systemd[1]: kubelet.service: Consumed 148ms CPU time, 107.8M memory peak.
Nov 5 23:39:42.680509 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 23:39:42.680654 systemd[1]: kubelet.service: Consumed 148ms CPU time, 107.8M memory peak.
Nov 5 23:39:42.682830 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 23:39:42.703284 systemd[1]: Reload requested from client PID 2224 ('systemctl') (unit session-7.scope)...
Nov 5 23:39:42.703299 systemd[1]: Reloading...
Nov 5 23:39:42.763732 zram_generator::config[2267]: No configuration found.
Nov 5 23:39:43.028575 systemd[1]: Reloading finished in 324 ms.
Nov 5 23:39:43.079072 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 5 23:39:43.079164 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 5 23:39:43.079472 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 23:39:43.079532 systemd[1]: kubelet.service: Consumed 98ms CPU time, 94.9M memory peak.
Nov 5 23:39:43.081255 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 23:39:43.229622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 23:39:43.234448 (kubelet)[2311]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 5 23:39:43.271111 kubelet[2311]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 23:39:43.271111 kubelet[2311]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 5 23:39:43.271111 kubelet[2311]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 23:39:43.271499 kubelet[2311]: I1105 23:39:43.271160 2311 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 5 23:39:44.034637 kubelet[2311]: I1105 23:39:44.034588 2311 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 5 23:39:44.035422 kubelet[2311]: I1105 23:39:44.034788 2311 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 5 23:39:44.035422 kubelet[2311]: I1105 23:39:44.035045 2311 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 5 23:39:44.058865 kubelet[2311]: E1105 23:39:44.058811 2311 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 5 23:39:44.060492 kubelet[2311]: I1105 23:39:44.060459 2311 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 5 23:39:44.069220 kubelet[2311]: I1105 23:39:44.069172 2311 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 5 23:39:44.072458 kubelet[2311]: I1105 23:39:44.072422 2311 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 5 23:39:44.073664 kubelet[2311]: I1105 23:39:44.073598 2311 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 5 23:39:44.073886 kubelet[2311]: I1105 23:39:44.073656 2311 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 5 23:39:44.074004 kubelet[2311]: I1105 23:39:44.073942 2311 topology_manager.go:138] "Creating topology manager with none policy"
Nov 5 23:39:44.074004 kubelet[2311]: I1105 23:39:44.073953 2311 container_manager_linux.go:303] "Creating device plugin manager"
Nov 5 23:39:44.074201 kubelet[2311]: I1105 23:39:44.074169 2311 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 23:39:44.076955 kubelet[2311]: I1105 23:39:44.076922 2311 kubelet.go:480] "Attempting to sync node with API server"
Nov 5 23:39:44.076989 kubelet[2311]: I1105 23:39:44.076959 2311 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 5 23:39:44.077029 kubelet[2311]: I1105 23:39:44.076992 2311 kubelet.go:386] "Adding apiserver pod source"
Nov 5 23:39:44.078057 kubelet[2311]: I1105 23:39:44.078025 2311 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 5 23:39:44.079695 kubelet[2311]: I1105 23:39:44.079642 2311 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 5 23:39:44.080154 kubelet[2311]: E1105 23:39:44.080090 2311 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 5 23:39:44.080527 kubelet[2311]: I1105 23:39:44.080501 2311 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 5 23:39:44.080659 kubelet[2311]: W1105 23:39:44.080639 2311 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 5 23:39:44.081119 kubelet[2311]: E1105 23:39:44.081064 2311 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 5 23:39:44.083457 kubelet[2311]: I1105 23:39:44.083433 2311 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 5 23:39:44.083726 kubelet[2311]: I1105 23:39:44.083484 2311 server.go:1289] "Started kubelet"
Nov 5 23:39:44.085629 kubelet[2311]: I1105 23:39:44.085286 2311 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 5 23:39:44.088257 kubelet[2311]: I1105 23:39:44.088223 2311 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 5 23:39:44.088652 kubelet[2311]: E1105 23:39:44.088621 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 5 23:39:44.089541 kubelet[2311]: I1105 23:39:44.088628 2311 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 5 23:39:44.089757 kubelet[2311]: I1105 23:39:44.089728 2311 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 5 23:39:44.089757 kubelet[2311]: I1105 23:39:44.088239 2311 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 5 23:39:44.089969 kubelet[2311]: I1105 23:39:44.089945 2311 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 5 23:39:44.090023 kubelet[2311]: I1105 23:39:44.088892 2311 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 5 23:39:44.090169 kubelet[2311]: E1105 23:39:44.090142 2311 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 5 23:39:44.090723 kubelet[2311]: I1105 23:39:44.090692 2311 factory.go:223] Registration of the systemd container factory successfully
Nov 5 23:39:44.090872 kubelet[2311]: I1105 23:39:44.090843 2311 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 5 23:39:44.092658 kubelet[2311]: I1105 23:39:44.092624 2311 server.go:317] "Adding debug handlers to kubelet server"
Nov 5 23:39:44.094200 kubelet[2311]: E1105 23:39:44.090370 2311 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.43:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187540b70a6be2ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-05 23:39:44.083452652 +0000 UTC m=+0.845209041,LastTimestamp:2025-11-05 23:39:44.083452652 +0000 UTC m=+0.845209041,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 5 23:39:44.094529 kubelet[2311]: I1105 23:39:44.094418 2311 factory.go:223] Registration of the containerd container factory successfully
Nov 5 23:39:44.094874 kubelet[2311]: I1105 23:39:44.094848 2311 reconciler.go:26] "Reconciler: start to sync state"
Nov 5 23:39:44.096163 kubelet[2311]: E1105 23:39:44.096109 2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="200ms"
Nov 5 23:39:44.098448 kubelet[2311]: E1105 23:39:44.098413 2311 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 5 23:39:44.109411 kubelet[2311]: I1105 23:39:44.109057 2311 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 5 23:39:44.109411 kubelet[2311]: I1105 23:39:44.109076 2311 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 5 23:39:44.109411 kubelet[2311]: I1105 23:39:44.109097 2311 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 23:39:44.190510 kubelet[2311]: E1105 23:39:44.190432 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 5 23:39:44.197880 kubelet[2311]: I1105 23:39:44.197588 2311 policy_none.go:49] "None policy: Start"
Nov 5 23:39:44.197880 kubelet[2311]: I1105 23:39:44.197624 2311 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 5 23:39:44.197880 kubelet[2311]: I1105 23:39:44.197643 2311 state_mem.go:35] "Initializing new in-memory state store"
Nov 5 23:39:44.201357 kubelet[2311]: I1105 23:39:44.201279 2311 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 5 23:39:44.202769 kubelet[2311]: I1105 23:39:44.202737 2311 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 5 23:39:44.203474 kubelet[2311]: I1105 23:39:44.202911 2311 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 5 23:39:44.203474 kubelet[2311]: I1105 23:39:44.202943 2311 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 5 23:39:44.203474 kubelet[2311]: I1105 23:39:44.202952 2311 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 5 23:39:44.203474 kubelet[2311]: E1105 23:39:44.202999 2311 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 5 23:39:44.203822 kubelet[2311]: E1105 23:39:44.203795 2311 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 5 23:39:44.206911 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 5 23:39:44.219831 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 5 23:39:44.223207 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 5 23:39:44.243551 kubelet[2311]: E1105 23:39:44.243499 2311 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 5 23:39:44.243918 kubelet[2311]: I1105 23:39:44.243755 2311 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 5 23:39:44.243918 kubelet[2311]: I1105 23:39:44.243774 2311 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 5 23:39:44.244058 kubelet[2311]: I1105 23:39:44.244032 2311 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 5 23:39:44.250032 kubelet[2311]: E1105 23:39:44.249927 2311 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 5 23:39:44.250223 kubelet[2311]: E1105 23:39:44.250179 2311 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Nov 5 23:39:44.297099 kubelet[2311]: E1105 23:39:44.296967 2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="400ms"
Nov 5 23:39:44.317180 systemd[1]: Created slice kubepods-burstable-pod26bc0cc208c578f64b535830abec924f.slice - libcontainer container kubepods-burstable-pod26bc0cc208c578f64b535830abec924f.slice.
Nov 5 23:39:44.333413 kubelet[2311]: E1105 23:39:44.332437 2311 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 23:39:44.334971 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice.
Nov 5 23:39:44.345659 kubelet[2311]: I1105 23:39:44.345624 2311 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 23:39:44.346241 kubelet[2311]: E1105 23:39:44.346190 2311 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost"
Nov 5 23:39:44.353975 kubelet[2311]: E1105 23:39:44.353798 2311 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 23:39:44.356599 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice.
Nov 5 23:39:44.358484 kubelet[2311]: E1105 23:39:44.358447 2311 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 23:39:44.396790 kubelet[2311]: I1105 23:39:44.396697 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 23:39:44.396790 kubelet[2311]: I1105 23:39:44.396767 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 23:39:44.396790 kubelet[2311]: I1105 23:39:44.396792 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 23:39:44.397006 kubelet[2311]: I1105 23:39:44.396814 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/26bc0cc208c578f64b535830abec924f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"26bc0cc208c578f64b535830abec924f\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 23:39:44.397006 kubelet[2311]: I1105 23:39:44.396830 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 23:39:44.397006 kubelet[2311]: I1105 23:39:44.396847 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost"
Nov 5 23:39:44.397006 kubelet[2311]: I1105 23:39:44.396861 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/26bc0cc208c578f64b535830abec924f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"26bc0cc208c578f64b535830abec924f\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 23:39:44.397006 kubelet[2311]: I1105 23:39:44.396878 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/26bc0cc208c578f64b535830abec924f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"26bc0cc208c578f64b535830abec924f\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 23:39:44.397121 kubelet[2311]: I1105 23:39:44.396896 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 23:39:44.550527 kubelet[2311]: I1105 23:39:44.548533 2311 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 23:39:44.550527 kubelet[2311]: E1105 23:39:44.548972 2311 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost"
Nov 5 23:39:44.635023 containerd[1521]: time="2025-11-05T23:39:44.634941371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:26bc0cc208c578f64b535830abec924f,Namespace:kube-system,Attempt:0,}"
Nov 5 23:39:44.658215 containerd[1521]: time="2025-11-05T23:39:44.656386016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}"
Nov 5 23:39:44.659834 containerd[1521]: time="2025-11-05T23:39:44.659752614Z" level=info msg="connecting to shim 72a317ba411d1c4e9510ac528e27dc9acaabb3efd298b5b236f83be47dd5538d" address="unix:///run/containerd/s/4f112ab69646ca0b826ec98bb19e7d521ef7ceec7953c48918117c9fd042742a" namespace=k8s.io protocol=ttrpc version=3
Nov 5 23:39:44.660529 containerd[1521]: time="2025-11-05T23:39:44.660457206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}"
Nov 5 23:39:44.697683 kubelet[2311]: E1105 23:39:44.697622 2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="800ms"
Nov 5 23:39:44.698817 systemd[1]: Started cri-containerd-72a317ba411d1c4e9510ac528e27dc9acaabb3efd298b5b236f83be47dd5538d.scope - libcontainer container 72a317ba411d1c4e9510ac528e27dc9acaabb3efd298b5b236f83be47dd5538d.
Nov 5 23:39:44.699850 containerd[1521]: time="2025-11-05T23:39:44.699809492Z" level=info msg="connecting to shim 8cf74841b7478f53a99c4dec57427ce7a8c2c0a0f03725ec01be235d48cbb45d" address="unix:///run/containerd/s/55890aae76394a951f12d00f231803d8b893ed153da4a4ab5108fec6594c8e73" namespace=k8s.io protocol=ttrpc version=3
Nov 5 23:39:44.704082 containerd[1521]: time="2025-11-05T23:39:44.704007076Z" level=info msg="connecting to shim de13052659191086f526685804db9e0db93f43d42d074b6fe33fb6850cc77260" address="unix:///run/containerd/s/2df288fea6b31c8f812fe5ca5aedd73208eb2cf92e3d1d25535991952bd58dd9" namespace=k8s.io protocol=ttrpc version=3
Nov 5 23:39:44.730642 systemd[1]: Started cri-containerd-8cf74841b7478f53a99c4dec57427ce7a8c2c0a0f03725ec01be235d48cbb45d.scope - libcontainer container 8cf74841b7478f53a99c4dec57427ce7a8c2c0a0f03725ec01be235d48cbb45d.
Nov 5 23:39:44.733736 systemd[1]: Started cri-containerd-de13052659191086f526685804db9e0db93f43d42d074b6fe33fb6850cc77260.scope - libcontainer container de13052659191086f526685804db9e0db93f43d42d074b6fe33fb6850cc77260.
Nov 5 23:39:44.754301 containerd[1521]: time="2025-11-05T23:39:44.754240968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:26bc0cc208c578f64b535830abec924f,Namespace:kube-system,Attempt:0,} returns sandbox id \"72a317ba411d1c4e9510ac528e27dc9acaabb3efd298b5b236f83be47dd5538d\""
Nov 5 23:39:44.763424 containerd[1521]: time="2025-11-05T23:39:44.762999355Z" level=info msg="CreateContainer within sandbox \"72a317ba411d1c4e9510ac528e27dc9acaabb3efd298b5b236f83be47dd5538d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 5 23:39:44.781367 containerd[1521]: time="2025-11-05T23:39:44.781314387Z" level=info msg="Container 36345d3152bf867d2f4d72fffcd78c9416501776771273ff6fcfaa54670d3e82: CDI devices from CRI Config.CDIDevices: []"
Nov 5 23:39:44.784046 containerd[1521]: time="2025-11-05T23:39:44.783991557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cf74841b7478f53a99c4dec57427ce7a8c2c0a0f03725ec01be235d48cbb45d\""
Nov 5 23:39:44.790421 containerd[1521]: time="2025-11-05T23:39:44.790317361Z" level=info msg="CreateContainer within sandbox \"72a317ba411d1c4e9510ac528e27dc9acaabb3efd298b5b236f83be47dd5538d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"36345d3152bf867d2f4d72fffcd78c9416501776771273ff6fcfaa54670d3e82\""
Nov 5 23:39:44.790858 containerd[1521]: time="2025-11-05T23:39:44.790826940Z" level=info msg="CreateContainer within sandbox \"8cf74841b7478f53a99c4dec57427ce7a8c2c0a0f03725ec01be235d48cbb45d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 5 23:39:44.792880 containerd[1521]: time="2025-11-05T23:39:44.792114131Z" level=info msg="StartContainer for \"36345d3152bf867d2f4d72fffcd78c9416501776771273ff6fcfaa54670d3e82\""
Nov 5 23:39:44.794598 containerd[1521]: time="2025-11-05T23:39:44.794552436Z" level=info msg="connecting to shim 36345d3152bf867d2f4d72fffcd78c9416501776771273ff6fcfaa54670d3e82" address="unix:///run/containerd/s/4f112ab69646ca0b826ec98bb19e7d521ef7ceec7953c48918117c9fd042742a" protocol=ttrpc version=3
Nov 5 23:39:44.795723 containerd[1521]: time="2025-11-05T23:39:44.795608644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"de13052659191086f526685804db9e0db93f43d42d074b6fe33fb6850cc77260\""
Nov 5 23:39:44.801406 containerd[1521]: time="2025-11-05T23:39:44.801082296Z" level=info msg="CreateContainer within sandbox \"de13052659191086f526685804db9e0db93f43d42d074b6fe33fb6850cc77260\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 5 23:39:44.804696 containerd[1521]: time="2025-11-05T23:39:44.804652149Z" level=info msg="Container 71eb57b94133c7f80c8a6f5d6cee9e1ff87b10bb9c52b794594e0cf416c836bd: CDI devices from CRI Config.CDIDevices: []"
Nov 5 23:39:44.815363 containerd[1521]: time="2025-11-05T23:39:44.815305132Z" level=info msg="Container ea52fe252fc7a9af0633f3f63d138efa78e580246b056577f7c0c180b67bffe0: CDI devices from CRI Config.CDIDevices: []"
Nov 5 23:39:44.819619 systemd[1]: Started cri-containerd-36345d3152bf867d2f4d72fffcd78c9416501776771273ff6fcfaa54670d3e82.scope - libcontainer container 36345d3152bf867d2f4d72fffcd78c9416501776771273ff6fcfaa54670d3e82.
Nov 5 23:39:44.820961 containerd[1521]: time="2025-11-05T23:39:44.820909580Z" level=info msg="CreateContainer within sandbox \"8cf74841b7478f53a99c4dec57427ce7a8c2c0a0f03725ec01be235d48cbb45d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"71eb57b94133c7f80c8a6f5d6cee9e1ff87b10bb9c52b794594e0cf416c836bd\""
Nov 5 23:39:44.821936 containerd[1521]: time="2025-11-05T23:39:44.821897649Z" level=info msg="StartContainer for \"71eb57b94133c7f80c8a6f5d6cee9e1ff87b10bb9c52b794594e0cf416c836bd\""
Nov 5 23:39:44.823801 containerd[1521]: time="2025-11-05T23:39:44.823731829Z" level=info msg="connecting to shim 71eb57b94133c7f80c8a6f5d6cee9e1ff87b10bb9c52b794594e0cf416c836bd" address="unix:///run/containerd/s/55890aae76394a951f12d00f231803d8b893ed153da4a4ab5108fec6594c8e73" protocol=ttrpc version=3
Nov 5 23:39:44.826219 containerd[1521]: time="2025-11-05T23:39:44.826164292Z" level=info msg="CreateContainer within sandbox \"de13052659191086f526685804db9e0db93f43d42d074b6fe33fb6850cc77260\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ea52fe252fc7a9af0633f3f63d138efa78e580246b056577f7c0c180b67bffe0\""
Nov 5 23:39:44.827496 containerd[1521]: time="2025-11-05T23:39:44.827455004Z" level=info msg="StartContainer for \"ea52fe252fc7a9af0633f3f63d138efa78e580246b056577f7c0c180b67bffe0\""
Nov 5 23:39:44.829454 containerd[1521]: time="2025-11-05T23:39:44.828658932Z" level=info msg="connecting to shim ea52fe252fc7a9af0633f3f63d138efa78e580246b056577f7c0c180b67bffe0" address="unix:///run/containerd/s/2df288fea6b31c8f812fe5ca5aedd73208eb2cf92e3d1d25535991952bd58dd9" protocol=ttrpc version=3
Nov 5 23:39:44.852688 systemd[1]: Started cri-containerd-71eb57b94133c7f80c8a6f5d6cee9e1ff87b10bb9c52b794594e0cf416c836bd.scope - libcontainer container 71eb57b94133c7f80c8a6f5d6cee9e1ff87b10bb9c52b794594e0cf416c836bd.
Nov 5 23:39:44.857313 systemd[1]: Started cri-containerd-ea52fe252fc7a9af0633f3f63d138efa78e580246b056577f7c0c180b67bffe0.scope - libcontainer container ea52fe252fc7a9af0633f3f63d138efa78e580246b056577f7c0c180b67bffe0.
Nov 5 23:39:44.878230 containerd[1521]: time="2025-11-05T23:39:44.878028189Z" level=info msg="StartContainer for \"36345d3152bf867d2f4d72fffcd78c9416501776771273ff6fcfaa54670d3e82\" returns successfully"
Nov 5 23:39:44.911374 containerd[1521]: time="2025-11-05T23:39:44.911303859Z" level=info msg="StartContainer for \"71eb57b94133c7f80c8a6f5d6cee9e1ff87b10bb9c52b794594e0cf416c836bd\" returns successfully"
Nov 5 23:39:44.920073 containerd[1521]: time="2025-11-05T23:39:44.919856550Z" level=info msg="StartContainer for \"ea52fe252fc7a9af0633f3f63d138efa78e580246b056577f7c0c180b67bffe0\" returns successfully"
Nov 5 23:39:44.950994 kubelet[2311]: I1105 23:39:44.950929 2311 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 23:39:44.951462 kubelet[2311]: E1105 23:39:44.951347 2311 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost"
Nov 5 23:39:45.215973 kubelet[2311]: E1105 23:39:45.215269 2311 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 23:39:45.216234 kubelet[2311]: E1105 23:39:45.216199 2311 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 23:39:45.219884 kubelet[2311]: E1105 23:39:45.219854 2311 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 23:39:45.753847 kubelet[2311]: I1105 23:39:45.753810 2311 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 23:39:46.226310 kubelet[2311]: E1105 23:39:46.226271 2311 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 23:39:46.226660 kubelet[2311]: E1105 23:39:46.226637 2311 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 23:39:46.311136 kubelet[2311]: E1105 23:39:46.311087 2311 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Nov 5 23:39:46.440150 kubelet[2311]: I1105 23:39:46.440103 2311 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 5 23:39:46.489949 kubelet[2311]: I1105 23:39:46.489447 2311 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 5 23:39:46.501545 kubelet[2311]: E1105 23:39:46.501488 2311 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Nov 5 23:39:46.501545 kubelet[2311]: I1105 23:39:46.501532 2311 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 5 23:39:46.505166 kubelet[2311]: E1105 23:39:46.505113 2311 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Nov 5 23:39:46.505166 kubelet[2311]: I1105 23:39:46.505152 2311 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 5 23:39:46.508988 kubelet[2311]: E1105 23:39:46.508925 2311 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Nov 5 23:39:47.079974 kubelet[2311]: I1105 23:39:47.079939 2311 apiserver.go:52] "Watching apiserver"
Nov 5 23:39:47.090314 kubelet[2311]: I1105 23:39:47.090258 2311 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 5 23:39:48.046896 kubelet[2311]: I1105 23:39:48.046380 2311 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 5 23:39:48.677215 systemd[1]: Reload requested from client PID 2598 ('systemctl') (unit session-7.scope)...
Nov 5 23:39:48.677233 systemd[1]: Reloading...
Nov 5 23:39:48.752452 zram_generator::config[2644]: No configuration found.
Nov 5 23:39:48.932878 systemd[1]: Reloading finished in 255 ms.
Nov 5 23:39:48.964014 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 23:39:48.987918 systemd[1]: kubelet.service: Deactivated successfully.
Nov 5 23:39:48.988208 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 23:39:48.988281 systemd[1]: kubelet.service: Consumed 1.280s CPU time, 128.4M memory peak.
Nov 5 23:39:48.990441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 23:39:49.162667 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 23:39:49.167760 (kubelet)[2683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 5 23:39:49.208031 kubelet[2683]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 23:39:49.208031 kubelet[2683]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 5 23:39:49.208031 kubelet[2683]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 23:39:49.208424 kubelet[2683]: I1105 23:39:49.208000 2683 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 5 23:39:49.216511 kubelet[2683]: I1105 23:39:49.216457 2683 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 5 23:39:49.216511 kubelet[2683]: I1105 23:39:49.216497 2683 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 5 23:39:49.216811 kubelet[2683]: I1105 23:39:49.216784 2683 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 5 23:39:49.219590 kubelet[2683]: I1105 23:39:49.219112 2683 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Nov 5 23:39:49.225437 kubelet[2683]: I1105 23:39:49.225372 2683 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 5 23:39:49.230023 kubelet[2683]: I1105 23:39:49.229984 2683 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 5 23:39:49.233489 kubelet[2683]: I1105 23:39:49.232937 2683 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 5 23:39:49.233489 kubelet[2683]: I1105 23:39:49.233200 2683 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 5 23:39:49.233489 kubelet[2683]: I1105 23:39:49.233230 2683 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 5 23:39:49.233489 kubelet[2683]: I1105 23:39:49.233457 2683 topology_manager.go:138] "Creating topology manager with none policy"
Nov 5 23:39:49.233789 kubelet[2683]: I1105 23:39:49.233468 2683 container_manager_linux.go:303] "Creating device plugin manager"
Nov 5 23:39:49.233789 kubelet[2683]: I1105 23:39:49.233527 2683 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 23:39:49.233789 kubelet[2683]: I1105 23:39:49.233692 2683 kubelet.go:480] "Attempting to sync node with API server"
Nov 5 23:39:49.233789 kubelet[2683]: I1105 23:39:49.233708 2683 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 5 23:39:49.233789 kubelet[2683]: I1105 23:39:49.233731 2683 kubelet.go:386] "Adding apiserver pod source"
Nov 5 23:39:49.233789 kubelet[2683]: I1105 23:39:49.233746 2683 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 5 23:39:49.234838 kubelet[2683]: I1105 23:39:49.234769 2683 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 5 23:39:49.235771 kubelet[2683]: I1105 23:39:49.235738 2683 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 5 23:39:49.239148 kubelet[2683]: I1105 23:39:49.239117 2683 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 5 23:39:49.239279 kubelet[2683]: I1105 23:39:49.239174 2683 server.go:1289] "Started kubelet"
Nov 5 23:39:49.245630 kubelet[2683]: I1105 23:39:49.245555 2683 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 5 23:39:49.245630 kubelet[2683]: I1105 23:39:49.245521 2683 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 5 23:39:49.246411 kubelet[2683]: I1105 23:39:49.245448 2683 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 5 23:39:49.248925 kubelet[2683]: I1105 23:39:49.248899 2683 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 5 23:39:49.249815 kubelet[2683]: I1105 23:39:49.249051 2683 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 5 23:39:49.249929 kubelet[2683]: E1105 23:39:49.249093 2683 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 5 23:39:49.250485 kubelet[2683]: I1105 23:39:49.249231 2683 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 5 23:39:49.250751 kubelet[2683]: I1105 23:39:49.250280 2683 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 5 23:39:49.252410 kubelet[2683]: I1105 23:39:49.252223 2683 server.go:317] "Adding debug handlers to kubelet server"
Nov 5 23:39:49.253254 kubelet[2683]: I1105 23:39:49.253229 2683 reconciler.go:26] "Reconciler: start to sync state"
Nov 5 23:39:49.267159 kubelet[2683]: I1105 23:39:49.267111 2683 factory.go:223] Registration of the systemd container factory successfully
Nov 5 23:39:49.267315 kubelet[2683]: I1105 23:39:49.267230 2683 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 5 23:39:49.269197 kubelet[2683]: I1105 23:39:49.269033 2683 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 5 23:39:49.270123 kubelet[2683]: E1105 23:39:49.270068 2683 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 5 23:39:49.270326 kubelet[2683]: I1105 23:39:49.270304 2683 factory.go:223] Registration of the containerd container factory successfully
Nov 5 23:39:49.270651 kubelet[2683]: I1105 23:39:49.270627 2683 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 5 23:39:49.270849 kubelet[2683]: I1105 23:39:49.270768 2683 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 5 23:39:49.270849 kubelet[2683]: I1105 23:39:49.270794 2683 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 5 23:39:49.270849 kubelet[2683]: I1105 23:39:49.270804 2683 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 5 23:39:49.271015 kubelet[2683]: E1105 23:39:49.270972 2683 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 5 23:39:49.309149 kubelet[2683]: I1105 23:39:49.309114 2683 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 5 23:39:49.309149 kubelet[2683]: I1105 23:39:49.309139 2683 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 5 23:39:49.309149 kubelet[2683]: I1105 23:39:49.309164 2683 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 23:39:49.309330 kubelet[2683]: I1105 23:39:49.309309 2683 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 5 23:39:49.309330 kubelet[2683]: I1105 23:39:49.309318 2683 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 5 23:39:49.309371 kubelet[2683]: I1105 23:39:49.309336 2683 policy_none.go:49] "None policy: Start"
Nov 5 23:39:49.309371 kubelet[2683]: I1105 23:39:49.309345 2683 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 5 23:39:49.309371 kubelet[2683]: I1105 23:39:49.309353 2683 state_mem.go:35] "Initializing new in-memory state store"
Nov 5 23:39:49.309481 kubelet[2683]: I1105 23:39:49.309457 2683 state_mem.go:75] "Updated machine memory state"
Nov 5 23:39:49.313817 kubelet[2683]: E1105 23:39:49.313668 2683 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 5 23:39:49.314124 kubelet[2683]: I1105 23:39:49.313947 2683 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 5 23:39:49.314124 kubelet[2683]: I1105 23:39:49.313984 2683 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 5 23:39:49.314200 kubelet[2683]: I1105 23:39:49.314149 2683 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 5 23:39:49.316219 kubelet[2683]: E1105 23:39:49.316191 2683 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 5 23:39:49.372861 kubelet[2683]: I1105 23:39:49.372689 2683 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 5 23:39:49.372861 kubelet[2683]: I1105 23:39:49.372828 2683 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 5 23:39:49.373005 kubelet[2683]: I1105 23:39:49.372836 2683 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 5 23:39:49.379992 kubelet[2683]: E1105 23:39:49.379944 2683 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Nov 5 23:39:49.417301 kubelet[2683]: I1105 23:39:49.417273 2683 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 23:39:49.426033 kubelet[2683]: I1105 23:39:49.425989 2683 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Nov 5 23:39:49.426190 kubelet[2683]: I1105 23:39:49.426118 2683 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 5 23:39:49.454557 kubelet[2683]: I1105 23:39:49.454504 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/26bc0cc208c578f64b535830abec924f-ca-certs\") pod \"kube-apiserver-localhost\"
(UID: \"26bc0cc208c578f64b535830abec924f\") " pod="kube-system/kube-apiserver-localhost" Nov 5 23:39:49.454557 kubelet[2683]: I1105 23:39:49.454553 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/26bc0cc208c578f64b535830abec924f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"26bc0cc208c578f64b535830abec924f\") " pod="kube-system/kube-apiserver-localhost" Nov 5 23:39:49.454557 kubelet[2683]: I1105 23:39:49.454574 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/26bc0cc208c578f64b535830abec924f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"26bc0cc208c578f64b535830abec924f\") " pod="kube-system/kube-apiserver-localhost" Nov 5 23:39:49.454748 kubelet[2683]: I1105 23:39:49.454594 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 23:39:49.454748 kubelet[2683]: I1105 23:39:49.454612 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 23:39:49.454748 kubelet[2683]: I1105 23:39:49.454629 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 23:39:49.454748 kubelet[2683]: I1105 23:39:49.454644 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 23:39:49.454748 kubelet[2683]: I1105 23:39:49.454659 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 23:39:49.454865 kubelet[2683]: I1105 23:39:49.454675 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 5 23:39:50.235443 kubelet[2683]: I1105 23:39:50.235380 2683 apiserver.go:52] "Watching apiserver" Nov 5 23:39:50.250519 kubelet[2683]: I1105 23:39:50.250473 2683 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 23:39:50.294149 kubelet[2683]: I1105 23:39:50.294002 2683 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 23:39:50.294513 kubelet[2683]: I1105 23:39:50.294114 2683 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 23:39:50.294513 kubelet[2683]: I1105 23:39:50.294206 2683 kubelet.go:3309] "Creating a mirror pod for 
static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 23:39:50.302449 kubelet[2683]: E1105 23:39:50.302281 2683 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 5 23:39:50.303490 kubelet[2683]: E1105 23:39:50.303453 2683 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 5 23:39:50.303644 kubelet[2683]: E1105 23:39:50.303449 2683 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 5 23:39:50.341283 kubelet[2683]: I1105 23:39:50.340800 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.340779176 podStartE2EDuration="1.340779176s" podCreationTimestamp="2025-11-05 23:39:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 23:39:50.320898377 +0000 UTC m=+1.148790521" watchObservedRunningTime="2025-11-05 23:39:50.340779176 +0000 UTC m=+1.168671320" Nov 5 23:39:50.350003 kubelet[2683]: I1105 23:39:50.349929 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.3499104659999999 podStartE2EDuration="1.349910466s" podCreationTimestamp="2025-11-05 23:39:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 23:39:50.340739609 +0000 UTC m=+1.168631753" watchObservedRunningTime="2025-11-05 23:39:50.349910466 +0000 UTC m=+1.177802610" Nov 5 23:39:50.363208 kubelet[2683]: I1105 23:39:50.362770 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.362749122 podStartE2EDuration="2.362749122s" podCreationTimestamp="2025-11-05 23:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 23:39:50.350252929 +0000 UTC m=+1.178145073" watchObservedRunningTime="2025-11-05 23:39:50.362749122 +0000 UTC m=+1.190641266" Nov 5 23:39:54.272275 kubelet[2683]: I1105 23:39:54.272229 2683 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 5 23:39:54.273733 kubelet[2683]: I1105 23:39:54.273015 2683 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 23:39:54.273806 containerd[1521]: time="2025-11-05T23:39:54.272694829Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 5 23:39:54.789411 kubelet[2683]: I1105 23:39:54.789286 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gffjh\" (UniqueName: \"kubernetes.io/projected/8517c07f-d600-4acc-bbe3-cbb80ded45d6-kube-api-access-gffjh\") pod \"kube-proxy-5qmgz\" (UID: \"8517c07f-d600-4acc-bbe3-cbb80ded45d6\") " pod="kube-system/kube-proxy-5qmgz" Nov 5 23:39:54.789411 kubelet[2683]: I1105 23:39:54.789327 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8517c07f-d600-4acc-bbe3-cbb80ded45d6-kube-proxy\") pod \"kube-proxy-5qmgz\" (UID: \"8517c07f-d600-4acc-bbe3-cbb80ded45d6\") " pod="kube-system/kube-proxy-5qmgz" Nov 5 23:39:54.789411 kubelet[2683]: I1105 23:39:54.789347 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8517c07f-d600-4acc-bbe3-cbb80ded45d6-xtables-lock\") pod \"kube-proxy-5qmgz\" (UID: 
\"8517c07f-d600-4acc-bbe3-cbb80ded45d6\") " pod="kube-system/kube-proxy-5qmgz" Nov 5 23:39:54.789411 kubelet[2683]: I1105 23:39:54.789363 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8517c07f-d600-4acc-bbe3-cbb80ded45d6-lib-modules\") pod \"kube-proxy-5qmgz\" (UID: \"8517c07f-d600-4acc-bbe3-cbb80ded45d6\") " pod="kube-system/kube-proxy-5qmgz" Nov 5 23:39:54.790075 systemd[1]: Created slice kubepods-besteffort-pod8517c07f_d600_4acc_bbe3_cbb80ded45d6.slice - libcontainer container kubepods-besteffort-pod8517c07f_d600_4acc_bbe3_cbb80ded45d6.slice. Nov 5 23:39:54.898241 kubelet[2683]: E1105 23:39:54.898189 2683 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 5 23:39:54.898241 kubelet[2683]: E1105 23:39:54.898227 2683 projected.go:194] Error preparing data for projected volume kube-api-access-gffjh for pod kube-system/kube-proxy-5qmgz: configmap "kube-root-ca.crt" not found Nov 5 23:39:54.898432 kubelet[2683]: E1105 23:39:54.898288 2683 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8517c07f-d600-4acc-bbe3-cbb80ded45d6-kube-api-access-gffjh podName:8517c07f-d600-4acc-bbe3-cbb80ded45d6 nodeName:}" failed. No retries permitted until 2025-11-05 23:39:55.398266147 +0000 UTC m=+6.226158291 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gffjh" (UniqueName: "kubernetes.io/projected/8517c07f-d600-4acc-bbe3-cbb80ded45d6-kube-api-access-gffjh") pod "kube-proxy-5qmgz" (UID: "8517c07f-d600-4acc-bbe3-cbb80ded45d6") : configmap "kube-root-ca.crt" not found Nov 5 23:39:55.516737 systemd[1]: Created slice kubepods-besteffort-podc8980c5e_aaaa_40cb_a82e_26076b167cbe.slice - libcontainer container kubepods-besteffort-podc8980c5e_aaaa_40cb_a82e_26076b167cbe.slice. 
Nov 5 23:39:55.595727 kubelet[2683]: I1105 23:39:55.595657 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mrzf\" (UniqueName: \"kubernetes.io/projected/c8980c5e-aaaa-40cb-a82e-26076b167cbe-kube-api-access-8mrzf\") pod \"tigera-operator-7dcd859c48-mnww7\" (UID: \"c8980c5e-aaaa-40cb-a82e-26076b167cbe\") " pod="tigera-operator/tigera-operator-7dcd859c48-mnww7" Nov 5 23:39:55.595727 kubelet[2683]: I1105 23:39:55.595723 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c8980c5e-aaaa-40cb-a82e-26076b167cbe-var-lib-calico\") pod \"tigera-operator-7dcd859c48-mnww7\" (UID: \"c8980c5e-aaaa-40cb-a82e-26076b167cbe\") " pod="tigera-operator/tigera-operator-7dcd859c48-mnww7" Nov 5 23:39:55.706018 containerd[1521]: time="2025-11-05T23:39:55.705697773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5qmgz,Uid:8517c07f-d600-4acc-bbe3-cbb80ded45d6,Namespace:kube-system,Attempt:0,}" Nov 5 23:39:55.728967 containerd[1521]: time="2025-11-05T23:39:55.728904763Z" level=info msg="connecting to shim 23b97cc988268f6f20820d817b74ba088cd51304aad70187e087e0ba2d7b5e1d" address="unix:///run/containerd/s/4878e6a4bbfffe499efb0202775795c8e60e3a59cbda20e9caf27d3a2a5d6667" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:39:55.757644 systemd[1]: Started cri-containerd-23b97cc988268f6f20820d817b74ba088cd51304aad70187e087e0ba2d7b5e1d.scope - libcontainer container 23b97cc988268f6f20820d817b74ba088cd51304aad70187e087e0ba2d7b5e1d. 
Nov 5 23:39:55.782858 containerd[1521]: time="2025-11-05T23:39:55.782735099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5qmgz,Uid:8517c07f-d600-4acc-bbe3-cbb80ded45d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"23b97cc988268f6f20820d817b74ba088cd51304aad70187e087e0ba2d7b5e1d\"" Nov 5 23:39:55.789251 containerd[1521]: time="2025-11-05T23:39:55.789201845Z" level=info msg="CreateContainer within sandbox \"23b97cc988268f6f20820d817b74ba088cd51304aad70187e087e0ba2d7b5e1d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 23:39:55.799433 containerd[1521]: time="2025-11-05T23:39:55.798985357Z" level=info msg="Container 5369de9d68b02fb67bb98266f3a9f88effb84db41d1318fa0748e0219910b7f8: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:39:55.806345 containerd[1521]: time="2025-11-05T23:39:55.806281295Z" level=info msg="CreateContainer within sandbox \"23b97cc988268f6f20820d817b74ba088cd51304aad70187e087e0ba2d7b5e1d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5369de9d68b02fb67bb98266f3a9f88effb84db41d1318fa0748e0219910b7f8\"" Nov 5 23:39:55.807281 containerd[1521]: time="2025-11-05T23:39:55.807246664Z" level=info msg="StartContainer for \"5369de9d68b02fb67bb98266f3a9f88effb84db41d1318fa0748e0219910b7f8\"" Nov 5 23:39:55.809147 containerd[1521]: time="2025-11-05T23:39:55.809099672Z" level=info msg="connecting to shim 5369de9d68b02fb67bb98266f3a9f88effb84db41d1318fa0748e0219910b7f8" address="unix:///run/containerd/s/4878e6a4bbfffe499efb0202775795c8e60e3a59cbda20e9caf27d3a2a5d6667" protocol=ttrpc version=3 Nov 5 23:39:55.823095 containerd[1521]: time="2025-11-05T23:39:55.823043581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-mnww7,Uid:c8980c5e-aaaa-40cb-a82e-26076b167cbe,Namespace:tigera-operator,Attempt:0,}" Nov 5 23:39:55.829639 systemd[1]: Started cri-containerd-5369de9d68b02fb67bb98266f3a9f88effb84db41d1318fa0748e0219910b7f8.scope - 
libcontainer container 5369de9d68b02fb67bb98266f3a9f88effb84db41d1318fa0748e0219910b7f8. Nov 5 23:39:55.855350 containerd[1521]: time="2025-11-05T23:39:55.855161606Z" level=info msg="connecting to shim 15c8f34f189050679065d465a0fbc1f8c949dbbf46a08b120628ee22096dbbf5" address="unix:///run/containerd/s/27e030222441a082ee246fb3f43a55aadefff7e3579f19dc9eef5b760d3c0b4a" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:39:55.880622 systemd[1]: Started cri-containerd-15c8f34f189050679065d465a0fbc1f8c949dbbf46a08b120628ee22096dbbf5.scope - libcontainer container 15c8f34f189050679065d465a0fbc1f8c949dbbf46a08b120628ee22096dbbf5. Nov 5 23:39:55.888056 containerd[1521]: time="2025-11-05T23:39:55.888014450Z" level=info msg="StartContainer for \"5369de9d68b02fb67bb98266f3a9f88effb84db41d1318fa0748e0219910b7f8\" returns successfully" Nov 5 23:39:55.918457 containerd[1521]: time="2025-11-05T23:39:55.918362797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-mnww7,Uid:c8980c5e-aaaa-40cb-a82e-26076b167cbe,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"15c8f34f189050679065d465a0fbc1f8c949dbbf46a08b120628ee22096dbbf5\"" Nov 5 23:39:55.920376 containerd[1521]: time="2025-11-05T23:39:55.920332862Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 5 23:39:57.047232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2522969620.mount: Deactivated successfully. 
Nov 5 23:39:57.378374 containerd[1521]: time="2025-11-05T23:39:57.377875610Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:39:57.378991 containerd[1521]: time="2025-11-05T23:39:57.378941976Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 5 23:39:57.381576 containerd[1521]: time="2025-11-05T23:39:57.381548883Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:39:57.393085 containerd[1521]: time="2025-11-05T23:39:57.392977909Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:39:57.393749 containerd[1521]: time="2025-11-05T23:39:57.393706635Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.473329728s" Nov 5 23:39:57.393749 containerd[1521]: time="2025-11-05T23:39:57.393735559Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 5 23:39:57.399931 containerd[1521]: time="2025-11-05T23:39:57.399871802Z" level=info msg="CreateContainer within sandbox \"15c8f34f189050679065d465a0fbc1f8c949dbbf46a08b120628ee22096dbbf5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 5 23:39:57.410298 containerd[1521]: time="2025-11-05T23:39:57.410227782Z" level=info msg="Container 
add028e01363d94f9329a735ce80689625c7d17c0c0c7c1cd784d41cde737375: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:39:57.416765 containerd[1521]: time="2025-11-05T23:39:57.416710985Z" level=info msg="CreateContainer within sandbox \"15c8f34f189050679065d465a0fbc1f8c949dbbf46a08b120628ee22096dbbf5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"add028e01363d94f9329a735ce80689625c7d17c0c0c7c1cd784d41cde737375\"" Nov 5 23:39:57.417759 containerd[1521]: time="2025-11-05T23:39:57.417407467Z" level=info msg="StartContainer for \"add028e01363d94f9329a735ce80689625c7d17c0c0c7c1cd784d41cde737375\"" Nov 5 23:39:57.419174 containerd[1521]: time="2025-11-05T23:39:57.419134431Z" level=info msg="connecting to shim add028e01363d94f9329a735ce80689625c7d17c0c0c7c1cd784d41cde737375" address="unix:///run/containerd/s/27e030222441a082ee246fb3f43a55aadefff7e3579f19dc9eef5b760d3c0b4a" protocol=ttrpc version=3 Nov 5 23:39:57.440688 systemd[1]: Started cri-containerd-add028e01363d94f9329a735ce80689625c7d17c0c0c7c1cd784d41cde737375.scope - libcontainer container add028e01363d94f9329a735ce80689625c7d17c0c0c7c1cd784d41cde737375. 
Nov 5 23:39:57.475646 containerd[1521]: time="2025-11-05T23:39:57.475588561Z" level=info msg="StartContainer for \"add028e01363d94f9329a735ce80689625c7d17c0c0c7c1cd784d41cde737375\" returns successfully" Nov 5 23:39:58.325640 kubelet[2683]: I1105 23:39:58.325537 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5qmgz" podStartSLOduration=4.3253834 podStartE2EDuration="4.3253834s" podCreationTimestamp="2025-11-05 23:39:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 23:39:56.32371843 +0000 UTC m=+7.151610574" watchObservedRunningTime="2025-11-05 23:39:58.3253834 +0000 UTC m=+9.153275544" Nov 5 23:40:02.494756 update_engine[1508]: I20251105 23:40:02.494673 1508 update_attempter.cc:509] Updating boot flags... Nov 5 23:40:02.923056 kubelet[2683]: I1105 23:40:02.922987 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-mnww7" podStartSLOduration=6.447774692 podStartE2EDuration="7.92297321s" podCreationTimestamp="2025-11-05 23:39:55 +0000 UTC" firstStartedPulling="2025-11-05 23:39:55.919760625 +0000 UTC m=+6.747652769" lastFinishedPulling="2025-11-05 23:39:57.394959143 +0000 UTC m=+8.222851287" observedRunningTime="2025-11-05 23:39:58.325829129 +0000 UTC m=+9.153721273" watchObservedRunningTime="2025-11-05 23:40:02.92297321 +0000 UTC m=+13.750865354" Nov 5 23:40:03.146275 sudo[1736]: pam_unix(sudo:session): session closed for user root Nov 5 23:40:03.149329 sshd[1735]: Connection closed by 10.0.0.1 port 32956 Nov 5 23:40:03.149922 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Nov 5 23:40:03.156872 systemd[1]: sshd@6-10.0.0.43:22-10.0.0.1:32956.service: Deactivated successfully. Nov 5 23:40:03.158840 systemd[1]: session-7.scope: Deactivated successfully. 
Nov 5 23:40:03.161477 systemd[1]: session-7.scope: Consumed 8.424s CPU time, 227.1M memory peak. Nov 5 23:40:03.162754 systemd-logind[1503]: Session 7 logged out. Waiting for processes to exit. Nov 5 23:40:03.165114 systemd-logind[1503]: Removed session 7. Nov 5 23:40:10.186294 systemd[1]: Created slice kubepods-besteffort-pod98b8f19c_afbf_4c86_8bf2_c45172a8849f.slice - libcontainer container kubepods-besteffort-pod98b8f19c_afbf_4c86_8bf2_c45172a8849f.slice. Nov 5 23:40:10.288221 kubelet[2683]: I1105 23:40:10.288161 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98b8f19c-afbf-4c86-8bf2-c45172a8849f-tigera-ca-bundle\") pod \"calico-typha-75d4788f67-r25b7\" (UID: \"98b8f19c-afbf-4c86-8bf2-c45172a8849f\") " pod="calico-system/calico-typha-75d4788f67-r25b7" Nov 5 23:40:10.288221 kubelet[2683]: I1105 23:40:10.288225 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/98b8f19c-afbf-4c86-8bf2-c45172a8849f-typha-certs\") pod \"calico-typha-75d4788f67-r25b7\" (UID: \"98b8f19c-afbf-4c86-8bf2-c45172a8849f\") " pod="calico-system/calico-typha-75d4788f67-r25b7" Nov 5 23:40:10.288646 kubelet[2683]: I1105 23:40:10.288261 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4ltn\" (UniqueName: \"kubernetes.io/projected/98b8f19c-afbf-4c86-8bf2-c45172a8849f-kube-api-access-l4ltn\") pod \"calico-typha-75d4788f67-r25b7\" (UID: \"98b8f19c-afbf-4c86-8bf2-c45172a8849f\") " pod="calico-system/calico-typha-75d4788f67-r25b7" Nov 5 23:40:10.344155 systemd[1]: Created slice kubepods-besteffort-pod4c1a1eae_f16a_407a_99da_fd5f58db4aa9.slice - libcontainer container kubepods-besteffort-pod4c1a1eae_f16a_407a_99da_fd5f58db4aa9.slice. 
Nov 5 23:40:10.491195 kubelet[2683]: I1105 23:40:10.491024 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4c1a1eae-f16a-407a-99da-fd5f58db4aa9-node-certs\") pod \"calico-node-nwvzt\" (UID: \"4c1a1eae-f16a-407a-99da-fd5f58db4aa9\") " pod="calico-system/calico-node-nwvzt" Nov 5 23:40:10.491195 kubelet[2683]: I1105 23:40:10.491073 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4c1a1eae-f16a-407a-99da-fd5f58db4aa9-policysync\") pod \"calico-node-nwvzt\" (UID: \"4c1a1eae-f16a-407a-99da-fd5f58db4aa9\") " pod="calico-system/calico-node-nwvzt" Nov 5 23:40:10.491195 kubelet[2683]: I1105 23:40:10.491123 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c1a1eae-f16a-407a-99da-fd5f58db4aa9-lib-modules\") pod \"calico-node-nwvzt\" (UID: \"4c1a1eae-f16a-407a-99da-fd5f58db4aa9\") " pod="calico-system/calico-node-nwvzt" Nov 5 23:40:10.491195 kubelet[2683]: I1105 23:40:10.491178 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4c1a1eae-f16a-407a-99da-fd5f58db4aa9-cni-log-dir\") pod \"calico-node-nwvzt\" (UID: \"4c1a1eae-f16a-407a-99da-fd5f58db4aa9\") " pod="calico-system/calico-node-nwvzt" Nov 5 23:40:10.491534 kubelet[2683]: I1105 23:40:10.491208 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcllk\" (UniqueName: \"kubernetes.io/projected/4c1a1eae-f16a-407a-99da-fd5f58db4aa9-kube-api-access-jcllk\") pod \"calico-node-nwvzt\" (UID: \"4c1a1eae-f16a-407a-99da-fd5f58db4aa9\") " pod="calico-system/calico-node-nwvzt" Nov 5 23:40:10.491581 kubelet[2683]: I1105 23:40:10.491565 2683 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4c1a1eae-f16a-407a-99da-fd5f58db4aa9-var-lib-calico\") pod \"calico-node-nwvzt\" (UID: \"4c1a1eae-f16a-407a-99da-fd5f58db4aa9\") " pod="calico-system/calico-node-nwvzt" Nov 5 23:40:10.491606 kubelet[2683]: I1105 23:40:10.491587 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4c1a1eae-f16a-407a-99da-fd5f58db4aa9-flexvol-driver-host\") pod \"calico-node-nwvzt\" (UID: \"4c1a1eae-f16a-407a-99da-fd5f58db4aa9\") " pod="calico-system/calico-node-nwvzt" Nov 5 23:40:10.491606 kubelet[2683]: I1105 23:40:10.491604 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4c1a1eae-f16a-407a-99da-fd5f58db4aa9-cni-bin-dir\") pod \"calico-node-nwvzt\" (UID: \"4c1a1eae-f16a-407a-99da-fd5f58db4aa9\") " pod="calico-system/calico-node-nwvzt" Nov 5 23:40:10.491606 kubelet[2683]: I1105 23:40:10.491618 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4c1a1eae-f16a-407a-99da-fd5f58db4aa9-cni-net-dir\") pod \"calico-node-nwvzt\" (UID: \"4c1a1eae-f16a-407a-99da-fd5f58db4aa9\") " pod="calico-system/calico-node-nwvzt" Nov 5 23:40:10.491724 kubelet[2683]: I1105 23:40:10.491647 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c1a1eae-f16a-407a-99da-fd5f58db4aa9-xtables-lock\") pod \"calico-node-nwvzt\" (UID: \"4c1a1eae-f16a-407a-99da-fd5f58db4aa9\") " pod="calico-system/calico-node-nwvzt" Nov 5 23:40:10.491724 kubelet[2683]: I1105 23:40:10.491699 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c1a1eae-f16a-407a-99da-fd5f58db4aa9-tigera-ca-bundle\") pod \"calico-node-nwvzt\" (UID: \"4c1a1eae-f16a-407a-99da-fd5f58db4aa9\") " pod="calico-system/calico-node-nwvzt" Nov 5 23:40:10.491814 kubelet[2683]: I1105 23:40:10.491730 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4c1a1eae-f16a-407a-99da-fd5f58db4aa9-var-run-calico\") pod \"calico-node-nwvzt\" (UID: \"4c1a1eae-f16a-407a-99da-fd5f58db4aa9\") " pod="calico-system/calico-node-nwvzt" Nov 5 23:40:10.496145 containerd[1521]: time="2025-11-05T23:40:10.496111964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75d4788f67-r25b7,Uid:98b8f19c-afbf-4c86-8bf2-c45172a8849f,Namespace:calico-system,Attempt:0,}" Nov 5 23:40:10.535427 kubelet[2683]: E1105 23:40:10.535364 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kzcdk" podUID="99b6a710-372d-4a19-969a-3a96da2ba20c" Nov 5 23:40:10.549307 containerd[1521]: time="2025-11-05T23:40:10.549261150Z" level=info msg="connecting to shim 80216c0528fe24fab592b7aee8c54981c0a01b0f53c52cebd3678583501d9c9c" address="unix:///run/containerd/s/704be3b79d37d55d933d4099c48f272af9971c9ec62ce71ccb6af6b0bf00a3d3" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:40:10.593416 kubelet[2683]: E1105 23:40:10.593360 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.593631 kubelet[2683]: W1105 23:40:10.593383 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file 
not found in $PATH, output: "" Nov 5 23:40:10.593631 kubelet[2683]: E1105 23:40:10.593513 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:10.595147 systemd[1]: Started cri-containerd-80216c0528fe24fab592b7aee8c54981c0a01b0f53c52cebd3678583501d9c9c.scope - libcontainer container 80216c0528fe24fab592b7aee8c54981c0a01b0f53c52cebd3678583501d9c9c. Nov 5 23:40:10.598293 kubelet[2683]: E1105 23:40:10.598226 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.598293 kubelet[2683]: W1105 23:40:10.598244 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.598293 kubelet[2683]: E1105 23:40:10.598261 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:10.611062 kubelet[2683]: E1105 23:40:10.610980 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.611062 kubelet[2683]: W1105 23:40:10.611005 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.611062 kubelet[2683]: E1105 23:40:10.611023 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:10.651977 containerd[1521]: time="2025-11-05T23:40:10.651930817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nwvzt,Uid:4c1a1eae-f16a-407a-99da-fd5f58db4aa9,Namespace:calico-system,Attempt:0,}" Nov 5 23:40:10.673545 containerd[1521]: time="2025-11-05T23:40:10.672435821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75d4788f67-r25b7,Uid:98b8f19c-afbf-4c86-8bf2-c45172a8849f,Namespace:calico-system,Attempt:0,} returns sandbox id \"80216c0528fe24fab592b7aee8c54981c0a01b0f53c52cebd3678583501d9c9c\"" Nov 5 23:40:10.680962 containerd[1521]: time="2025-11-05T23:40:10.679608866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 23:40:10.686420 containerd[1521]: time="2025-11-05T23:40:10.686336449Z" level=info msg="connecting to shim 0e85d8c4e782add68fa0234fad77847b40b8f1d4e04861ab66a662d13657c06d" address="unix:///run/containerd/s/8789ed075c6de33ac3b32b6b89e63caf850f3f56675889e6467e168f0b435433" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:40:10.693597 kubelet[2683]: E1105 23:40:10.693549 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.693824 kubelet[2683]: W1105 23:40:10.693729 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.693824 kubelet[2683]: E1105 23:40:10.693754 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:10.694091 kubelet[2683]: I1105 23:40:10.694074 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/99b6a710-372d-4a19-969a-3a96da2ba20c-registration-dir\") pod \"csi-node-driver-kzcdk\" (UID: \"99b6a710-372d-4a19-969a-3a96da2ba20c\") " pod="calico-system/csi-node-driver-kzcdk" Nov 5 23:40:10.694413 kubelet[2683]: E1105 23:40:10.694377 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.694549 kubelet[2683]: W1105 23:40:10.694495 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.694549 kubelet[2683]: E1105 23:40:10.694515 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:10.694900 kubelet[2683]: E1105 23:40:10.694885 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.695022 kubelet[2683]: W1105 23:40:10.694962 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.695022 kubelet[2683]: E1105 23:40:10.694989 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:10.695223 kubelet[2683]: I1105 23:40:10.695205 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/99b6a710-372d-4a19-969a-3a96da2ba20c-socket-dir\") pod \"csi-node-driver-kzcdk\" (UID: \"99b6a710-372d-4a19-969a-3a96da2ba20c\") " pod="calico-system/csi-node-driver-kzcdk" Nov 5 23:40:10.695567 kubelet[2683]: E1105 23:40:10.695505 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.695567 kubelet[2683]: W1105 23:40:10.695540 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.695567 kubelet[2683]: E1105 23:40:10.695552 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:10.696152 kubelet[2683]: E1105 23:40:10.696136 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.696313 kubelet[2683]: W1105 23:40:10.696220 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.696313 kubelet[2683]: E1105 23:40:10.696237 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:10.696313 kubelet[2683]: I1105 23:40:10.696263 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mv42\" (UniqueName: \"kubernetes.io/projected/99b6a710-372d-4a19-969a-3a96da2ba20c-kube-api-access-2mv42\") pod \"csi-node-driver-kzcdk\" (UID: \"99b6a710-372d-4a19-969a-3a96da2ba20c\") " pod="calico-system/csi-node-driver-kzcdk" Nov 5 23:40:10.696753 kubelet[2683]: E1105 23:40:10.696715 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.696753 kubelet[2683]: W1105 23:40:10.696730 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.696753 kubelet[2683]: E1105 23:40:10.696741 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:10.696951 kubelet[2683]: I1105 23:40:10.696880 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/99b6a710-372d-4a19-969a-3a96da2ba20c-varrun\") pod \"csi-node-driver-kzcdk\" (UID: \"99b6a710-372d-4a19-969a-3a96da2ba20c\") " pod="calico-system/csi-node-driver-kzcdk" Nov 5 23:40:10.697353 kubelet[2683]: E1105 23:40:10.697320 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.697353 kubelet[2683]: W1105 23:40:10.697332 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.697353 kubelet[2683]: E1105 23:40:10.697342 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:10.697731 kubelet[2683]: E1105 23:40:10.697697 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.697731 kubelet[2683]: W1105 23:40:10.697710 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.697731 kubelet[2683]: E1105 23:40:10.697720 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:10.698257 kubelet[2683]: E1105 23:40:10.698240 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.698351 kubelet[2683]: W1105 23:40:10.698320 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.698351 kubelet[2683]: E1105 23:40:10.698339 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:10.699361 kubelet[2683]: E1105 23:40:10.699321 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.699361 kubelet[2683]: W1105 23:40:10.699336 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.699361 kubelet[2683]: E1105 23:40:10.699347 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:10.700644 kubelet[2683]: E1105 23:40:10.700295 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.700644 kubelet[2683]: W1105 23:40:10.700315 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.700644 kubelet[2683]: E1105 23:40:10.700326 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:10.701547 kubelet[2683]: I1105 23:40:10.701520 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99b6a710-372d-4a19-969a-3a96da2ba20c-kubelet-dir\") pod \"csi-node-driver-kzcdk\" (UID: \"99b6a710-372d-4a19-969a-3a96da2ba20c\") " pod="calico-system/csi-node-driver-kzcdk" Nov 5 23:40:10.702966 kubelet[2683]: E1105 23:40:10.702442 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.702966 kubelet[2683]: W1105 23:40:10.702458 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.702966 kubelet[2683]: E1105 23:40:10.702484 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:10.703530 kubelet[2683]: E1105 23:40:10.703459 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.703530 kubelet[2683]: W1105 23:40:10.703475 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.703530 kubelet[2683]: E1105 23:40:10.703485 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:10.703896 kubelet[2683]: E1105 23:40:10.703878 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.703969 kubelet[2683]: W1105 23:40:10.703956 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.704047 kubelet[2683]: E1105 23:40:10.704035 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:10.704524 kubelet[2683]: E1105 23:40:10.704509 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.704772 kubelet[2683]: W1105 23:40:10.704627 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.704772 kubelet[2683]: E1105 23:40:10.704644 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:10.709620 systemd[1]: Started cri-containerd-0e85d8c4e782add68fa0234fad77847b40b8f1d4e04861ab66a662d13657c06d.scope - libcontainer container 0e85d8c4e782add68fa0234fad77847b40b8f1d4e04861ab66a662d13657c06d. Nov 5 23:40:10.736139 containerd[1521]: time="2025-11-05T23:40:10.735982336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nwvzt,Uid:4c1a1eae-f16a-407a-99da-fd5f58db4aa9,Namespace:calico-system,Attempt:0,} returns sandbox id \"0e85d8c4e782add68fa0234fad77847b40b8f1d4e04861ab66a662d13657c06d\"" Nov 5 23:40:10.804734 kubelet[2683]: E1105 23:40:10.804593 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.804734 kubelet[2683]: W1105 23:40:10.804615 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.804734 kubelet[2683]: E1105 23:40:10.804633 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:10.804997 kubelet[2683]: E1105 23:40:10.804983 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.805053 kubelet[2683]: W1105 23:40:10.805041 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.805113 kubelet[2683]: E1105 23:40:10.805102 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:10.805444 kubelet[2683]: E1105 23:40:10.805331 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.805444 kubelet[2683]: W1105 23:40:10.805342 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.805444 kubelet[2683]: E1105 23:40:10.805351 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:10.805600 kubelet[2683]: E1105 23:40:10.805578 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.805600 kubelet[2683]: W1105 23:40:10.805596 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.805647 kubelet[2683]: E1105 23:40:10.805609 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:10.805753 kubelet[2683]: E1105 23:40:10.805743 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.805777 kubelet[2683]: W1105 23:40:10.805754 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.805777 kubelet[2683]: E1105 23:40:10.805762 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:10.806001 kubelet[2683]: E1105 23:40:10.805989 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.806001 kubelet[2683]: W1105 23:40:10.806000 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.806070 kubelet[2683]: E1105 23:40:10.806015 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:10.806239 kubelet[2683]: E1105 23:40:10.806228 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.806273 kubelet[2683]: W1105 23:40:10.806238 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.806273 kubelet[2683]: E1105 23:40:10.806248 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:10.807038 kubelet[2683]: E1105 23:40:10.807021 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.807125 kubelet[2683]: W1105 23:40:10.807113 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.807368 kubelet[2683]: E1105 23:40:10.807349 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:10.808103 kubelet[2683]: E1105 23:40:10.807903 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.808103 kubelet[2683]: W1105 23:40:10.807919 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.808103 kubelet[2683]: E1105 23:40:10.807931 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:10.808255 kubelet[2683]: E1105 23:40:10.808242 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.808319 kubelet[2683]: W1105 23:40:10.808307 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.808488 kubelet[2683]: E1105 23:40:10.808364 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:10.808648 kubelet[2683]: E1105 23:40:10.808636 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.808707 kubelet[2683]: W1105 23:40:10.808696 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.808755 kubelet[2683]: E1105 23:40:10.808745 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:10.808956 kubelet[2683]: E1105 23:40:10.808944 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.809212 kubelet[2683]: W1105 23:40:10.809060 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.809212 kubelet[2683]: E1105 23:40:10.809080 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:10.809332 kubelet[2683]: E1105 23:40:10.809320 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.809381 kubelet[2683]: W1105 23:40:10.809372 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.809460 kubelet[2683]: E1105 23:40:10.809449 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:10.809740 kubelet[2683]: E1105 23:40:10.809669 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.809740 kubelet[2683]: W1105 23:40:10.809680 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.809740 kubelet[2683]: E1105 23:40:10.809690 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:10.809948 kubelet[2683]: E1105 23:40:10.809924 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.809981 kubelet[2683]: W1105 23:40:10.809950 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.809981 kubelet[2683]: E1105 23:40:10.809966 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:10.810135 kubelet[2683]: E1105 23:40:10.810123 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.810135 kubelet[2683]: W1105 23:40:10.810133 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.810200 kubelet[2683]: E1105 23:40:10.810142 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:10.810295 kubelet[2683]: E1105 23:40:10.810282 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.810295 kubelet[2683]: W1105 23:40:10.810293 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.810373 kubelet[2683]: E1105 23:40:10.810301 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:10.810476 kubelet[2683]: E1105 23:40:10.810463 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.810476 kubelet[2683]: W1105 23:40:10.810474 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.810550 kubelet[2683]: E1105 23:40:10.810483 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:10.810621 kubelet[2683]: E1105 23:40:10.810610 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.810621 kubelet[2683]: W1105 23:40:10.810619 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.810688 kubelet[2683]: E1105 23:40:10.810626 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:10.810745 kubelet[2683]: E1105 23:40:10.810734 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.810745 kubelet[2683]: W1105 23:40:10.810743 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.810924 kubelet[2683]: E1105 23:40:10.810750 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:10.811008 kubelet[2683]: E1105 23:40:10.810993 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.811058 kubelet[2683]: W1105 23:40:10.811046 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.811108 kubelet[2683]: E1105 23:40:10.811097 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:10.811328 kubelet[2683]: E1105 23:40:10.811309 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.811328 kubelet[2683]: W1105 23:40:10.811324 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.811466 kubelet[2683]: E1105 23:40:10.811335 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:10.811599 kubelet[2683]: E1105 23:40:10.811586 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.811599 kubelet[2683]: W1105 23:40:10.811598 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.811656 kubelet[2683]: E1105 23:40:10.811607 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:10.811780 kubelet[2683]: E1105 23:40:10.811767 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.811780 kubelet[2683]: W1105 23:40:10.811778 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.811850 kubelet[2683]: E1105 23:40:10.811787 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:10.811966 kubelet[2683]: E1105 23:40:10.811952 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.811966 kubelet[2683]: W1105 23:40:10.811964 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.812013 kubelet[2683]: E1105 23:40:10.811973 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:10.828843 kubelet[2683]: E1105 23:40:10.828813 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:10.828843 kubelet[2683]: W1105 23:40:10.828834 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:10.828998 kubelet[2683]: E1105 23:40:10.828855 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:11.741821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1529555931.mount: Deactivated successfully. Nov 5 23:40:12.271334 kubelet[2683]: E1105 23:40:12.271264 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kzcdk" podUID="99b6a710-372d-4a19-969a-3a96da2ba20c" Nov 5 23:40:12.458584 containerd[1521]: time="2025-11-05T23:40:12.458505868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:40:12.459480 containerd[1521]: time="2025-11-05T23:40:12.459434550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Nov 5 23:40:12.460799 containerd[1521]: time="2025-11-05T23:40:12.460752889Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:40:12.464651 containerd[1521]: time="2025-11-05T23:40:12.464600461Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:40:12.465586 containerd[1521]: time="2025-11-05T23:40:12.465545063Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.784577728s" Nov 5 23:40:12.465586 containerd[1521]: time="2025-11-05T23:40:12.465580465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 5 23:40:12.466777 containerd[1521]: time="2025-11-05T23:40:12.466743277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 5 23:40:12.481360 containerd[1521]: time="2025-11-05T23:40:12.481299608Z" level=info msg="CreateContainer within sandbox \"80216c0528fe24fab592b7aee8c54981c0a01b0f53c52cebd3678583501d9c9c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 5 23:40:12.489839 containerd[1521]: time="2025-11-05T23:40:12.489607780Z" level=info msg="Container 72c7c7f4b8f39dd337dea9a68f67b2da0e6dc6594cf3df217aed87ba1557c771: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:40:12.493181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount595770258.mount: Deactivated successfully. 
Nov 5 23:40:12.524092 containerd[1521]: time="2025-11-05T23:40:12.523965278Z" level=info msg="CreateContainer within sandbox \"80216c0528fe24fab592b7aee8c54981c0a01b0f53c52cebd3678583501d9c9c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"72c7c7f4b8f39dd337dea9a68f67b2da0e6dc6594cf3df217aed87ba1557c771\"" Nov 5 23:40:12.524984 containerd[1521]: time="2025-11-05T23:40:12.524947721Z" level=info msg="StartContainer for \"72c7c7f4b8f39dd337dea9a68f67b2da0e6dc6594cf3df217aed87ba1557c771\"" Nov 5 23:40:12.526066 containerd[1521]: time="2025-11-05T23:40:12.526015249Z" level=info msg="connecting to shim 72c7c7f4b8f39dd337dea9a68f67b2da0e6dc6594cf3df217aed87ba1557c771" address="unix:///run/containerd/s/704be3b79d37d55d933d4099c48f272af9971c9ec62ce71ccb6af6b0bf00a3d3" protocol=ttrpc version=3 Nov 5 23:40:12.549664 systemd[1]: Started cri-containerd-72c7c7f4b8f39dd337dea9a68f67b2da0e6dc6594cf3df217aed87ba1557c771.scope - libcontainer container 72c7c7f4b8f39dd337dea9a68f67b2da0e6dc6594cf3df217aed87ba1557c771. 
Nov 5 23:40:12.590090 containerd[1521]: time="2025-11-05T23:40:12.590036314Z" level=info msg="StartContainer for \"72c7c7f4b8f39dd337dea9a68f67b2da0e6dc6594cf3df217aed87ba1557c771\" returns successfully" Nov 5 23:40:13.388772 kubelet[2683]: I1105 23:40:13.388699 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75d4788f67-r25b7" podStartSLOduration=1.601290662 podStartE2EDuration="3.388680446s" podCreationTimestamp="2025-11-05 23:40:10 +0000 UTC" firstStartedPulling="2025-11-05 23:40:10.679248688 +0000 UTC m=+21.507140832" lastFinishedPulling="2025-11-05 23:40:12.466638472 +0000 UTC m=+23.294530616" observedRunningTime="2025-11-05 23:40:13.387784168 +0000 UTC m=+24.215676312" watchObservedRunningTime="2025-11-05 23:40:13.388680446 +0000 UTC m=+24.216572710" Nov 5 23:40:13.414567 kubelet[2683]: E1105 23:40:13.414527 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.414567 kubelet[2683]: W1105 23:40:13.414554 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.414567 kubelet[2683]: E1105 23:40:13.414575 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:13.414995 kubelet[2683]: E1105 23:40:13.414861 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.416446 kubelet[2683]: W1105 23:40:13.414869 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.416528 kubelet[2683]: E1105 23:40:13.416460 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:13.417014 kubelet[2683]: E1105 23:40:13.416829 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.417014 kubelet[2683]: W1105 23:40:13.416845 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.417014 kubelet[2683]: E1105 23:40:13.416858 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:13.417225 kubelet[2683]: E1105 23:40:13.417052 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.417225 kubelet[2683]: W1105 23:40:13.417060 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.417225 kubelet[2683]: E1105 23:40:13.417069 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:13.417316 kubelet[2683]: E1105 23:40:13.417230 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.417316 kubelet[2683]: W1105 23:40:13.417237 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.417316 kubelet[2683]: E1105 23:40:13.417252 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:13.418056 kubelet[2683]: E1105 23:40:13.417820 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.418056 kubelet[2683]: W1105 23:40:13.417832 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.418056 kubelet[2683]: E1105 23:40:13.417844 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:13.419038 kubelet[2683]: E1105 23:40:13.418183 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.419038 kubelet[2683]: W1105 23:40:13.418194 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.419038 kubelet[2683]: E1105 23:40:13.418204 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:13.419038 kubelet[2683]: E1105 23:40:13.418689 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.419038 kubelet[2683]: W1105 23:40:13.418701 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.419038 kubelet[2683]: E1105 23:40:13.418714 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:13.419038 kubelet[2683]: E1105 23:40:13.418905 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.419038 kubelet[2683]: W1105 23:40:13.418913 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.419038 kubelet[2683]: E1105 23:40:13.418923 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:13.419221 kubelet[2683]: E1105 23:40:13.419077 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.419221 kubelet[2683]: W1105 23:40:13.419084 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.419221 kubelet[2683]: E1105 23:40:13.419092 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:13.419221 kubelet[2683]: E1105 23:40:13.419201 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.419221 kubelet[2683]: W1105 23:40:13.419207 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.419221 kubelet[2683]: E1105 23:40:13.419214 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:13.420603 kubelet[2683]: E1105 23:40:13.419460 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.420603 kubelet[2683]: W1105 23:40:13.419474 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.420603 kubelet[2683]: E1105 23:40:13.419485 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:13.420603 kubelet[2683]: E1105 23:40:13.419650 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.420603 kubelet[2683]: W1105 23:40:13.419657 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.420603 kubelet[2683]: E1105 23:40:13.419665 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:13.420603 kubelet[2683]: E1105 23:40:13.419831 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.420603 kubelet[2683]: W1105 23:40:13.419838 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.420603 kubelet[2683]: E1105 23:40:13.419845 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:13.420603 kubelet[2683]: E1105 23:40:13.420028 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.421841 kubelet[2683]: W1105 23:40:13.420035 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.421841 kubelet[2683]: E1105 23:40:13.420042 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:13.426786 kubelet[2683]: E1105 23:40:13.426709 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.426786 kubelet[2683]: W1105 23:40:13.426732 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.426786 kubelet[2683]: E1105 23:40:13.426753 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:13.427665 kubelet[2683]: E1105 23:40:13.427497 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.427665 kubelet[2683]: W1105 23:40:13.427515 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.427665 kubelet[2683]: E1105 23:40:13.427529 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:13.427933 kubelet[2683]: E1105 23:40:13.427908 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.427985 kubelet[2683]: W1105 23:40:13.427926 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.427985 kubelet[2683]: E1105 23:40:13.427959 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:13.428250 kubelet[2683]: E1105 23:40:13.428205 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.428250 kubelet[2683]: W1105 23:40:13.428215 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.428250 kubelet[2683]: E1105 23:40:13.428224 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:13.428425 kubelet[2683]: E1105 23:40:13.428411 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.428425 kubelet[2683]: W1105 23:40:13.428423 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.428483 kubelet[2683]: E1105 23:40:13.428432 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:13.428939 kubelet[2683]: E1105 23:40:13.428917 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.428939 kubelet[2683]: W1105 23:40:13.428938 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.428939 kubelet[2683]: E1105 23:40:13.428951 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:13.429181 kubelet[2683]: E1105 23:40:13.429136 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.429181 kubelet[2683]: W1105 23:40:13.429146 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.429181 kubelet[2683]: E1105 23:40:13.429156 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:13.429384 kubelet[2683]: E1105 23:40:13.429284 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.429384 kubelet[2683]: W1105 23:40:13.429291 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.429384 kubelet[2683]: E1105 23:40:13.429299 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:13.429519 kubelet[2683]: E1105 23:40:13.429503 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.429519 kubelet[2683]: W1105 23:40:13.429516 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.429569 kubelet[2683]: E1105 23:40:13.429525 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:13.429773 kubelet[2683]: E1105 23:40:13.429689 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.429773 kubelet[2683]: W1105 23:40:13.429706 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.429773 kubelet[2683]: E1105 23:40:13.429713 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:13.429876 kubelet[2683]: E1105 23:40:13.429834 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.429876 kubelet[2683]: W1105 23:40:13.429841 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.429876 kubelet[2683]: E1105 23:40:13.429849 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:13.430126 kubelet[2683]: E1105 23:40:13.430021 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.430126 kubelet[2683]: W1105 23:40:13.430030 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.430126 kubelet[2683]: E1105 23:40:13.430040 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:13.430622 kubelet[2683]: E1105 23:40:13.430564 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.430622 kubelet[2683]: W1105 23:40:13.430603 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.430622 kubelet[2683]: E1105 23:40:13.430618 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:13.430928 kubelet[2683]: E1105 23:40:13.430832 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.430928 kubelet[2683]: W1105 23:40:13.430846 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.430928 kubelet[2683]: E1105 23:40:13.430857 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:13.431381 kubelet[2683]: E1105 23:40:13.431096 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.431381 kubelet[2683]: W1105 23:40:13.431107 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.431381 kubelet[2683]: E1105 23:40:13.431118 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:13.431381 kubelet[2683]: E1105 23:40:13.431320 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.431381 kubelet[2683]: W1105 23:40:13.431330 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.431381 kubelet[2683]: E1105 23:40:13.431352 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:13.436379 kubelet[2683]: E1105 23:40:13.436302 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.436379 kubelet[2683]: W1105 23:40:13.436324 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.436379 kubelet[2683]: E1105 23:40:13.436342 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:40:13.437013 kubelet[2683]: E1105 23:40:13.436957 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:40:13.437013 kubelet[2683]: W1105 23:40:13.436971 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:40:13.437013 kubelet[2683]: E1105 23:40:13.436983 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:40:13.758928 containerd[1521]: time="2025-11-05T23:40:13.758680128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:40:13.760716 containerd[1521]: time="2025-11-05T23:40:13.759460560Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Nov 5 23:40:13.761084 containerd[1521]: time="2025-11-05T23:40:13.761031306Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:40:13.763584 containerd[1521]: time="2025-11-05T23:40:13.763541291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:40:13.764134 containerd[1521]: time="2025-11-05T23:40:13.764092155Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.297312636s" Nov 5 23:40:13.764134 containerd[1521]: time="2025-11-05T23:40:13.764129916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 5 23:40:13.768272 containerd[1521]: time="2025-11-05T23:40:13.767877193Z" level=info msg="CreateContainer within sandbox \"0e85d8c4e782add68fa0234fad77847b40b8f1d4e04861ab66a662d13657c06d\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 5 23:40:13.788589 containerd[1521]: time="2025-11-05T23:40:13.787759827Z" level=info msg="Container 97bdb6fae381364ae57303b736d1dc06ae5203a2600052f17fc5b1e361d49561: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:40:13.795456 containerd[1521]: time="2025-11-05T23:40:13.795402308Z" level=info msg="CreateContainer within sandbox \"0e85d8c4e782add68fa0234fad77847b40b8f1d4e04861ab66a662d13657c06d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"97bdb6fae381364ae57303b736d1dc06ae5203a2600052f17fc5b1e361d49561\"" Nov 5 23:40:13.795976 containerd[1521]: time="2025-11-05T23:40:13.795954091Z" level=info msg="StartContainer for \"97bdb6fae381364ae57303b736d1dc06ae5203a2600052f17fc5b1e361d49561\"" Nov 5 23:40:13.798442 containerd[1521]: time="2025-11-05T23:40:13.797417833Z" level=info msg="connecting to shim 97bdb6fae381364ae57303b736d1dc06ae5203a2600052f17fc5b1e361d49561" address="unix:///run/containerd/s/8789ed075c6de33ac3b32b6b89e63caf850f3f56675889e6467e168f0b435433" protocol=ttrpc version=3 Nov 5 23:40:13.828607 systemd[1]: Started cri-containerd-97bdb6fae381364ae57303b736d1dc06ae5203a2600052f17fc5b1e361d49561.scope - libcontainer container 97bdb6fae381364ae57303b736d1dc06ae5203a2600052f17fc5b1e361d49561. Nov 5 23:40:13.880238 systemd[1]: cri-containerd-97bdb6fae381364ae57303b736d1dc06ae5203a2600052f17fc5b1e361d49561.scope: Deactivated successfully. 
Nov 5 23:40:13.885730 containerd[1521]: time="2025-11-05T23:40:13.885692416Z" level=info msg="TaskExit event in podsandbox handler container_id:\"97bdb6fae381364ae57303b736d1dc06ae5203a2600052f17fc5b1e361d49561\" id:\"97bdb6fae381364ae57303b736d1dc06ae5203a2600052f17fc5b1e361d49561\" pid:3369 exited_at:{seconds:1762386013 nanos:885190315}" Nov 5 23:40:13.890960 containerd[1521]: time="2025-11-05T23:40:13.890929676Z" level=info msg="StartContainer for \"97bdb6fae381364ae57303b736d1dc06ae5203a2600052f17fc5b1e361d49561\" returns successfully" Nov 5 23:40:13.893957 containerd[1521]: time="2025-11-05T23:40:13.893773235Z" level=info msg="received exit event container_id:\"97bdb6fae381364ae57303b736d1dc06ae5203a2600052f17fc5b1e361d49561\" id:\"97bdb6fae381364ae57303b736d1dc06ae5203a2600052f17fc5b1e361d49561\" pid:3369 exited_at:{seconds:1762386013 nanos:885190315}" Nov 5 23:40:13.930524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97bdb6fae381364ae57303b736d1dc06ae5203a2600052f17fc5b1e361d49561-rootfs.mount: Deactivated successfully. 
Nov 5 23:40:14.271167 kubelet[2683]: E1105 23:40:14.271112 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kzcdk" podUID="99b6a710-372d-4a19-969a-3a96da2ba20c"
Nov 5 23:40:14.376933 containerd[1521]: time="2025-11-05T23:40:14.376893035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Nov 5 23:40:16.272048 kubelet[2683]: E1105 23:40:16.271996 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kzcdk" podUID="99b6a710-372d-4a19-969a-3a96da2ba20c"
Nov 5 23:40:17.579078 containerd[1521]: time="2025-11-05T23:40:17.579013346Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 23:40:17.580435 containerd[1521]: time="2025-11-05T23:40:17.580359950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816"
Nov 5 23:40:17.582078 containerd[1521]: time="2025-11-05T23:40:17.581807277Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 23:40:17.583959 containerd[1521]: time="2025-11-05T23:40:17.583917545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 23:40:17.584775 containerd[1521]: time="2025-11-05T23:40:17.584746212Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.207810056s"
Nov 5 23:40:17.584870 containerd[1521]: time="2025-11-05T23:40:17.584855296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\""
Nov 5 23:40:17.589769 containerd[1521]: time="2025-11-05T23:40:17.589692252Z" level=info msg="CreateContainer within sandbox \"0e85d8c4e782add68fa0234fad77847b40b8f1d4e04861ab66a662d13657c06d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 5 23:40:17.616846 containerd[1521]: time="2025-11-05T23:40:17.616769290Z" level=info msg="Container e65694cf77f62038ff83f284bb3d26e32131611611929108c69e8b009d9b55c7: CDI devices from CRI Config.CDIDevices: []"
Nov 5 23:40:17.617524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2532835072.mount: Deactivated successfully.
Nov 5 23:40:17.625095 containerd[1521]: time="2025-11-05T23:40:17.625030078Z" level=info msg="CreateContainer within sandbox \"0e85d8c4e782add68fa0234fad77847b40b8f1d4e04861ab66a662d13657c06d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e65694cf77f62038ff83f284bb3d26e32131611611929108c69e8b009d9b55c7\""
Nov 5 23:40:17.625978 containerd[1521]: time="2025-11-05T23:40:17.625895546Z" level=info msg="StartContainer for \"e65694cf77f62038ff83f284bb3d26e32131611611929108c69e8b009d9b55c7\""
Nov 5 23:40:17.627682 containerd[1521]: time="2025-11-05T23:40:17.627649243Z" level=info msg="connecting to shim e65694cf77f62038ff83f284bb3d26e32131611611929108c69e8b009d9b55c7" address="unix:///run/containerd/s/8789ed075c6de33ac3b32b6b89e63caf850f3f56675889e6467e168f0b435433" protocol=ttrpc version=3
Nov 5 23:40:17.650611 systemd[1]: Started cri-containerd-e65694cf77f62038ff83f284bb3d26e32131611611929108c69e8b009d9b55c7.scope - libcontainer container e65694cf77f62038ff83f284bb3d26e32131611611929108c69e8b009d9b55c7.
Nov 5 23:40:17.687190 containerd[1521]: time="2025-11-05T23:40:17.687091489Z" level=info msg="StartContainer for \"e65694cf77f62038ff83f284bb3d26e32131611611929108c69e8b009d9b55c7\" returns successfully"
Nov 5 23:40:18.256568 systemd[1]: cri-containerd-e65694cf77f62038ff83f284bb3d26e32131611611929108c69e8b009d9b55c7.scope: Deactivated successfully.
Nov 5 23:40:18.256927 systemd[1]: cri-containerd-e65694cf77f62038ff83f284bb3d26e32131611611929108c69e8b009d9b55c7.scope: Consumed 458ms CPU time, 177.7M memory peak, 2.6M read from disk, 165.9M written to disk.
Nov 5 23:40:18.258887 containerd[1521]: time="2025-11-05T23:40:18.258846096Z" level=info msg="received exit event container_id:\"e65694cf77f62038ff83f284bb3d26e32131611611929108c69e8b009d9b55c7\" id:\"e65694cf77f62038ff83f284bb3d26e32131611611929108c69e8b009d9b55c7\" pid:3430 exited_at:{seconds:1762386018 nanos:258646010}"
Nov 5 23:40:18.259251 containerd[1521]: time="2025-11-05T23:40:18.259221547Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e65694cf77f62038ff83f284bb3d26e32131611611929108c69e8b009d9b55c7\" id:\"e65694cf77f62038ff83f284bb3d26e32131611611929108c69e8b009d9b55c7\" pid:3430 exited_at:{seconds:1762386018 nanos:258646010}"
Nov 5 23:40:18.271382 kubelet[2683]: E1105 23:40:18.271339 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kzcdk" podUID="99b6a710-372d-4a19-969a-3a96da2ba20c"
Nov 5 23:40:18.273488 kubelet[2683]: I1105 23:40:18.273460 2683 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 5 23:40:18.288035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e65694cf77f62038ff83f284bb3d26e32131611611929108c69e8b009d9b55c7-rootfs.mount: Deactivated successfully.
Nov 5 23:40:18.355959 systemd[1]: Created slice kubepods-besteffort-pod25336bb2_bdfb_4d75_81ae_8315253dce4b.slice - libcontainer container kubepods-besteffort-pod25336bb2_bdfb_4d75_81ae_8315253dce4b.slice.
Nov 5 23:40:18.365643 kubelet[2683]: I1105 23:40:18.365598 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43750443-b18a-48ac-8d08-d0590a9d536c-config-volume\") pod \"coredns-674b8bbfcf-4rxrw\" (UID: \"43750443-b18a-48ac-8d08-d0590a9d536c\") " pod="kube-system/coredns-674b8bbfcf-4rxrw"
Nov 5 23:40:18.365788 kubelet[2683]: I1105 23:40:18.365649 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/25336bb2-bdfb-4d75-81ae-8315253dce4b-calico-apiserver-certs\") pod \"calico-apiserver-6b5dd884c4-dpl4b\" (UID: \"25336bb2-bdfb-4d75-81ae-8315253dce4b\") " pod="calico-apiserver/calico-apiserver-6b5dd884c4-dpl4b"
Nov 5 23:40:18.365788 kubelet[2683]: I1105 23:40:18.365668 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9bpb\" (UniqueName: \"kubernetes.io/projected/b316df6c-d77d-45e2-a17b-32be21557bd5-kube-api-access-k9bpb\") pod \"calico-kube-controllers-58d449f5dd-whvqx\" (UID: \"b316df6c-d77d-45e2-a17b-32be21557bd5\") " pod="calico-system/calico-kube-controllers-58d449f5dd-whvqx"
Nov 5 23:40:18.365788 kubelet[2683]: I1105 23:40:18.365686 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zllw\" (UniqueName: \"kubernetes.io/projected/25336bb2-bdfb-4d75-81ae-8315253dce4b-kube-api-access-5zllw\") pod \"calico-apiserver-6b5dd884c4-dpl4b\" (UID: \"25336bb2-bdfb-4d75-81ae-8315253dce4b\") " pod="calico-apiserver/calico-apiserver-6b5dd884c4-dpl4b"
Nov 5 23:40:18.365788 kubelet[2683]: I1105 23:40:18.365701 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w8kz\" (UniqueName: \"kubernetes.io/projected/43750443-b18a-48ac-8d08-d0590a9d536c-kube-api-access-9w8kz\") pod \"coredns-674b8bbfcf-4rxrw\" (UID: \"43750443-b18a-48ac-8d08-d0590a9d536c\") " pod="kube-system/coredns-674b8bbfcf-4rxrw"
Nov 5 23:40:18.365788 kubelet[2683]: I1105 23:40:18.365722 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b316df6c-d77d-45e2-a17b-32be21557bd5-tigera-ca-bundle\") pod \"calico-kube-controllers-58d449f5dd-whvqx\" (UID: \"b316df6c-d77d-45e2-a17b-32be21557bd5\") " pod="calico-system/calico-kube-controllers-58d449f5dd-whvqx"
Nov 5 23:40:18.366500 systemd[1]: Created slice kubepods-besteffort-podb316df6c_d77d_45e2_a17b_32be21557bd5.slice - libcontainer container kubepods-besteffort-podb316df6c_d77d_45e2_a17b_32be21557bd5.slice.
Nov 5 23:40:18.373662 systemd[1]: Created slice kubepods-burstable-pod43750443_b18a_48ac_8d08_d0590a9d536c.slice - libcontainer container kubepods-burstable-pod43750443_b18a_48ac_8d08_d0590a9d536c.slice.
Nov 5 23:40:18.386026 systemd[1]: Created slice kubepods-besteffort-pod212ccd9f_8d71_41e6_a6d1_799624e298a9.slice - libcontainer container kubepods-besteffort-pod212ccd9f_8d71_41e6_a6d1_799624e298a9.slice.
Nov 5 23:40:18.390492 containerd[1521]: time="2025-11-05T23:40:18.390452494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Nov 5 23:40:18.400378 systemd[1]: Created slice kubepods-besteffort-podf4a9be9c_3557_49ca_8c15_d14ae7737d6b.slice - libcontainer container kubepods-besteffort-podf4a9be9c_3557_49ca_8c15_d14ae7737d6b.slice.
Nov 5 23:40:18.405797 systemd[1]: Created slice kubepods-besteffort-pod17800fb7_e812_49ed_8d87_f115a699b129.slice - libcontainer container kubepods-besteffort-pod17800fb7_e812_49ed_8d87_f115a699b129.slice.
Nov 5 23:40:18.412429 systemd[1]: Created slice kubepods-burstable-podfd51956a_08f9_406d_bffa_5b4a2fd29274.slice - libcontainer container kubepods-burstable-podfd51956a_08f9_406d_bffa_5b4a2fd29274.slice.
Nov 5 23:40:18.567354 kubelet[2683]: I1105 23:40:18.567300 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17800fb7-e812-49ed-8d87-f115a699b129-config\") pod \"goldmane-666569f655-wrn7f\" (UID: \"17800fb7-e812-49ed-8d87-f115a699b129\") " pod="calico-system/goldmane-666569f655-wrn7f"
Nov 5 23:40:18.567520 kubelet[2683]: I1105 23:40:18.567364 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlzw8\" (UniqueName: \"kubernetes.io/projected/212ccd9f-8d71-41e6-a6d1-799624e298a9-kube-api-access-mlzw8\") pod \"calico-apiserver-6b5dd884c4-fbmq2\" (UID: \"212ccd9f-8d71-41e6-a6d1-799624e298a9\") " pod="calico-apiserver/calico-apiserver-6b5dd884c4-fbmq2"
Nov 5 23:40:18.567520 kubelet[2683]: I1105 23:40:18.567387 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqmb9\" (UniqueName: \"kubernetes.io/projected/f4a9be9c-3557-49ca-8c15-d14ae7737d6b-kube-api-access-xqmb9\") pod \"whisker-695dd765c9-zlqm4\" (UID: \"f4a9be9c-3557-49ca-8c15-d14ae7737d6b\") " pod="calico-system/whisker-695dd765c9-zlqm4"
Nov 5 23:40:18.567520 kubelet[2683]: I1105 23:40:18.567414 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4a9be9c-3557-49ca-8c15-d14ae7737d6b-whisker-ca-bundle\") pod \"whisker-695dd765c9-zlqm4\" (UID: \"f4a9be9c-3557-49ca-8c15-d14ae7737d6b\") " pod="calico-system/whisker-695dd765c9-zlqm4"
Nov 5 23:40:18.567520 kubelet[2683]: I1105 23:40:18.567434 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/17800fb7-e812-49ed-8d87-f115a699b129-goldmane-key-pair\") pod \"goldmane-666569f655-wrn7f\" (UID: \"17800fb7-e812-49ed-8d87-f115a699b129\") " pod="calico-system/goldmane-666569f655-wrn7f"
Nov 5 23:40:18.567788 kubelet[2683]: I1105 23:40:18.567741 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17800fb7-e812-49ed-8d87-f115a699b129-goldmane-ca-bundle\") pod \"goldmane-666569f655-wrn7f\" (UID: \"17800fb7-e812-49ed-8d87-f115a699b129\") " pod="calico-system/goldmane-666569f655-wrn7f"
Nov 5 23:40:18.567788 kubelet[2683]: I1105 23:40:18.567780 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4znqq\" (UniqueName: \"kubernetes.io/projected/fd51956a-08f9-406d-bffa-5b4a2fd29274-kube-api-access-4znqq\") pod \"coredns-674b8bbfcf-ddqx2\" (UID: \"fd51956a-08f9-406d-bffa-5b4a2fd29274\") " pod="kube-system/coredns-674b8bbfcf-ddqx2"
Nov 5 23:40:18.567855 kubelet[2683]: I1105 23:40:18.567813 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q466l\" (UniqueName: \"kubernetes.io/projected/17800fb7-e812-49ed-8d87-f115a699b129-kube-api-access-q466l\") pod \"goldmane-666569f655-wrn7f\" (UID: \"17800fb7-e812-49ed-8d87-f115a699b129\") " pod="calico-system/goldmane-666569f655-wrn7f"
Nov 5 23:40:18.567855 kubelet[2683]: I1105 23:40:18.567832 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd51956a-08f9-406d-bffa-5b4a2fd29274-config-volume\") pod \"coredns-674b8bbfcf-ddqx2\" (UID: \"fd51956a-08f9-406d-bffa-5b4a2fd29274\") " pod="kube-system/coredns-674b8bbfcf-ddqx2"
Nov 5 23:40:18.567903 kubelet[2683]: I1105 23:40:18.567870 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/212ccd9f-8d71-41e6-a6d1-799624e298a9-calico-apiserver-certs\") pod \"calico-apiserver-6b5dd884c4-fbmq2\" (UID: \"212ccd9f-8d71-41e6-a6d1-799624e298a9\") " pod="calico-apiserver/calico-apiserver-6b5dd884c4-fbmq2"
Nov 5 23:40:18.567903 kubelet[2683]: I1105 23:40:18.567889 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f4a9be9c-3557-49ca-8c15-d14ae7737d6b-whisker-backend-key-pair\") pod \"whisker-695dd765c9-zlqm4\" (UID: \"f4a9be9c-3557-49ca-8c15-d14ae7737d6b\") " pod="calico-system/whisker-695dd765c9-zlqm4"
Nov 5 23:40:18.666339 containerd[1521]: time="2025-11-05T23:40:18.666094589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5dd884c4-dpl4b,Uid:25336bb2-bdfb-4d75-81ae-8315253dce4b,Namespace:calico-apiserver,Attempt:0,}"
Nov 5 23:40:18.675131 containerd[1521]: time="2025-11-05T23:40:18.673592736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58d449f5dd-whvqx,Uid:b316df6c-d77d-45e2-a17b-32be21557bd5,Namespace:calico-system,Attempt:0,}"
Nov 5 23:40:18.684412 containerd[1521]: time="2025-11-05T23:40:18.683935571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4rxrw,Uid:43750443-b18a-48ac-8d08-d0590a9d536c,Namespace:kube-system,Attempt:0,}"
Nov 5 23:40:18.698123 containerd[1521]: time="2025-11-05T23:40:18.698067440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5dd884c4-fbmq2,Uid:212ccd9f-8d71-41e6-a6d1-799624e298a9,Namespace:calico-apiserver,Attempt:0,}"
Nov 5 23:40:18.704669 containerd[1521]: time="2025-11-05T23:40:18.704508396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-695dd765c9-zlqm4,Uid:f4a9be9c-3557-49ca-8c15-d14ae7737d6b,Namespace:calico-system,Attempt:0,}"
Nov 5 23:40:18.710684 containerd[1521]: time="2025-11-05T23:40:18.710642422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wrn7f,Uid:17800fb7-e812-49ed-8d87-f115a699b129,Namespace:calico-system,Attempt:0,}"
Nov 5 23:40:18.717674 containerd[1521]: time="2025-11-05T23:40:18.717633714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ddqx2,Uid:fd51956a-08f9-406d-bffa-5b4a2fd29274,Namespace:kube-system,Attempt:0,}"
Nov 5 23:40:18.803369 containerd[1521]: time="2025-11-05T23:40:18.803317398Z" level=error msg="Failed to destroy network for sandbox \"8de62a915c16c0bddd9c9cd97b5a1cef8746a18bfe35ed3a4c191a74dd90c3aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 23:40:18.804500 containerd[1521]: time="2025-11-05T23:40:18.804461112Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5dd884c4-dpl4b,Uid:25336bb2-bdfb-4d75-81ae-8315253dce4b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8de62a915c16c0bddd9c9cd97b5a1cef8746a18bfe35ed3a4c191a74dd90c3aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 23:40:18.811499 kubelet[2683]: E1105 23:40:18.811444 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8de62a915c16c0bddd9c9cd97b5a1cef8746a18bfe35ed3a4c191a74dd90c3aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 23:40:18.811642 kubelet[2683]: E1105 23:40:18.811539 2683 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8de62a915c16c0bddd9c9cd97b5a1cef8746a18bfe35ed3a4c191a74dd90c3aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b5dd884c4-dpl4b"
Nov 5 23:40:18.811642 kubelet[2683]: E1105 23:40:18.811559 2683 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8de62a915c16c0bddd9c9cd97b5a1cef8746a18bfe35ed3a4c191a74dd90c3aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b5dd884c4-dpl4b"
Nov 5 23:40:18.811642 kubelet[2683]: E1105 23:40:18.811617 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b5dd884c4-dpl4b_calico-apiserver(25336bb2-bdfb-4d75-81ae-8315253dce4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b5dd884c4-dpl4b_calico-apiserver(25336bb2-bdfb-4d75-81ae-8315253dce4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8de62a915c16c0bddd9c9cd97b5a1cef8746a18bfe35ed3a4c191a74dd90c3aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b5dd884c4-dpl4b" podUID="25336bb2-bdfb-4d75-81ae-8315253dce4b"
Nov 5 23:40:18.823648 containerd[1521]: time="2025-11-05T23:40:18.823517731Z" level=error msg="Failed to destroy network for sandbox \"6d02e31503fe630dc57af5362436ee8be79bbdbb1e7cf5d8b99815c9da53ed80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 23:40:18.825085 containerd[1521]: time="2025-11-05T23:40:18.825015817Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58d449f5dd-whvqx,Uid:b316df6c-d77d-45e2-a17b-32be21557bd5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d02e31503fe630dc57af5362436ee8be79bbdbb1e7cf5d8b99815c9da53ed80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 23:40:18.826009 kubelet[2683]: E1105 23:40:18.825960 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d02e31503fe630dc57af5362436ee8be79bbdbb1e7cf5d8b99815c9da53ed80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 23:40:18.826098 kubelet[2683]: E1105 23:40:18.826026 2683 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d02e31503fe630dc57af5362436ee8be79bbdbb1e7cf5d8b99815c9da53ed80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58d449f5dd-whvqx"
Nov 5 23:40:18.826098 kubelet[2683]: E1105 23:40:18.826047 2683 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d02e31503fe630dc57af5362436ee8be79bbdbb1e7cf5d8b99815c9da53ed80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58d449f5dd-whvqx"
Nov 5 23:40:18.826151 kubelet[2683]: E1105 23:40:18.826098 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58d449f5dd-whvqx_calico-system(b316df6c-d77d-45e2-a17b-32be21557bd5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58d449f5dd-whvqx_calico-system(b316df6c-d77d-45e2-a17b-32be21557bd5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d02e31503fe630dc57af5362436ee8be79bbdbb1e7cf5d8b99815c9da53ed80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58d449f5dd-whvqx" podUID="b316df6c-d77d-45e2-a17b-32be21557bd5"
Nov 5 23:40:18.834581 containerd[1521]: time="2025-11-05T23:40:18.834535826Z" level=error msg="Failed to destroy network for sandbox \"4af618d39b61243de40e4996413ef1874ff9610e2248ba82ba7adcf237faa1f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 23:40:18.836092 containerd[1521]: time="2025-11-05T23:40:18.836043872Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4rxrw,Uid:43750443-b18a-48ac-8d08-d0590a9d536c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4af618d39b61243de40e4996413ef1874ff9610e2248ba82ba7adcf237faa1f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 23:40:18.836346 kubelet[2683]: E1105 23:40:18.836292 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4af618d39b61243de40e4996413ef1874ff9610e2248ba82ba7adcf237faa1f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 23:40:18.836418 kubelet[2683]: E1105 23:40:18.836365 2683 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4af618d39b61243de40e4996413ef1874ff9610e2248ba82ba7adcf237faa1f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4rxrw"
Nov 5 23:40:18.836418 kubelet[2683]: E1105 23:40:18.836385 2683 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4af618d39b61243de40e4996413ef1874ff9610e2248ba82ba7adcf237faa1f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4rxrw"
Nov 5 23:40:18.836530 kubelet[2683]: E1105 23:40:18.836450 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-4rxrw_kube-system(43750443-b18a-48ac-8d08-d0590a9d536c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-4rxrw_kube-system(43750443-b18a-48ac-8d08-d0590a9d536c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4af618d39b61243de40e4996413ef1874ff9610e2248ba82ba7adcf237faa1f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4rxrw" podUID="43750443-b18a-48ac-8d08-d0590a9d536c"
Nov 5 23:40:18.839548 containerd[1521]: time="2025-11-05T23:40:18.839515777Z" level=error msg="Failed to destroy network for sandbox \"7fa0e943e85259d3bf284299efaec706be67762d428400d10f8851f391cb87fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 23:40:18.844765 containerd[1521]: time="2025-11-05T23:40:18.844692455Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5dd884c4-fbmq2,Uid:212ccd9f-8d71-41e6-a6d1-799624e298a9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fa0e943e85259d3bf284299efaec706be67762d428400d10f8851f391cb87fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 23:40:18.844897 containerd[1521]: time="2025-11-05T23:40:18.844823779Z" level=error msg="Failed to destroy network for sandbox \"b8c9f04e663e8da2f139479cb01baa4e2fcd6ececf134c7fa22f6fd54bcf58dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 23:40:18.844970 kubelet[2683]: E1105 23:40:18.844931 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fa0e943e85259d3bf284299efaec706be67762d428400d10f8851f391cb87fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 23:40:18.845022 kubelet[2683]: E1105 23:40:18.844988 2683 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fa0e943e85259d3bf284299efaec706be67762d428400d10f8851f391cb87fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b5dd884c4-fbmq2"
Nov 5 23:40:18.845022 kubelet[2683]: E1105 23:40:18.845012 2683 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fa0e943e85259d3bf284299efaec706be67762d428400d10f8851f391cb87fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b5dd884c4-fbmq2"
Nov 5 23:40:18.845088 kubelet[2683]: E1105 23:40:18.845064 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b5dd884c4-fbmq2_calico-apiserver(212ccd9f-8d71-41e6-a6d1-799624e298a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b5dd884c4-fbmq2_calico-apiserver(212ccd9f-8d71-41e6-a6d1-799624e298a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fa0e943e85259d3bf284299efaec706be67762d428400d10f8851f391cb87fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b5dd884c4-fbmq2" podUID="212ccd9f-8d71-41e6-a6d1-799624e298a9"
Nov 5 23:40:18.848336 containerd[1521]: time="2025-11-05T23:40:18.848298444Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-695dd765c9-zlqm4,Uid:f4a9be9c-3557-49ca-8c15-d14ae7737d6b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8c9f04e663e8da2f139479cb01baa4e2fcd6ececf134c7fa22f6fd54bcf58dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 23:40:18.848536 kubelet[2683]: E1105 23:40:18.848503 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8c9f04e663e8da2f139479cb01baa4e2fcd6ececf134c7fa22f6fd54bcf58dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 23:40:18.848573 kubelet[2683]: E1105 23:40:18.848546 2683 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8c9f04e663e8da2f139479cb01baa4e2fcd6ececf134c7fa22f6fd54bcf58dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-695dd765c9-zlqm4"
Nov 5 23:40:18.848573 kubelet[2683]: E1105 23:40:18.848566 2683 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8c9f04e663e8da2f139479cb01baa4e2fcd6ececf134c7fa22f6fd54bcf58dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-695dd765c9-zlqm4"
Nov 5 23:40:18.849311 kubelet[2683]: E1105 23:40:18.848601 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-695dd765c9-zlqm4_calico-system(f4a9be9c-3557-49ca-8c15-d14ae7737d6b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-695dd765c9-zlqm4_calico-system(f4a9be9c-3557-49ca-8c15-d14ae7737d6b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8c9f04e663e8da2f139479cb01baa4e2fcd6ececf134c7fa22f6fd54bcf58dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-695dd765c9-zlqm4" podUID="f4a9be9c-3557-49ca-8c15-d14ae7737d6b"
Nov 5 23:40:18.850229 containerd[1521]: time="2025-11-05T23:40:18.850115819Z" level=error msg="Failed to destroy network for sandbox \"3a70fed70f5208608c4dd2ab0cf9bb75370cd9eff6d77637cb7791449567c7d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 23:40:18.850294 containerd[1521]: time="2025-11-05T23:40:18.850270904Z" level=error msg="Failed to destroy network for sandbox \"0513c6ea9531506a09bc353d99ce7eddd97994aa0ae540e82071ab977be5d2ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 23:40:18.851279 containerd[1521]: time="2025-11-05T23:40:18.851193612Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wrn7f,Uid:17800fb7-e812-49ed-8d87-f115a699b129,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a70fed70f5208608c4dd2ab0cf9bb75370cd9eff6d77637cb7791449567c7d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 23:40:18.851564 kubelet[2683]: E1105 23:40:18.851537 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a70fed70f5208608c4dd2ab0cf9bb75370cd9eff6d77637cb7791449567c7d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 23:40:18.851635 kubelet[2683]: E1105 23:40:18.851578 2683 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a70fed70f5208608c4dd2ab0cf9bb75370cd9eff6d77637cb7791449567c7d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wrn7f"
Nov 5 23:40:18.851635 kubelet[2683]: E1105 23:40:18.851593 2683 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a70fed70f5208608c4dd2ab0cf9bb75370cd9eff6d77637cb7791449567c7d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wrn7f"
Nov 5 23:40:18.851716 kubelet[2683]: E1105 23:40:18.851648 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-wrn7f_calico-system(17800fb7-e812-49ed-8d87-f115a699b129)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-wrn7f_calico-system(17800fb7-e812-49ed-8d87-f115a699b129)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a70fed70f5208608c4dd2ab0cf9bb75370cd9eff6d77637cb7791449567c7d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-wrn7f" podUID="17800fb7-e812-49ed-8d87-f115a699b129"
Nov 5 23:40:18.852027 containerd[1521]: time="2025-11-05T23:40:18.851997717Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ddqx2,Uid:fd51956a-08f9-406d-bffa-5b4a2fd29274,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0513c6ea9531506a09bc353d99ce7eddd97994aa0ae540e82071ab977be5d2ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 23:40:18.852148 kubelet[2683]: E1105 23:40:18.852126 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0513c6ea9531506a09bc353d99ce7eddd97994aa0ae540e82071ab977be5d2ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 23:40:18.852194 kubelet[2683]: E1105 23:40:18.852166 2683 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0513c6ea9531506a09bc353d99ce7eddd97994aa0ae540e82071ab977be5d2ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ddqx2"
Nov 5 23:40:18.852194 kubelet[2683]: E1105 23:40:18.852182 2683 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"0513c6ea9531506a09bc353d99ce7eddd97994aa0ae540e82071ab977be5d2ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ddqx2" Nov 5 23:40:18.852249 kubelet[2683]: E1105 23:40:18.852226 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ddqx2_kube-system(fd51956a-08f9-406d-bffa-5b4a2fd29274)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ddqx2_kube-system(fd51956a-08f9-406d-bffa-5b4a2fd29274)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0513c6ea9531506a09bc353d99ce7eddd97994aa0ae540e82071ab977be5d2ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ddqx2" podUID="fd51956a-08f9-406d-bffa-5b4a2fd29274" Nov 5 23:40:19.617784 systemd[1]: run-netns-cni\x2ddbb55e02\x2d5390\x2d6bf7\x2d57d8\x2d315b219fa6f4.mount: Deactivated successfully. Nov 5 23:40:20.281921 systemd[1]: Created slice kubepods-besteffort-pod99b6a710_372d_4a19_969a_3a96da2ba20c.slice - libcontainer container kubepods-besteffort-pod99b6a710_372d_4a19_969a_3a96da2ba20c.slice. 
Nov 5 23:40:20.287016 containerd[1521]: time="2025-11-05T23:40:20.286975442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kzcdk,Uid:99b6a710-372d-4a19-969a-3a96da2ba20c,Namespace:calico-system,Attempt:0,}" Nov 5 23:40:20.337314 containerd[1521]: time="2025-11-05T23:40:20.337259705Z" level=error msg="Failed to destroy network for sandbox \"34a28c3a05374bcb9c4992252c94e27d35b641916ae3171139688e5187011673\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:40:20.339074 systemd[1]: run-netns-cni\x2d621e050a\x2d3b19\x2d223e\x2de7a6\x2d283cfd3501aa.mount: Deactivated successfully. Nov 5 23:40:20.341444 kubelet[2683]: E1105 23:40:20.339261 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34a28c3a05374bcb9c4992252c94e27d35b641916ae3171139688e5187011673\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:40:20.341444 kubelet[2683]: E1105 23:40:20.339318 2683 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34a28c3a05374bcb9c4992252c94e27d35b641916ae3171139688e5187011673\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kzcdk" Nov 5 23:40:20.341444 kubelet[2683]: E1105 23:40:20.339381 2683 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34a28c3a05374bcb9c4992252c94e27d35b641916ae3171139688e5187011673\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kzcdk" Nov 5 23:40:20.341740 containerd[1521]: time="2025-11-05T23:40:20.339053833Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kzcdk,Uid:99b6a710-372d-4a19-969a-3a96da2ba20c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"34a28c3a05374bcb9c4992252c94e27d35b641916ae3171139688e5187011673\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:40:20.341801 kubelet[2683]: E1105 23:40:20.339626 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kzcdk_calico-system(99b6a710-372d-4a19-969a-3a96da2ba20c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kzcdk_calico-system(99b6a710-372d-4a19-969a-3a96da2ba20c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34a28c3a05374bcb9c4992252c94e27d35b641916ae3171139688e5187011673\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kzcdk" podUID="99b6a710-372d-4a19-969a-3a96da2ba20c" Nov 5 23:40:22.617724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4151441372.mount: Deactivated successfully. 
Nov 5 23:40:22.828128 containerd[1521]: time="2025-11-05T23:40:22.828062637Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:40:22.828700 containerd[1521]: time="2025-11-05T23:40:22.828664201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 5 23:40:22.830110 containerd[1521]: time="2025-11-05T23:40:22.830065011Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:40:22.832338 containerd[1521]: time="2025-11-05T23:40:22.832310148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:40:22.832930 containerd[1521]: time="2025-11-05T23:40:22.832897152Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.442407017s" Nov 5 23:40:22.832930 containerd[1521]: time="2025-11-05T23:40:22.832928352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 5 23:40:22.850568 containerd[1521]: time="2025-11-05T23:40:22.850517757Z" level=info msg="CreateContainer within sandbox \"0e85d8c4e782add68fa0234fad77847b40b8f1d4e04861ab66a662d13657c06d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 23:40:22.859428 containerd[1521]: time="2025-11-05T23:40:22.859149619Z" level=info msg="Container 
d9cc2e4b3eb5dbd5632dd503009f14bcfae080b2528a5afb46eca8ce6aceca83: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:40:22.871847 containerd[1521]: time="2025-11-05T23:40:22.871497067Z" level=info msg="CreateContainer within sandbox \"0e85d8c4e782add68fa0234fad77847b40b8f1d4e04861ab66a662d13657c06d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d9cc2e4b3eb5dbd5632dd503009f14bcfae080b2528a5afb46eca8ce6aceca83\"" Nov 5 23:40:22.873500 containerd[1521]: time="2025-11-05T23:40:22.872142712Z" level=info msg="StartContainer for \"d9cc2e4b3eb5dbd5632dd503009f14bcfae080b2528a5afb46eca8ce6aceca83\"" Nov 5 23:40:22.873791 containerd[1521]: time="2025-11-05T23:40:22.873761083Z" level=info msg="connecting to shim d9cc2e4b3eb5dbd5632dd503009f14bcfae080b2528a5afb46eca8ce6aceca83" address="unix:///run/containerd/s/8789ed075c6de33ac3b32b6b89e63caf850f3f56675889e6467e168f0b435433" protocol=ttrpc version=3 Nov 5 23:40:22.931621 systemd[1]: Started cri-containerd-d9cc2e4b3eb5dbd5632dd503009f14bcfae080b2528a5afb46eca8ce6aceca83.scope - libcontainer container d9cc2e4b3eb5dbd5632dd503009f14bcfae080b2528a5afb46eca8ce6aceca83. Nov 5 23:40:22.987161 containerd[1521]: time="2025-11-05T23:40:22.987044092Z" level=info msg="StartContainer for \"d9cc2e4b3eb5dbd5632dd503009f14bcfae080b2528a5afb46eca8ce6aceca83\" returns successfully" Nov 5 23:40:23.099448 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 23:40:23.099595 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 5 23:40:23.403129 kubelet[2683]: I1105 23:40:23.402431 2683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4a9be9c-3557-49ca-8c15-d14ae7737d6b-whisker-ca-bundle\") pod \"f4a9be9c-3557-49ca-8c15-d14ae7737d6b\" (UID: \"f4a9be9c-3557-49ca-8c15-d14ae7737d6b\") " Nov 5 23:40:23.403129 kubelet[2683]: I1105 23:40:23.402506 2683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqmb9\" (UniqueName: \"kubernetes.io/projected/f4a9be9c-3557-49ca-8c15-d14ae7737d6b-kube-api-access-xqmb9\") pod \"f4a9be9c-3557-49ca-8c15-d14ae7737d6b\" (UID: \"f4a9be9c-3557-49ca-8c15-d14ae7737d6b\") " Nov 5 23:40:23.403129 kubelet[2683]: I1105 23:40:23.402537 2683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f4a9be9c-3557-49ca-8c15-d14ae7737d6b-whisker-backend-key-pair\") pod \"f4a9be9c-3557-49ca-8c15-d14ae7737d6b\" (UID: \"f4a9be9c-3557-49ca-8c15-d14ae7737d6b\") " Nov 5 23:40:23.416025 kubelet[2683]: I1105 23:40:23.415941 2683 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a9be9c-3557-49ca-8c15-d14ae7737d6b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f4a9be9c-3557-49ca-8c15-d14ae7737d6b" (UID: "f4a9be9c-3557-49ca-8c15-d14ae7737d6b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 23:40:23.416363 kubelet[2683]: I1105 23:40:23.416254 2683 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4a9be9c-3557-49ca-8c15-d14ae7737d6b-kube-api-access-xqmb9" (OuterVolumeSpecName: "kube-api-access-xqmb9") pod "f4a9be9c-3557-49ca-8c15-d14ae7737d6b" (UID: "f4a9be9c-3557-49ca-8c15-d14ae7737d6b"). InnerVolumeSpecName "kube-api-access-xqmb9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 23:40:23.429484 kubelet[2683]: I1105 23:40:23.428842 2683 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4a9be9c-3557-49ca-8c15-d14ae7737d6b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f4a9be9c-3557-49ca-8c15-d14ae7737d6b" (UID: "f4a9be9c-3557-49ca-8c15-d14ae7737d6b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 23:40:23.434109 kubelet[2683]: I1105 23:40:23.434013 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nwvzt" podStartSLOduration=1.337769606 podStartE2EDuration="13.433997637s" podCreationTimestamp="2025-11-05 23:40:10 +0000 UTC" firstStartedPulling="2025-11-05 23:40:10.737355966 +0000 UTC m=+21.565248110" lastFinishedPulling="2025-11-05 23:40:22.833583997 +0000 UTC m=+33.661476141" observedRunningTime="2025-11-05 23:40:23.432593067 +0000 UTC m=+34.260485211" watchObservedRunningTime="2025-11-05 23:40:23.433997637 +0000 UTC m=+34.261889741" Nov 5 23:40:23.439462 systemd[1]: Removed slice kubepods-besteffort-podf4a9be9c_3557_49ca_8c15_d14ae7737d6b.slice - libcontainer container kubepods-besteffort-podf4a9be9c_3557_49ca_8c15_d14ae7737d6b.slice. Nov 5 23:40:23.501512 systemd[1]: Created slice kubepods-besteffort-pod013bb512_5088_47f3_94c1_e2243634b474.slice - libcontainer container kubepods-besteffort-pod013bb512_5088_47f3_94c1_e2243634b474.slice. 
Nov 5 23:40:23.502946 kubelet[2683]: I1105 23:40:23.502909 2683 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4a9be9c-3557-49ca-8c15-d14ae7737d6b-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 5 23:40:23.503008 kubelet[2683]: I1105 23:40:23.502951 2683 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xqmb9\" (UniqueName: \"kubernetes.io/projected/f4a9be9c-3557-49ca-8c15-d14ae7737d6b-kube-api-access-xqmb9\") on node \"localhost\" DevicePath \"\"" Nov 5 23:40:23.503008 kubelet[2683]: I1105 23:40:23.502963 2683 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f4a9be9c-3557-49ca-8c15-d14ae7737d6b-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 5 23:40:23.578681 containerd[1521]: time="2025-11-05T23:40:23.578628601Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9cc2e4b3eb5dbd5632dd503009f14bcfae080b2528a5afb46eca8ce6aceca83\" id:\"74baff246216915af7fb338741f462595373911dce41480779588a8b558d690d\" pid:3817 exit_status:1 exited_at:{seconds:1762386023 nanos:578179118}" Nov 5 23:40:23.603309 kubelet[2683]: I1105 23:40:23.603250 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/013bb512-5088-47f3-94c1-e2243634b474-whisker-ca-bundle\") pod \"whisker-685d97d67-g62jm\" (UID: \"013bb512-5088-47f3-94c1-e2243634b474\") " pod="calico-system/whisker-685d97d67-g62jm" Nov 5 23:40:23.603309 kubelet[2683]: I1105 23:40:23.603302 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hlvf\" (UniqueName: \"kubernetes.io/projected/013bb512-5088-47f3-94c1-e2243634b474-kube-api-access-7hlvf\") pod \"whisker-685d97d67-g62jm\" (UID: \"013bb512-5088-47f3-94c1-e2243634b474\") " 
pod="calico-system/whisker-685d97d67-g62jm" Nov 5 23:40:23.603470 kubelet[2683]: I1105 23:40:23.603341 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/013bb512-5088-47f3-94c1-e2243634b474-whisker-backend-key-pair\") pod \"whisker-685d97d67-g62jm\" (UID: \"013bb512-5088-47f3-94c1-e2243634b474\") " pod="calico-system/whisker-685d97d67-g62jm" Nov 5 23:40:23.618606 systemd[1]: var-lib-kubelet-pods-f4a9be9c\x2d3557\x2d49ca\x2d8c15\x2dd14ae7737d6b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxqmb9.mount: Deactivated successfully. Nov 5 23:40:23.618704 systemd[1]: var-lib-kubelet-pods-f4a9be9c\x2d3557\x2d49ca\x2d8c15\x2dd14ae7737d6b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 5 23:40:23.805462 containerd[1521]: time="2025-11-05T23:40:23.805373255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-685d97d67-g62jm,Uid:013bb512-5088-47f3-94c1-e2243634b474,Namespace:calico-system,Attempt:0,}" Nov 5 23:40:23.968549 systemd-networkd[1427]: calidb8c0dc3408: Link UP Nov 5 23:40:23.968734 systemd-networkd[1427]: calidb8c0dc3408: Gained carrier Nov 5 23:40:23.981624 containerd[1521]: 2025-11-05 23:40:23.826 [INFO][3831] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 23:40:23.981624 containerd[1521]: 2025-11-05 23:40:23.859 [INFO][3831] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--685d97d67--g62jm-eth0 whisker-685d97d67- calico-system 013bb512-5088-47f3-94c1-e2243634b474 878 0 2025-11-05 23:40:23 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:685d97d67 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-685d97d67-g62jm eth0 whisker [] [] 
[kns.calico-system ksa.calico-system.whisker] calidb8c0dc3408 [] [] }} ContainerID="417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a" Namespace="calico-system" Pod="whisker-685d97d67-g62jm" WorkloadEndpoint="localhost-k8s-whisker--685d97d67--g62jm-" Nov 5 23:40:23.981624 containerd[1521]: 2025-11-05 23:40:23.859 [INFO][3831] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a" Namespace="calico-system" Pod="whisker-685d97d67-g62jm" WorkloadEndpoint="localhost-k8s-whisker--685d97d67--g62jm-eth0" Nov 5 23:40:23.981624 containerd[1521]: 2025-11-05 23:40:23.924 [INFO][3846] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a" HandleID="k8s-pod-network.417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a" Workload="localhost-k8s-whisker--685d97d67--g62jm-eth0" Nov 5 23:40:23.982021 containerd[1521]: 2025-11-05 23:40:23.924 [INFO][3846] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a" HandleID="k8s-pod-network.417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a" Workload="localhost-k8s-whisker--685d97d67--g62jm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400048f950), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-685d97d67-g62jm", "timestamp":"2025-11-05 23:40:23.924507362 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:40:23.982021 containerd[1521]: 2025-11-05 23:40:23.924 [INFO][3846] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 5 23:40:23.982021 containerd[1521]: 2025-11-05 23:40:23.924 [INFO][3846] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 23:40:23.982021 containerd[1521]: 2025-11-05 23:40:23.924 [INFO][3846] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 23:40:23.982021 containerd[1521]: 2025-11-05 23:40:23.935 [INFO][3846] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a" host="localhost" Nov 5 23:40:23.982021 containerd[1521]: 2025-11-05 23:40:23.940 [INFO][3846] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 23:40:23.982021 containerd[1521]: 2025-11-05 23:40:23.945 [INFO][3846] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 23:40:23.982021 containerd[1521]: 2025-11-05 23:40:23.947 [INFO][3846] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 23:40:23.982021 containerd[1521]: 2025-11-05 23:40:23.949 [INFO][3846] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 23:40:23.982021 containerd[1521]: 2025-11-05 23:40:23.949 [INFO][3846] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a" host="localhost" Nov 5 23:40:23.982220 containerd[1521]: 2025-11-05 23:40:23.951 [INFO][3846] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a Nov 5 23:40:23.982220 containerd[1521]: 2025-11-05 23:40:23.954 [INFO][3846] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a" host="localhost" Nov 5 23:40:23.982220 containerd[1521]: 2025-11-05 23:40:23.959 [INFO][3846] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a" host="localhost" Nov 5 23:40:23.982220 containerd[1521]: 2025-11-05 23:40:23.959 [INFO][3846] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a" host="localhost" Nov 5 23:40:23.982220 containerd[1521]: 2025-11-05 23:40:23.959 [INFO][3846] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 23:40:23.982220 containerd[1521]: 2025-11-05 23:40:23.960 [INFO][3846] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a" HandleID="k8s-pod-network.417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a" Workload="localhost-k8s-whisker--685d97d67--g62jm-eth0" Nov 5 23:40:23.982337 containerd[1521]: 2025-11-05 23:40:23.962 [INFO][3831] cni-plugin/k8s.go 418: Populated endpoint ContainerID="417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a" Namespace="calico-system" Pod="whisker-685d97d67-g62jm" WorkloadEndpoint="localhost-k8s-whisker--685d97d67--g62jm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--685d97d67--g62jm-eth0", GenerateName:"whisker-685d97d67-", Namespace:"calico-system", SelfLink:"", UID:"013bb512-5088-47f3-94c1-e2243634b474", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 40, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"685d97d67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-685d97d67-g62jm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidb8c0dc3408", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:40:23.982337 containerd[1521]: 2025-11-05 23:40:23.962 [INFO][3831] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a" Namespace="calico-system" Pod="whisker-685d97d67-g62jm" WorkloadEndpoint="localhost-k8s-whisker--685d97d67--g62jm-eth0" Nov 5 23:40:23.982459 containerd[1521]: 2025-11-05 23:40:23.962 [INFO][3831] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb8c0dc3408 ContainerID="417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a" Namespace="calico-system" Pod="whisker-685d97d67-g62jm" WorkloadEndpoint="localhost-k8s-whisker--685d97d67--g62jm-eth0" Nov 5 23:40:23.982459 containerd[1521]: 2025-11-05 23:40:23.968 [INFO][3831] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a" Namespace="calico-system" Pod="whisker-685d97d67-g62jm" WorkloadEndpoint="localhost-k8s-whisker--685d97d67--g62jm-eth0" Nov 5 23:40:23.982513 containerd[1521]: 2025-11-05 23:40:23.969 [INFO][3831] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a" 
Namespace="calico-system" Pod="whisker-685d97d67-g62jm" WorkloadEndpoint="localhost-k8s-whisker--685d97d67--g62jm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--685d97d67--g62jm-eth0", GenerateName:"whisker-685d97d67-", Namespace:"calico-system", SelfLink:"", UID:"013bb512-5088-47f3-94c1-e2243634b474", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 40, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"685d97d67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a", Pod:"whisker-685d97d67-g62jm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidb8c0dc3408", MAC:"76:4d:5e:e6:d1:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:40:23.982577 containerd[1521]: 2025-11-05 23:40:23.978 [INFO][3831] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a" Namespace="calico-system" Pod="whisker-685d97d67-g62jm" WorkloadEndpoint="localhost-k8s-whisker--685d97d67--g62jm-eth0" Nov 5 23:40:24.025084 containerd[1521]: time="2025-11-05T23:40:24.025017775Z" 
level=info msg="connecting to shim 417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a" address="unix:///run/containerd/s/e5edeaca4698168e605af69e014cdcfc743f932e1d6d778fc5de7dbede3fe24b" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:40:24.055572 systemd[1]: Started cri-containerd-417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a.scope - libcontainer container 417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a. Nov 5 23:40:24.067749 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 23:40:24.086336 containerd[1521]: time="2025-11-05T23:40:24.086274028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-685d97d67-g62jm,Uid:013bb512-5088-47f3-94c1-e2243634b474,Namespace:calico-system,Attempt:0,} returns sandbox id \"417273864ddd01acbe26f8e5c6d0fa5a0a81b969c8553568636451989342419a\"" Nov 5 23:40:24.087741 containerd[1521]: time="2025-11-05T23:40:24.087693278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 23:40:24.313457 containerd[1521]: time="2025-11-05T23:40:24.313283361Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:40:24.314510 containerd[1521]: time="2025-11-05T23:40:24.314451568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 23:40:24.314510 containerd[1521]: time="2025-11-05T23:40:24.314493529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 23:40:24.316684 kubelet[2683]: E1105 23:40:24.316613 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 5 23:40:24.318130 kubelet[2683]: E1105 23:40:24.318088 2683 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 5 23:40:24.329560 kubelet[2683]: E1105 23:40:24.329503 2683 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:aff672a146af41bb8a3ea782c09d4a4b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7hlvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-685d97d67-g62jm_calico-system(013bb512-5088-47f3-94c1-e2243634b474): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 5 23:40:24.332201 containerd[1521]: time="2025-11-05T23:40:24.331535964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 5 23:40:24.546097 containerd[1521]: time="2025-11-05T23:40:24.546050172Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 23:40:24.599470 containerd[1521]: time="2025-11-05T23:40:24.597984562Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 5 23:40:24.599470 containerd[1521]: time="2025-11-05T23:40:24.598041403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 5 23:40:24.599676 kubelet[2683]: E1105 23:40:24.598235 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 5 23:40:24.599676 kubelet[2683]: E1105 23:40:24.598289 2683 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 5 23:40:24.600354 kubelet[2683]: E1105 23:40:24.600100 2683 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hlvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-685d97d67-g62jm_calico-system(013bb512-5088-47f3-94c1-e2243634b474): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 5 23:40:24.601472 kubelet[2683]: E1105 23:40:24.601420 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-685d97d67-g62jm" podUID="013bb512-5088-47f3-94c1-e2243634b474"
Nov 5 23:40:24.638387 containerd[1521]: time="2025-11-05T23:40:24.638340715Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9cc2e4b3eb5dbd5632dd503009f14bcfae080b2528a5afb46eca8ce6aceca83\" id:\"45b8a7fdd0c73debdc0486c6b8e1afb1ff43e726808cb282567bb6f63ffaa5a2\" pid:3917 exit_status:1 exited_at:{seconds:1762386024 nanos:637923552}"
Nov 5 23:40:24.867457 systemd-networkd[1427]: vxlan.calico: Link UP
Nov 5 23:40:24.867463 systemd-networkd[1427]: vxlan.calico: Gained carrier
Nov 5 23:40:25.283221 kubelet[2683]: I1105 23:40:25.283162 2683 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4a9be9c-3557-49ca-8c15-d14ae7737d6b" path="/var/lib/kubelet/pods/f4a9be9c-3557-49ca-8c15-d14ae7737d6b/volumes"
Nov 5 23:40:25.413191 kubelet[2683]: E1105 23:40:25.413143 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-685d97d67-g62jm" podUID="013bb512-5088-47f3-94c1-e2243634b474"
Nov 5 23:40:25.495819 containerd[1521]: time="2025-11-05T23:40:25.495779490Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9cc2e4b3eb5dbd5632dd503009f14bcfae080b2528a5afb46eca8ce6aceca83\" id:\"c21d0c2f9207a7150593bbc59860af10eb3c899db28475b6e9fcd19d8553bb5f\" pid:4141 exit_status:1 exited_at:{seconds:1762386025 nanos:495469888}"
Nov 5 23:40:25.863867 systemd-networkd[1427]: calidb8c0dc3408: Gained IPv6LL
Nov 5 23:40:26.246586 systemd-networkd[1427]: vxlan.calico: Gained IPv6LL
Nov 5 23:40:29.272950 containerd[1521]: time="2025-11-05T23:40:29.272894602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58d449f5dd-whvqx,Uid:b316df6c-d77d-45e2-a17b-32be21557bd5,Namespace:calico-system,Attempt:0,}"
Nov 5 23:40:29.273412 containerd[1521]: time="2025-11-05T23:40:29.273365404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wrn7f,Uid:17800fb7-e812-49ed-8d87-f115a699b129,Namespace:calico-system,Attempt:0,}"
Nov 5 23:40:29.466931 systemd-networkd[1427]: cali85a2fb65419: Link UP
Nov 5 23:40:29.467974 systemd-networkd[1427]: cali85a2fb65419: Gained carrier
Nov 5 23:40:29.493637 containerd[1521]: 2025-11-05 23:40:29.323 [INFO][4162] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--wrn7f-eth0 goldmane-666569f655- calico-system 17800fb7-e812-49ed-8d87-f115a699b129 817 0 2025-11-05 23:40:08 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-wrn7f eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali85a2fb65419 [] [] }} ContainerID="c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465" Namespace="calico-system" Pod="goldmane-666569f655-wrn7f" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--wrn7f-"
Nov 5 23:40:29.493637 containerd[1521]: 2025-11-05 23:40:29.323 [INFO][4162] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465" Namespace="calico-system" Pod="goldmane-666569f655-wrn7f" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--wrn7f-eth0"
Nov 5 23:40:29.493637 containerd[1521]: 2025-11-05 23:40:29.366 [INFO][4189] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465" HandleID="k8s-pod-network.c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465" Workload="localhost-k8s-goldmane--666569f655--wrn7f-eth0"
Nov 5 23:40:29.493866 containerd[1521]: 2025-11-05 23:40:29.367 [INFO][4189] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465" HandleID="k8s-pod-network.c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465" Workload="localhost-k8s-goldmane--666569f655--wrn7f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137740), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-wrn7f", "timestamp":"2025-11-05 23:40:29.366512632 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 5 23:40:29.493866 containerd[1521]: 2025-11-05 23:40:29.367 [INFO][4189] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 5 23:40:29.493866 containerd[1521]: 2025-11-05 23:40:29.367 [INFO][4189] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 5 23:40:29.493866 containerd[1521]: 2025-11-05 23:40:29.367 [INFO][4189] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Nov 5 23:40:29.493866 containerd[1521]: 2025-11-05 23:40:29.381 [INFO][4189] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465" host="localhost"
Nov 5 23:40:29.493866 containerd[1521]: 2025-11-05 23:40:29.398 [INFO][4189] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Nov 5 23:40:29.493866 containerd[1521]: 2025-11-05 23:40:29.422 [INFO][4189] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Nov 5 23:40:29.493866 containerd[1521]: 2025-11-05 23:40:29.429 [INFO][4189] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Nov 5 23:40:29.493866 containerd[1521]: 2025-11-05 23:40:29.435 [INFO][4189] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Nov 5 23:40:29.493866 containerd[1521]: 2025-11-05 23:40:29.435 [INFO][4189] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465" host="localhost"
Nov 5 23:40:29.494062 containerd[1521]: 2025-11-05 23:40:29.439 [INFO][4189] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465
Nov 5 23:40:29.494062 containerd[1521]: 2025-11-05 23:40:29.446 [INFO][4189] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465" host="localhost"
Nov 5 23:40:29.494062 containerd[1521]: 2025-11-05 23:40:29.455 [INFO][4189] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465" host="localhost"
Nov 5 23:40:29.494062 containerd[1521]: 2025-11-05 23:40:29.455 [INFO][4189] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465" host="localhost"
Nov 5 23:40:29.494062 containerd[1521]: 2025-11-05 23:40:29.455 [INFO][4189] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 5 23:40:29.494062 containerd[1521]: 2025-11-05 23:40:29.455 [INFO][4189] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465" HandleID="k8s-pod-network.c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465" Workload="localhost-k8s-goldmane--666569f655--wrn7f-eth0"
Nov 5 23:40:29.494168 containerd[1521]: 2025-11-05 23:40:29.463 [INFO][4162] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465" Namespace="calico-system" Pod="goldmane-666569f655-wrn7f" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--wrn7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--wrn7f-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"17800fb7-e812-49ed-8d87-f115a699b129", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 40, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-wrn7f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali85a2fb65419", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 5 23:40:29.494168 containerd[1521]: 2025-11-05 23:40:29.463 [INFO][4162] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465" Namespace="calico-system" Pod="goldmane-666569f655-wrn7f" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--wrn7f-eth0"
Nov 5 23:40:29.494233 containerd[1521]: 2025-11-05 23:40:29.463 [INFO][4162] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85a2fb65419 ContainerID="c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465" Namespace="calico-system" Pod="goldmane-666569f655-wrn7f" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--wrn7f-eth0"
Nov 5 23:40:29.494233 containerd[1521]: 2025-11-05 23:40:29.469 [INFO][4162] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465" Namespace="calico-system" Pod="goldmane-666569f655-wrn7f" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--wrn7f-eth0"
Nov 5 23:40:29.494271 containerd[1521]: 2025-11-05 23:40:29.475 [INFO][4162] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465" Namespace="calico-system" Pod="goldmane-666569f655-wrn7f" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--wrn7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--wrn7f-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"17800fb7-e812-49ed-8d87-f115a699b129", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 40, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465", Pod:"goldmane-666569f655-wrn7f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali85a2fb65419", MAC:"ae:42:3c:7f:04:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 5 23:40:29.494317 containerd[1521]: 2025-11-05 23:40:29.486 [INFO][4162] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465" Namespace="calico-system" Pod="goldmane-666569f655-wrn7f" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--wrn7f-eth0"
Nov 5 23:40:29.518870 containerd[1521]: time="2025-11-05T23:40:29.518251164Z" level=info msg="connecting to shim c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465" address="unix:///run/containerd/s/ff4f48a7b36c1566ae7f3ecd57059d7874bb4103fa4598c16c6629c7536fac60" namespace=k8s.io protocol=ttrpc version=3
Nov 5 23:40:29.540563 systemd[1]: Started cri-containerd-c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465.scope - libcontainer container c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465.
Nov 5 23:40:29.544538 systemd-networkd[1427]: cali94c6fc694f4: Link UP
Nov 5 23:40:29.545239 systemd-networkd[1427]: cali94c6fc694f4: Gained carrier
Nov 5 23:40:29.563486 containerd[1521]: 2025-11-05 23:40:29.327 [INFO][4169] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--58d449f5dd--whvqx-eth0 calico-kube-controllers-58d449f5dd- calico-system b316df6c-d77d-45e2-a17b-32be21557bd5 814 0 2025-11-05 23:40:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58d449f5dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-58d449f5dd-whvqx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali94c6fc694f4 [] [] }} ContainerID="2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b" Namespace="calico-system" Pod="calico-kube-controllers-58d449f5dd-whvqx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58d449f5dd--whvqx-"
Nov 5 23:40:29.563486 containerd[1521]: 2025-11-05 23:40:29.328 [INFO][4169] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b" Namespace="calico-system" Pod="calico-kube-controllers-58d449f5dd-whvqx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58d449f5dd--whvqx-eth0"
Nov 5 23:40:29.563486 containerd[1521]: 2025-11-05 23:40:29.369 [INFO][4195] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b" HandleID="k8s-pod-network.2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b" Workload="localhost-k8s-calico--kube--controllers--58d449f5dd--whvqx-eth0"
Nov 5 23:40:29.563676 containerd[1521]: 2025-11-05 23:40:29.370 [INFO][4195] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b" HandleID="k8s-pod-network.2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b" Workload="localhost-k8s-calico--kube--controllers--58d449f5dd--whvqx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3290), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-58d449f5dd-whvqx", "timestamp":"2025-11-05 23:40:29.369109967 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 5 23:40:29.563676 containerd[1521]: 2025-11-05 23:40:29.370 [INFO][4195] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 5 23:40:29.563676 containerd[1521]: 2025-11-05 23:40:29.455 [INFO][4195] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 5 23:40:29.563676 containerd[1521]: 2025-11-05 23:40:29.455 [INFO][4195] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Nov 5 23:40:29.563676 containerd[1521]: 2025-11-05 23:40:29.484 [INFO][4195] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b" host="localhost"
Nov 5 23:40:29.563676 containerd[1521]: 2025-11-05 23:40:29.494 [INFO][4195] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Nov 5 23:40:29.563676 containerd[1521]: 2025-11-05 23:40:29.514 [INFO][4195] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Nov 5 23:40:29.563676 containerd[1521]: 2025-11-05 23:40:29.516 [INFO][4195] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Nov 5 23:40:29.563676 containerd[1521]: 2025-11-05 23:40:29.519 [INFO][4195] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Nov 5 23:40:29.563676 containerd[1521]: 2025-11-05 23:40:29.519 [INFO][4195] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b" host="localhost"
Nov 5 23:40:29.563885 containerd[1521]: 2025-11-05 23:40:29.521 [INFO][4195] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b
Nov 5 23:40:29.563885 containerd[1521]: 2025-11-05 23:40:29.528 [INFO][4195] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b" host="localhost"
Nov 5 23:40:29.563885 containerd[1521]: 2025-11-05 23:40:29.538 [INFO][4195] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b" host="localhost"
Nov 5 23:40:29.563885 containerd[1521]: 2025-11-05 23:40:29.538 [INFO][4195] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b" host="localhost"
Nov 5 23:40:29.563885 containerd[1521]: 2025-11-05 23:40:29.538 [INFO][4195] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 5 23:40:29.563885 containerd[1521]: 2025-11-05 23:40:29.538 [INFO][4195] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b" HandleID="k8s-pod-network.2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b" Workload="localhost-k8s-calico--kube--controllers--58d449f5dd--whvqx-eth0"
Nov 5 23:40:29.563989 containerd[1521]: 2025-11-05 23:40:29.541 [INFO][4169] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b" Namespace="calico-system" Pod="calico-kube-controllers-58d449f5dd-whvqx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58d449f5dd--whvqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58d449f5dd--whvqx-eth0", GenerateName:"calico-kube-controllers-58d449f5dd-", Namespace:"calico-system", SelfLink:"", UID:"b316df6c-d77d-45e2-a17b-32be21557bd5", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 40, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58d449f5dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-58d449f5dd-whvqx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali94c6fc694f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 5 23:40:29.564037 containerd[1521]: 2025-11-05 23:40:29.542 [INFO][4169] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b" Namespace="calico-system" Pod="calico-kube-controllers-58d449f5dd-whvqx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58d449f5dd--whvqx-eth0"
Nov 5 23:40:29.564037 containerd[1521]: 2025-11-05 23:40:29.542 [INFO][4169] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94c6fc694f4 ContainerID="2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b" Namespace="calico-system" Pod="calico-kube-controllers-58d449f5dd-whvqx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58d449f5dd--whvqx-eth0"
Nov 5 23:40:29.564037 containerd[1521]: 2025-11-05 23:40:29.545 [INFO][4169] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b" Namespace="calico-system" Pod="calico-kube-controllers-58d449f5dd-whvqx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58d449f5dd--whvqx-eth0"
Nov 5 23:40:29.564090 containerd[1521]: 2025-11-05 23:40:29.546 [INFO][4169] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b" Namespace="calico-system" Pod="calico-kube-controllers-58d449f5dd-whvqx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58d449f5dd--whvqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58d449f5dd--whvqx-eth0", GenerateName:"calico-kube-controllers-58d449f5dd-", Namespace:"calico-system", SelfLink:"", UID:"b316df6c-d77d-45e2-a17b-32be21557bd5", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 40, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58d449f5dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b", Pod:"calico-kube-controllers-58d449f5dd-whvqx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali94c6fc694f4", MAC:"36:bb:d2:a8:b8:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 5 23:40:29.564136 containerd[1521]: 2025-11-05 23:40:29.560 [INFO][4169] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b" Namespace="calico-system" Pod="calico-kube-controllers-58d449f5dd-whvqx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58d449f5dd--whvqx-eth0"
Nov 5 23:40:29.592123 systemd[1]: Started sshd@7-10.0.0.43:22-10.0.0.1:55424.service - OpenSSH per-connection server daemon (10.0.0.1:55424).
Nov 5 23:40:29.598678 containerd[1521]: time="2025-11-05T23:40:29.598631196Z" level=info msg="connecting to shim 2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b" address="unix:///run/containerd/s/142bb0996f870fd3caa81ed2a3b995e64395fe9230acaf1658b06a066027aa82" namespace=k8s.io protocol=ttrpc version=3
Nov 5 23:40:29.598810 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 5 23:40:29.622603 systemd[1]: Started cri-containerd-2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b.scope - libcontainer container 2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b.
Nov 5 23:40:29.630722 containerd[1521]: time="2025-11-05T23:40:29.630532544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wrn7f,Uid:17800fb7-e812-49ed-8d87-f115a699b129,Namespace:calico-system,Attempt:0,} returns sandbox id \"c83bf3daa741575dca3e93d448e7832a9b715c306114c77fe068d2dd8837f465\""
Nov 5 23:40:29.642475 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 5 23:40:29.645079 containerd[1521]: time="2025-11-05T23:40:29.645039629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 5 23:40:29.667418 sshd[4277]: Accepted publickey for core from 10.0.0.1 port 55424 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI
Nov 5 23:40:29.669814 sshd-session[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 23:40:29.674787 containerd[1521]: time="2025-11-05T23:40:29.674744964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58d449f5dd-whvqx,Uid:b316df6c-d77d-45e2-a17b-32be21557bd5,Namespace:calico-system,Attempt:0,} returns sandbox id \"2d2ad6493b0558c98768017f05df3e049869d1e0a02aa9c7e5a40061be87885b\""
Nov 5 23:40:29.675345 systemd-logind[1503]: New session 8 of user core.
Nov 5 23:40:29.681565 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 5 23:40:29.843256 sshd[4324]: Connection closed by 10.0.0.1 port 55424
Nov 5 23:40:29.843524 sshd-session[4277]: pam_unix(sshd:session): session closed for user core
Nov 5 23:40:29.847734 systemd-logind[1503]: Session 8 logged out. Waiting for processes to exit.
Nov 5 23:40:29.847967 systemd[1]: sshd@7-10.0.0.43:22-10.0.0.1:55424.service: Deactivated successfully.
Nov 5 23:40:29.850343 systemd[1]: session-8.scope: Deactivated successfully.
Nov 5 23:40:29.851773 systemd-logind[1503]: Removed session 8.
Nov 5 23:40:29.855865 containerd[1521]: time="2025-11-05T23:40:29.855802148Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 23:40:29.856849 containerd[1521]: time="2025-11-05T23:40:29.856796274Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 5 23:40:29.856902 containerd[1521]: time="2025-11-05T23:40:29.856886715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 5 23:40:29.857076 kubelet[2683]: E1105 23:40:29.857040 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 5 23:40:29.857305 kubelet[2683]: E1105 23:40:29.857094 2683 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 5 23:40:29.857572 kubelet[2683]: E1105 23:40:29.857469 2683 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q466l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wrn7f_calico-system(17800fb7-e812-49ed-8d87-f115a699b129): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 23:40:29.857685 containerd[1521]: time="2025-11-05T23:40:29.857621079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 23:40:29.858980 kubelet[2683]: E1105 23:40:29.858937 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wrn7f" podUID="17800fb7-e812-49ed-8d87-f115a699b129" Nov 5 23:40:30.083273 containerd[1521]: time="2025-11-05T23:40:30.083199272Z" level=info msg="fetch failed after status: 
404 Not Found" host=ghcr.io Nov 5 23:40:30.104472 containerd[1521]: time="2025-11-05T23:40:30.104333113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 23:40:30.104472 containerd[1521]: time="2025-11-05T23:40:30.104430233Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 23:40:30.104874 kubelet[2683]: E1105 23:40:30.104676 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 23:40:30.105098 kubelet[2683]: E1105 23:40:30.105061 2683 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 23:40:30.105614 kubelet[2683]: E1105 23:40:30.105534 2683 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k9bpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-58d449f5dd-whvqx_calico-system(b316df6c-d77d-45e2-a17b-32be21557bd5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 23:40:30.107294 kubelet[2683]: E1105 23:40:30.107261 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-58d449f5dd-whvqx" podUID="b316df6c-d77d-45e2-a17b-32be21557bd5" Nov 5 23:40:30.273867 containerd[1521]: time="2025-11-05T23:40:30.273783762Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-4rxrw,Uid:43750443-b18a-48ac-8d08-d0590a9d536c,Namespace:kube-system,Attempt:0,}" Nov 5 23:40:30.275203 containerd[1521]: time="2025-11-05T23:40:30.275155570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5dd884c4-fbmq2,Uid:212ccd9f-8d71-41e6-a6d1-799624e298a9,Namespace:calico-apiserver,Attempt:0,}" Nov 5 23:40:30.391469 systemd-networkd[1427]: cali56664072d32: Link UP Nov 5 23:40:30.393510 systemd-networkd[1427]: cali56664072d32: Gained carrier Nov 5 23:40:30.406608 containerd[1521]: 2025-11-05 23:40:30.315 [INFO][4344] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--4rxrw-eth0 coredns-674b8bbfcf- kube-system 43750443-b18a-48ac-8d08-d0590a9d536c 813 0 2025-11-05 23:39:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-4rxrw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali56664072d32 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rxrw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4rxrw-" Nov 5 23:40:30.406608 containerd[1521]: 2025-11-05 23:40:30.315 [INFO][4344] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rxrw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4rxrw-eth0" Nov 5 23:40:30.406608 containerd[1521]: 2025-11-05 23:40:30.346 [INFO][4373] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17" 
HandleID="k8s-pod-network.78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17" Workload="localhost-k8s-coredns--674b8bbfcf--4rxrw-eth0" Nov 5 23:40:30.406804 containerd[1521]: 2025-11-05 23:40:30.347 [INFO][4373] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17" HandleID="k8s-pod-network.78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17" Workload="localhost-k8s-coredns--674b8bbfcf--4rxrw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001377e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-4rxrw", "timestamp":"2025-11-05 23:40:30.34684758 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:40:30.406804 containerd[1521]: 2025-11-05 23:40:30.347 [INFO][4373] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:40:30.406804 containerd[1521]: 2025-11-05 23:40:30.347 [INFO][4373] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 23:40:30.406804 containerd[1521]: 2025-11-05 23:40:30.347 [INFO][4373] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 23:40:30.406804 containerd[1521]: 2025-11-05 23:40:30.359 [INFO][4373] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17" host="localhost" Nov 5 23:40:30.406804 containerd[1521]: 2025-11-05 23:40:30.365 [INFO][4373] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 23:40:30.406804 containerd[1521]: 2025-11-05 23:40:30.370 [INFO][4373] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 23:40:30.406804 containerd[1521]: 2025-11-05 23:40:30.371 [INFO][4373] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 23:40:30.406804 containerd[1521]: 2025-11-05 23:40:30.374 [INFO][4373] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 23:40:30.406804 containerd[1521]: 2025-11-05 23:40:30.374 [INFO][4373] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17" host="localhost" Nov 5 23:40:30.407053 containerd[1521]: 2025-11-05 23:40:30.375 [INFO][4373] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17 Nov 5 23:40:30.407053 containerd[1521]: 2025-11-05 23:40:30.380 [INFO][4373] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17" host="localhost" Nov 5 23:40:30.407053 containerd[1521]: 2025-11-05 23:40:30.387 [INFO][4373] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17" host="localhost" Nov 5 23:40:30.407053 containerd[1521]: 2025-11-05 23:40:30.387 [INFO][4373] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17" host="localhost" Nov 5 23:40:30.407053 containerd[1521]: 2025-11-05 23:40:30.387 [INFO][4373] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 23:40:30.407053 containerd[1521]: 2025-11-05 23:40:30.387 [INFO][4373] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17" HandleID="k8s-pod-network.78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17" Workload="localhost-k8s-coredns--674b8bbfcf--4rxrw-eth0" Nov 5 23:40:30.407167 containerd[1521]: 2025-11-05 23:40:30.389 [INFO][4344] cni-plugin/k8s.go 418: Populated endpoint ContainerID="78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rxrw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4rxrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--4rxrw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"43750443-b18a-48ac-8d08-d0590a9d536c", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 39, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-4rxrw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56664072d32", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:40:30.407222 containerd[1521]: 2025-11-05 23:40:30.389 [INFO][4344] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rxrw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4rxrw-eth0" Nov 5 23:40:30.407222 containerd[1521]: 2025-11-05 23:40:30.389 [INFO][4344] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56664072d32 ContainerID="78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rxrw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4rxrw-eth0" Nov 5 23:40:30.407222 containerd[1521]: 2025-11-05 23:40:30.393 [INFO][4344] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rxrw" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4rxrw-eth0" Nov 5 23:40:30.407279 containerd[1521]: 2025-11-05 23:40:30.394 [INFO][4344] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rxrw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4rxrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--4rxrw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"43750443-b18a-48ac-8d08-d0590a9d536c", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 39, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17", Pod:"coredns-674b8bbfcf-4rxrw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56664072d32", MAC:"56:d1:a5:a6:89:55", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:40:30.407279 containerd[1521]: 2025-11-05 23:40:30.404 [INFO][4344] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rxrw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4rxrw-eth0" Nov 5 23:40:30.423575 kubelet[2683]: E1105 23:40:30.423526 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-58d449f5dd-whvqx" podUID="b316df6c-d77d-45e2-a17b-32be21557bd5" Nov 5 23:40:30.426419 kubelet[2683]: E1105 23:40:30.426299 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wrn7f" podUID="17800fb7-e812-49ed-8d87-f115a699b129" Nov 5 23:40:30.455166 containerd[1521]: time="2025-11-05T23:40:30.455112359Z" level=info 
msg="connecting to shim 78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17" address="unix:///run/containerd/s/1af498f854902b2bf7b6ff822b928d3a42491f5741a6417e8f053197035728ac" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:40:30.487640 systemd[1]: Started cri-containerd-78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17.scope - libcontainer container 78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17. Nov 5 23:40:30.502807 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 23:40:30.531460 systemd-networkd[1427]: calif4798d58e5c: Link UP Nov 5 23:40:30.532364 systemd-networkd[1427]: calif4798d58e5c: Gained carrier Nov 5 23:40:30.541768 containerd[1521]: time="2025-11-05T23:40:30.541724935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4rxrw,Uid:43750443-b18a-48ac-8d08-d0590a9d536c,Namespace:kube-system,Attempt:0,} returns sandbox id \"78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17\"" Nov 5 23:40:30.550547 containerd[1521]: time="2025-11-05T23:40:30.550510865Z" level=info msg="CreateContainer within sandbox \"78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 23:40:30.552046 containerd[1521]: 2025-11-05 23:40:30.324 [INFO][4355] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b5dd884c4--fbmq2-eth0 calico-apiserver-6b5dd884c4- calico-apiserver 212ccd9f-8d71-41e6-a6d1-799624e298a9 815 0 2025-11-05 23:40:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b5dd884c4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b5dd884c4-fbmq2 eth0 
calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif4798d58e5c [] [] }} ContainerID="32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47" Namespace="calico-apiserver" Pod="calico-apiserver-6b5dd884c4-fbmq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5dd884c4--fbmq2-" Nov 5 23:40:30.552046 containerd[1521]: 2025-11-05 23:40:30.324 [INFO][4355] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47" Namespace="calico-apiserver" Pod="calico-apiserver-6b5dd884c4-fbmq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5dd884c4--fbmq2-eth0" Nov 5 23:40:30.552046 containerd[1521]: 2025-11-05 23:40:30.355 [INFO][4379] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47" HandleID="k8s-pod-network.32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47" Workload="localhost-k8s-calico--apiserver--6b5dd884c4--fbmq2-eth0" Nov 5 23:40:30.552046 containerd[1521]: 2025-11-05 23:40:30.355 [INFO][4379] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47" HandleID="k8s-pod-network.32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47" Workload="localhost-k8s-calico--apiserver--6b5dd884c4--fbmq2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c680), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b5dd884c4-fbmq2", "timestamp":"2025-11-05 23:40:30.355314189 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:40:30.552046 containerd[1521]: 2025-11-05 23:40:30.355 [INFO][4379] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:40:30.552046 containerd[1521]: 2025-11-05 23:40:30.387 [INFO][4379] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 23:40:30.552046 containerd[1521]: 2025-11-05 23:40:30.387 [INFO][4379] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 23:40:30.552046 containerd[1521]: 2025-11-05 23:40:30.460 [INFO][4379] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47" host="localhost" Nov 5 23:40:30.552046 containerd[1521]: 2025-11-05 23:40:30.477 [INFO][4379] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 23:40:30.552046 containerd[1521]: 2025-11-05 23:40:30.484 [INFO][4379] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 23:40:30.552046 containerd[1521]: 2025-11-05 23:40:30.486 [INFO][4379] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 23:40:30.552046 containerd[1521]: 2025-11-05 23:40:30.490 [INFO][4379] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 23:40:30.552046 containerd[1521]: 2025-11-05 23:40:30.490 [INFO][4379] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47" host="localhost" Nov 5 23:40:30.552046 containerd[1521]: 2025-11-05 23:40:30.493 [INFO][4379] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47 Nov 5 23:40:30.552046 containerd[1521]: 2025-11-05 23:40:30.504 [INFO][4379] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47" host="localhost" Nov 5 23:40:30.552046 
containerd[1521]: 2025-11-05 23:40:30.524 [INFO][4379] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47" host="localhost" Nov 5 23:40:30.552046 containerd[1521]: 2025-11-05 23:40:30.524 [INFO][4379] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47" host="localhost" Nov 5 23:40:30.552046 containerd[1521]: 2025-11-05 23:40:30.524 [INFO][4379] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 23:40:30.552046 containerd[1521]: 2025-11-05 23:40:30.524 [INFO][4379] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47" HandleID="k8s-pod-network.32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47" Workload="localhost-k8s-calico--apiserver--6b5dd884c4--fbmq2-eth0" Nov 5 23:40:30.552571 containerd[1521]: 2025-11-05 23:40:30.527 [INFO][4355] cni-plugin/k8s.go 418: Populated endpoint ContainerID="32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47" Namespace="calico-apiserver" Pod="calico-apiserver-6b5dd884c4-fbmq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5dd884c4--fbmq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b5dd884c4--fbmq2-eth0", GenerateName:"calico-apiserver-6b5dd884c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"212ccd9f-8d71-41e6-a6d1-799624e298a9", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 40, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6b5dd884c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b5dd884c4-fbmq2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif4798d58e5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:40:30.552571 containerd[1521]: 2025-11-05 23:40:30.527 [INFO][4355] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47" Namespace="calico-apiserver" Pod="calico-apiserver-6b5dd884c4-fbmq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5dd884c4--fbmq2-eth0" Nov 5 23:40:30.552571 containerd[1521]: 2025-11-05 23:40:30.527 [INFO][4355] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4798d58e5c ContainerID="32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47" Namespace="calico-apiserver" Pod="calico-apiserver-6b5dd884c4-fbmq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5dd884c4--fbmq2-eth0" Nov 5 23:40:30.552571 containerd[1521]: 2025-11-05 23:40:30.534 [INFO][4355] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47" Namespace="calico-apiserver" Pod="calico-apiserver-6b5dd884c4-fbmq2" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5dd884c4--fbmq2-eth0" Nov 5 23:40:30.552571 containerd[1521]: 2025-11-05 23:40:30.537 [INFO][4355] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47" Namespace="calico-apiserver" Pod="calico-apiserver-6b5dd884c4-fbmq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5dd884c4--fbmq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b5dd884c4--fbmq2-eth0", GenerateName:"calico-apiserver-6b5dd884c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"212ccd9f-8d71-41e6-a6d1-799624e298a9", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 40, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b5dd884c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47", Pod:"calico-apiserver-6b5dd884c4-fbmq2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif4798d58e5c", MAC:"da:e5:73:20:bd:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:40:30.552571 containerd[1521]: 2025-11-05 23:40:30.548 [INFO][4355] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47" Namespace="calico-apiserver" Pod="calico-apiserver-6b5dd884c4-fbmq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5dd884c4--fbmq2-eth0" Nov 5 23:40:30.569355 containerd[1521]: time="2025-11-05T23:40:30.569246332Z" level=info msg="Container cc714673d79d5d59b35fbf865d71f1d29d35cc034ecef8162837ad0ec17982ff: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:40:30.569287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount421586526.mount: Deactivated successfully. Nov 5 23:40:30.579201 containerd[1521]: time="2025-11-05T23:40:30.578609546Z" level=info msg="CreateContainer within sandbox \"78a3d81a06fcce326da2714c0534c47cdf39dc9970698e196bf7c71c85442d17\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cc714673d79d5d59b35fbf865d71f1d29d35cc034ecef8162837ad0ec17982ff\"" Nov 5 23:40:30.580346 containerd[1521]: time="2025-11-05T23:40:30.580295876Z" level=info msg="connecting to shim 32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47" address="unix:///run/containerd/s/5b1a743a27fd00f66282bd4cd139eb5d287b4b741a98afe50ead0bf5f53cbb66" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:40:30.580781 containerd[1521]: time="2025-11-05T23:40:30.580734038Z" level=info msg="StartContainer for \"cc714673d79d5d59b35fbf865d71f1d29d35cc034ecef8162837ad0ec17982ff\"" Nov 5 23:40:30.584559 containerd[1521]: time="2025-11-05T23:40:30.584494140Z" level=info msg="connecting to shim cc714673d79d5d59b35fbf865d71f1d29d35cc034ecef8162837ad0ec17982ff" address="unix:///run/containerd/s/1af498f854902b2bf7b6ff822b928d3a42491f5741a6417e8f053197035728ac" protocol=ttrpc version=3 Nov 5 23:40:30.609581 systemd[1]: Started 
cri-containerd-32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47.scope - libcontainer container 32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47. Nov 5 23:40:30.613683 systemd[1]: Started cri-containerd-cc714673d79d5d59b35fbf865d71f1d29d35cc034ecef8162837ad0ec17982ff.scope - libcontainer container cc714673d79d5d59b35fbf865d71f1d29d35cc034ecef8162837ad0ec17982ff. Nov 5 23:40:30.629079 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 23:40:30.651122 containerd[1521]: time="2025-11-05T23:40:30.650450437Z" level=info msg="StartContainer for \"cc714673d79d5d59b35fbf865d71f1d29d35cc034ecef8162837ad0ec17982ff\" returns successfully" Nov 5 23:40:30.660369 containerd[1521]: time="2025-11-05T23:40:30.660292653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5dd884c4-fbmq2,Uid:212ccd9f-8d71-41e6-a6d1-799624e298a9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"32fdb10a2648a3f0d53bb1d9d603caf873de4e27b2235561a8592c0b0f9e0f47\"" Nov 5 23:40:30.663456 containerd[1521]: time="2025-11-05T23:40:30.663419631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:40:30.884721 containerd[1521]: time="2025-11-05T23:40:30.884653097Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:40:30.886208 containerd[1521]: time="2025-11-05T23:40:30.885695383Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:40:30.886208 containerd[1521]: time="2025-11-05T23:40:30.885762023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:40:30.886313 
kubelet[2683]: E1105 23:40:30.886056 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:40:30.886313 kubelet[2683]: E1105 23:40:30.886104 2683 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:40:30.886313 kubelet[2683]: E1105 23:40:30.886248 2683 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mlzw8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b5dd884c4-fbmq2_calico-apiserver(212ccd9f-8d71-41e6-a6d1-799624e298a9): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:40:30.887517 kubelet[2683]: E1105 23:40:30.887470 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b5dd884c4-fbmq2" podUID="212ccd9f-8d71-41e6-a6d1-799624e298a9" Nov 5 23:40:31.110658 systemd-networkd[1427]: cali94c6fc694f4: Gained IPv6LL Nov 5 23:40:31.238631 systemd-networkd[1427]: cali85a2fb65419: Gained IPv6LL Nov 5 23:40:31.285741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount224610494.mount: Deactivated successfully. 
Nov 5 23:40:31.436070 kubelet[2683]: E1105 23:40:31.435955 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wrn7f" podUID="17800fb7-e812-49ed-8d87-f115a699b129" Nov 5 23:40:31.436443 kubelet[2683]: E1105 23:40:31.436336 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-58d449f5dd-whvqx" podUID="b316df6c-d77d-45e2-a17b-32be21557bd5" Nov 5 23:40:31.438764 kubelet[2683]: E1105 23:40:31.438706 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b5dd884c4-fbmq2" podUID="212ccd9f-8d71-41e6-a6d1-799624e298a9" Nov 5 23:40:31.498152 kubelet[2683]: I1105 23:40:31.498056 2683 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4rxrw" podStartSLOduration=36.498039289 podStartE2EDuration="36.498039289s" podCreationTimestamp="2025-11-05 23:39:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 23:40:31.497506286 +0000 UTC m=+42.325398430" watchObservedRunningTime="2025-11-05 23:40:31.498039289 +0000 UTC m=+42.325931393" Nov 5 23:40:31.686677 systemd-networkd[1427]: calif4798d58e5c: Gained IPv6LL Nov 5 23:40:32.006564 systemd-networkd[1427]: cali56664072d32: Gained IPv6LL Nov 5 23:40:32.437993 kubelet[2683]: E1105 23:40:32.437902 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b5dd884c4-fbmq2" podUID="212ccd9f-8d71-41e6-a6d1-799624e298a9" Nov 5 23:40:34.272436 containerd[1521]: time="2025-11-05T23:40:34.272360696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kzcdk,Uid:99b6a710-372d-4a19-969a-3a96da2ba20c,Namespace:calico-system,Attempt:0,}" Nov 5 23:40:34.272436 containerd[1521]: time="2025-11-05T23:40:34.272417577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5dd884c4-dpl4b,Uid:25336bb2-bdfb-4d75-81ae-8315253dce4b,Namespace:calico-apiserver,Attempt:0,}" Nov 5 23:40:34.272968 containerd[1521]: time="2025-11-05T23:40:34.272420217Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-ddqx2,Uid:fd51956a-08f9-406d-bffa-5b4a2fd29274,Namespace:kube-system,Attempt:0,}" Nov 5 23:40:34.415187 systemd-networkd[1427]: cali6f2c6d66434: Link UP Nov 5 23:40:34.415323 systemd-networkd[1427]: cali6f2c6d66434: Gained carrier Nov 5 23:40:34.435991 containerd[1521]: 2025-11-05 23:40:34.341 [INFO][4565] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--ddqx2-eth0 coredns-674b8bbfcf- kube-system fd51956a-08f9-406d-bffa-5b4a2fd29274 818 0 2025-11-05 23:39:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-ddqx2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6f2c6d66434 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddqx2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ddqx2-" Nov 5 23:40:34.435991 containerd[1521]: 2025-11-05 23:40:34.341 [INFO][4565] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddqx2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ddqx2-eth0" Nov 5 23:40:34.435991 containerd[1521]: 2025-11-05 23:40:34.372 [INFO][4591] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df" HandleID="k8s-pod-network.52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df" Workload="localhost-k8s-coredns--674b8bbfcf--ddqx2-eth0" Nov 5 23:40:34.435991 containerd[1521]: 2025-11-05 23:40:34.372 [INFO][4591] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df" HandleID="k8s-pod-network.52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df" Workload="localhost-k8s-coredns--674b8bbfcf--ddqx2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000506ac0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-ddqx2", "timestamp":"2025-11-05 23:40:34.372251169 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:40:34.435991 containerd[1521]: 2025-11-05 23:40:34.372 [INFO][4591] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:40:34.435991 containerd[1521]: 2025-11-05 23:40:34.372 [INFO][4591] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 23:40:34.435991 containerd[1521]: 2025-11-05 23:40:34.372 [INFO][4591] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 23:40:34.435991 containerd[1521]: 2025-11-05 23:40:34.383 [INFO][4591] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df" host="localhost" Nov 5 23:40:34.435991 containerd[1521]: 2025-11-05 23:40:34.390 [INFO][4591] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 23:40:34.435991 containerd[1521]: 2025-11-05 23:40:34.394 [INFO][4591] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 23:40:34.435991 containerd[1521]: 2025-11-05 23:40:34.396 [INFO][4591] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 23:40:34.435991 containerd[1521]: 2025-11-05 23:40:34.399 [INFO][4591] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" 
Nov 5 23:40:34.435991 containerd[1521]: 2025-11-05 23:40:34.399 [INFO][4591] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df" host="localhost" Nov 5 23:40:34.435991 containerd[1521]: 2025-11-05 23:40:34.400 [INFO][4591] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df Nov 5 23:40:34.435991 containerd[1521]: 2025-11-05 23:40:34.404 [INFO][4591] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df" host="localhost" Nov 5 23:40:34.435991 containerd[1521]: 2025-11-05 23:40:34.410 [INFO][4591] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df" host="localhost" Nov 5 23:40:34.435991 containerd[1521]: 2025-11-05 23:40:34.410 [INFO][4591] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df" host="localhost" Nov 5 23:40:34.435991 containerd[1521]: 2025-11-05 23:40:34.410 [INFO][4591] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 23:40:34.435991 containerd[1521]: 2025-11-05 23:40:34.410 [INFO][4591] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df" HandleID="k8s-pod-network.52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df" Workload="localhost-k8s-coredns--674b8bbfcf--ddqx2-eth0" Nov 5 23:40:34.436796 containerd[1521]: 2025-11-05 23:40:34.413 [INFO][4565] cni-plugin/k8s.go 418: Populated endpoint ContainerID="52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddqx2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ddqx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ddqx2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fd51956a-08f9-406d-bffa-5b4a2fd29274", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 39, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-ddqx2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f2c6d66434", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:40:34.436796 containerd[1521]: 2025-11-05 23:40:34.413 [INFO][4565] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddqx2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ddqx2-eth0" Nov 5 23:40:34.436796 containerd[1521]: 2025-11-05 23:40:34.413 [INFO][4565] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f2c6d66434 ContainerID="52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddqx2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ddqx2-eth0" Nov 5 23:40:34.436796 containerd[1521]: 2025-11-05 23:40:34.416 [INFO][4565] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddqx2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ddqx2-eth0" Nov 5 23:40:34.436796 containerd[1521]: 2025-11-05 23:40:34.421 [INFO][4565] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddqx2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ddqx2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ddqx2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fd51956a-08f9-406d-bffa-5b4a2fd29274", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 39, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df", Pod:"coredns-674b8bbfcf-ddqx2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f2c6d66434", MAC:"6e:fb:a6:c5:14:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:40:34.436796 containerd[1521]: 2025-11-05 23:40:34.432 [INFO][4565] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddqx2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ddqx2-eth0" Nov 5 23:40:34.467286 containerd[1521]: time="2025-11-05T23:40:34.466766455Z" level=info msg="connecting to shim 52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df" address="unix:///run/containerd/s/e6c83d80b12aad2d139c6113e0d014bfd463c6d9c4fbbaf23473f4de6f3951ad" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:40:34.494614 systemd[1]: Started cri-containerd-52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df.scope - libcontainer container 52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df. Nov 5 23:40:34.507132 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 23:40:34.535546 containerd[1521]: time="2025-11-05T23:40:34.535312967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ddqx2,Uid:fd51956a-08f9-406d-bffa-5b4a2fd29274,Namespace:kube-system,Attempt:0,} returns sandbox id \"52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df\"" Nov 5 23:40:34.543634 containerd[1521]: time="2025-11-05T23:40:34.543597889Z" level=info msg="CreateContainer within sandbox \"52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 23:40:34.552992 systemd-networkd[1427]: cali25d1a9a381e: Link UP Nov 5 23:40:34.553414 systemd-networkd[1427]: cali25d1a9a381e: Gained carrier Nov 5 23:40:34.565456 containerd[1521]: time="2025-11-05T23:40:34.564176435Z" level=info msg="Container 7c2cc5f52aaef5d4b5f4f87df9aa76e8bcdd9a7d5940f07da1645b3e99a58e20: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:40:34.571737 containerd[1521]: time="2025-11-05T23:40:34.571442392Z" level=info msg="CreateContainer within sandbox 
\"52002090f9f19308cc716d153a41c00f1209fdc73d42157d0646fef824dbf2df\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7c2cc5f52aaef5d4b5f4f87df9aa76e8bcdd9a7d5940f07da1645b3e99a58e20\"" Nov 5 23:40:34.574379 containerd[1521]: time="2025-11-05T23:40:34.574332087Z" level=info msg="StartContainer for \"7c2cc5f52aaef5d4b5f4f87df9aa76e8bcdd9a7d5940f07da1645b3e99a58e20\"" Nov 5 23:40:34.577049 containerd[1521]: time="2025-11-05T23:40:34.576695339Z" level=info msg="connecting to shim 7c2cc5f52aaef5d4b5f4f87df9aa76e8bcdd9a7d5940f07da1645b3e99a58e20" address="unix:///run/containerd/s/e6c83d80b12aad2d139c6113e0d014bfd463c6d9c4fbbaf23473f4de6f3951ad" protocol=ttrpc version=3 Nov 5 23:40:34.577275 containerd[1521]: 2025-11-05 23:40:34.331 [INFO][4552] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--kzcdk-eth0 csi-node-driver- calico-system 99b6a710-372d-4a19-969a-3a96da2ba20c 708 0 2025-11-05 23:40:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-kzcdk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali25d1a9a381e [] [] }} ContainerID="b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7" Namespace="calico-system" Pod="csi-node-driver-kzcdk" WorkloadEndpoint="localhost-k8s-csi--node--driver--kzcdk-" Nov 5 23:40:34.577275 containerd[1521]: 2025-11-05 23:40:34.331 [INFO][4552] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7" Namespace="calico-system" Pod="csi-node-driver-kzcdk" WorkloadEndpoint="localhost-k8s-csi--node--driver--kzcdk-eth0" Nov 5 
23:40:34.577275 containerd[1521]: 2025-11-05 23:40:34.374 [INFO][4584] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7" HandleID="k8s-pod-network.b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7" Workload="localhost-k8s-csi--node--driver--kzcdk-eth0" Nov 5 23:40:34.577275 containerd[1521]: 2025-11-05 23:40:34.374 [INFO][4584] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7" HandleID="k8s-pod-network.b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7" Workload="localhost-k8s-csi--node--driver--kzcdk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-kzcdk", "timestamp":"2025-11-05 23:40:34.374576301 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:40:34.577275 containerd[1521]: 2025-11-05 23:40:34.374 [INFO][4584] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:40:34.577275 containerd[1521]: 2025-11-05 23:40:34.410 [INFO][4584] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 23:40:34.577275 containerd[1521]: 2025-11-05 23:40:34.410 [INFO][4584] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Nov 5 23:40:34.577275 containerd[1521]: 2025-11-05 23:40:34.507 [INFO][4584] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7" host="localhost"
Nov 5 23:40:34.577275 containerd[1521]: 2025-11-05 23:40:34.514 [INFO][4584] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Nov 5 23:40:34.577275 containerd[1521]: 2025-11-05 23:40:34.522 [INFO][4584] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Nov 5 23:40:34.577275 containerd[1521]: 2025-11-05 23:40:34.525 [INFO][4584] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Nov 5 23:40:34.577275 containerd[1521]: 2025-11-05 23:40:34.528 [INFO][4584] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Nov 5 23:40:34.577275 containerd[1521]: 2025-11-05 23:40:34.528 [INFO][4584] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7" host="localhost"
Nov 5 23:40:34.577275 containerd[1521]: 2025-11-05 23:40:34.530 [INFO][4584] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7
Nov 5 23:40:34.577275 containerd[1521]: 2025-11-05 23:40:34.537 [INFO][4584] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7" host="localhost"
Nov 5 23:40:34.577275 containerd[1521]: 2025-11-05 23:40:34.546 [INFO][4584] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7" host="localhost"
Nov 5 23:40:34.577275 containerd[1521]: 2025-11-05 23:40:34.546 [INFO][4584] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7" host="localhost"
Nov 5 23:40:34.577275 containerd[1521]: 2025-11-05 23:40:34.546 [INFO][4584] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 5 23:40:34.577275 containerd[1521]: 2025-11-05 23:40:34.546 [INFO][4584] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7" HandleID="k8s-pod-network.b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7" Workload="localhost-k8s-csi--node--driver--kzcdk-eth0"
Nov 5 23:40:34.577850 containerd[1521]: 2025-11-05 23:40:34.549 [INFO][4552] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7" Namespace="calico-system" Pod="csi-node-driver-kzcdk" WorkloadEndpoint="localhost-k8s-csi--node--driver--kzcdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kzcdk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"99b6a710-372d-4a19-969a-3a96da2ba20c", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 40, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-kzcdk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali25d1a9a381e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 5 23:40:34.577850 containerd[1521]: 2025-11-05 23:40:34.549 [INFO][4552] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7" Namespace="calico-system" Pod="csi-node-driver-kzcdk" WorkloadEndpoint="localhost-k8s-csi--node--driver--kzcdk-eth0"
Nov 5 23:40:34.577850 containerd[1521]: 2025-11-05 23:40:34.549 [INFO][4552] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25d1a9a381e ContainerID="b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7" Namespace="calico-system" Pod="csi-node-driver-kzcdk" WorkloadEndpoint="localhost-k8s-csi--node--driver--kzcdk-eth0"
Nov 5 23:40:34.577850 containerd[1521]: 2025-11-05 23:40:34.553 [INFO][4552] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7" Namespace="calico-system" Pod="csi-node-driver-kzcdk" WorkloadEndpoint="localhost-k8s-csi--node--driver--kzcdk-eth0"
Nov 5 23:40:34.577850 containerd[1521]: 2025-11-05 23:40:34.554 [INFO][4552] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7" Namespace="calico-system" Pod="csi-node-driver-kzcdk" WorkloadEndpoint="localhost-k8s-csi--node--driver--kzcdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kzcdk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"99b6a710-372d-4a19-969a-3a96da2ba20c", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 40, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7", Pod:"csi-node-driver-kzcdk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali25d1a9a381e", MAC:"42:43:fe:4a:61:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 5 23:40:34.577850 containerd[1521]: 2025-11-05 23:40:34.571 [INFO][4552] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7" Namespace="calico-system" Pod="csi-node-driver-kzcdk" WorkloadEndpoint="localhost-k8s-csi--node--driver--kzcdk-eth0"
Nov 5 23:40:34.597608 systemd[1]: Started cri-containerd-7c2cc5f52aaef5d4b5f4f87df9aa76e8bcdd9a7d5940f07da1645b3e99a58e20.scope - libcontainer container 7c2cc5f52aaef5d4b5f4f87df9aa76e8bcdd9a7d5940f07da1645b3e99a58e20.
Nov 5 23:40:34.603093 containerd[1521]: time="2025-11-05T23:40:34.603020794Z" level=info msg="connecting to shim b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7" address="unix:///run/containerd/s/e41fdbb9cba6930cc8debb00b095b51ca4280ee0b781e5f69eba0939c5ec5456" namespace=k8s.io protocol=ttrpc version=3
Nov 5 23:40:34.634775 systemd[1]: Started cri-containerd-b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7.scope - libcontainer container b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7.
Nov 5 23:40:34.641345 containerd[1521]: time="2025-11-05T23:40:34.641295031Z" level=info msg="StartContainer for \"7c2cc5f52aaef5d4b5f4f87df9aa76e8bcdd9a7d5940f07da1645b3e99a58e20\" returns successfully"
Nov 5 23:40:34.670283 systemd-networkd[1427]: cali9db2160e898: Link UP
Nov 5 23:40:34.671381 systemd-networkd[1427]: cali9db2160e898: Gained carrier
Nov 5 23:40:34.674450 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 5 23:40:34.687007 containerd[1521]: 2025-11-05 23:40:34.339 [INFO][4541] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b5dd884c4--dpl4b-eth0 calico-apiserver-6b5dd884c4- calico-apiserver 25336bb2-bdfb-4d75-81ae-8315253dce4b 810 0 2025-11-05 23:40:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b5dd884c4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b5dd884c4-dpl4b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9db2160e898 [] [] }} ContainerID="21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65" Namespace="calico-apiserver" Pod="calico-apiserver-6b5dd884c4-dpl4b" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5dd884c4--dpl4b-"
Nov 5 23:40:34.687007 containerd[1521]: 2025-11-05 23:40:34.339 [INFO][4541] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65" Namespace="calico-apiserver" Pod="calico-apiserver-6b5dd884c4-dpl4b" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5dd884c4--dpl4b-eth0"
Nov 5 23:40:34.687007 containerd[1521]: 2025-11-05 23:40:34.385 [INFO][4597] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65" HandleID="k8s-pod-network.21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65" Workload="localhost-k8s-calico--apiserver--6b5dd884c4--dpl4b-eth0"
Nov 5 23:40:34.687007 containerd[1521]: 2025-11-05 23:40:34.385 [INFO][4597] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65" HandleID="k8s-pod-network.21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65" Workload="localhost-k8s-calico--apiserver--6b5dd884c4--dpl4b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011e680), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b5dd884c4-dpl4b", "timestamp":"2025-11-05 23:40:34.385044435 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 5 23:40:34.687007 containerd[1521]: 2025-11-05 23:40:34.385 [INFO][4597] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 5 23:40:34.687007 containerd[1521]: 2025-11-05 23:40:34.546 [INFO][4597] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 5 23:40:34.687007 containerd[1521]: 2025-11-05 23:40:34.546 [INFO][4597] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Nov 5 23:40:34.687007 containerd[1521]: 2025-11-05 23:40:34.585 [INFO][4597] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65" host="localhost"
Nov 5 23:40:34.687007 containerd[1521]: 2025-11-05 23:40:34.614 [INFO][4597] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Nov 5 23:40:34.687007 containerd[1521]: 2025-11-05 23:40:34.625 [INFO][4597] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Nov 5 23:40:34.687007 containerd[1521]: 2025-11-05 23:40:34.630 [INFO][4597] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Nov 5 23:40:34.687007 containerd[1521]: 2025-11-05 23:40:34.633 [INFO][4597] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Nov 5 23:40:34.687007 containerd[1521]: 2025-11-05 23:40:34.633 [INFO][4597] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65" host="localhost"
Nov 5 23:40:34.687007 containerd[1521]: 2025-11-05 23:40:34.636 [INFO][4597] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65
Nov 5 23:40:34.687007 containerd[1521]: 2025-11-05 23:40:34.643 [INFO][4597] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65" host="localhost"
Nov 5 23:40:34.687007 containerd[1521]: 2025-11-05 23:40:34.653 [INFO][4597] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65" host="localhost"
Nov 5 23:40:34.687007 containerd[1521]: 2025-11-05 23:40:34.654 [INFO][4597] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65" host="localhost"
Nov 5 23:40:34.687007 containerd[1521]: 2025-11-05 23:40:34.654 [INFO][4597] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 5 23:40:34.687007 containerd[1521]: 2025-11-05 23:40:34.654 [INFO][4597] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65" HandleID="k8s-pod-network.21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65" Workload="localhost-k8s-calico--apiserver--6b5dd884c4--dpl4b-eth0"
Nov 5 23:40:34.687692 containerd[1521]: 2025-11-05 23:40:34.658 [INFO][4541] cni-plugin/k8s.go 418: Populated endpoint ContainerID="21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65" Namespace="calico-apiserver" Pod="calico-apiserver-6b5dd884c4-dpl4b" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5dd884c4--dpl4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b5dd884c4--dpl4b-eth0", GenerateName:"calico-apiserver-6b5dd884c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"25336bb2-bdfb-4d75-81ae-8315253dce4b", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 40, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b5dd884c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b5dd884c4-dpl4b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9db2160e898", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 5 23:40:34.687692 containerd[1521]: 2025-11-05 23:40:34.658 [INFO][4541] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65" Namespace="calico-apiserver" Pod="calico-apiserver-6b5dd884c4-dpl4b" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5dd884c4--dpl4b-eth0"
Nov 5 23:40:34.687692 containerd[1521]: 2025-11-05 23:40:34.658 [INFO][4541] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9db2160e898 ContainerID="21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65" Namespace="calico-apiserver" Pod="calico-apiserver-6b5dd884c4-dpl4b" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5dd884c4--dpl4b-eth0"
Nov 5 23:40:34.687692 containerd[1521]: 2025-11-05 23:40:34.670 [INFO][4541] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65" Namespace="calico-apiserver" Pod="calico-apiserver-6b5dd884c4-dpl4b" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5dd884c4--dpl4b-eth0"
Nov 5 23:40:34.687692 containerd[1521]: 2025-11-05 23:40:34.671 [INFO][4541] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65" Namespace="calico-apiserver" Pod="calico-apiserver-6b5dd884c4-dpl4b" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5dd884c4--dpl4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b5dd884c4--dpl4b-eth0", GenerateName:"calico-apiserver-6b5dd884c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"25336bb2-bdfb-4d75-81ae-8315253dce4b", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 40, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b5dd884c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65", Pod:"calico-apiserver-6b5dd884c4-dpl4b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9db2160e898", MAC:"06:ba:b6:6f:8d:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 5 23:40:34.687692 containerd[1521]: 2025-11-05 23:40:34.682 [INFO][4541] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65" Namespace="calico-apiserver" Pod="calico-apiserver-6b5dd884c4-dpl4b" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5dd884c4--dpl4b-eth0"
Nov 5 23:40:34.703098 containerd[1521]: time="2025-11-05T23:40:34.702111223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kzcdk,Uid:99b6a710-372d-4a19-969a-3a96da2ba20c,Namespace:calico-system,Attempt:0,} returns sandbox id \"b13258e3728bc2447e553ff001c2e2f649ff69779732cc9084af9f331792c8c7\""
Nov 5 23:40:34.707522 containerd[1521]: time="2025-11-05T23:40:34.707479051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 5 23:40:34.719525 containerd[1521]: time="2025-11-05T23:40:34.719468592Z" level=info msg="connecting to shim 21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65" address="unix:///run/containerd/s/37e3f1407291e8ef7c6bf4d1bbb936a06b8370986607366cdd0daf45c7902b37" namespace=k8s.io protocol=ttrpc version=3
Nov 5 23:40:34.741596 systemd[1]: Started cri-containerd-21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65.scope - libcontainer container 21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65.
Nov 5 23:40:34.753145 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 5 23:40:34.775133 containerd[1521]: time="2025-11-05T23:40:34.775094918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5dd884c4-dpl4b,Uid:25336bb2-bdfb-4d75-81ae-8315253dce4b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"21ab8ea1719a2c3b140b40d83cc7bc713e48c9e26a2c4a9c1cb19de2f12a8e65\""
Nov 5 23:40:34.860006 systemd[1]: Started sshd@8-10.0.0.43:22-10.0.0.1:55450.service - OpenSSH per-connection server daemon (10.0.0.1:55450).
Nov 5 23:40:34.939953 sshd[4818]: Accepted publickey for core from 10.0.0.1 port 55450 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI
Nov 5 23:40:34.942180 sshd-session[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 23:40:34.947305 systemd-logind[1503]: New session 9 of user core.
Nov 5 23:40:34.961144 containerd[1521]: time="2025-11-05T23:40:34.961095033Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 23:40:34.962112 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 5 23:40:34.963131 containerd[1521]: time="2025-11-05T23:40:34.963084244Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 5 23:40:34.963202 containerd[1521]: time="2025-11-05T23:40:34.963176844Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 5 23:40:34.963354 kubelet[2683]: E1105 23:40:34.963304 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 5 23:40:34.963816 kubelet[2683]: E1105 23:40:34.963378 2683 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 5 23:40:34.963846 containerd[1521]: time="2025-11-05T23:40:34.963715407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 5 23:40:34.963999 kubelet[2683]: E1105 23:40:34.963892 2683 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2mv42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kzcdk_calico-system(99b6a710-372d-4a19-969a-3a96da2ba20c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 5 23:40:35.177445 containerd[1521]: time="2025-11-05T23:40:35.177193719Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 23:40:35.190473 containerd[1521]: time="2025-11-05T23:40:35.190342025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 5 23:40:35.190917 containerd[1521]: time="2025-11-05T23:40:35.190414305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 5 23:40:35.191138 kubelet[2683]: E1105 23:40:35.191105 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 23:40:35.191326 kubelet[2683]: E1105 23:40:35.191269 2683 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 23:40:35.191902 kubelet[2683]: E1105 23:40:35.191572 2683 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5zllw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b5dd884c4-dpl4b_calico-apiserver(25336bb2-bdfb-4d75-81ae-8315253dce4b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 5 23:40:35.192270 containerd[1521]: time="2025-11-05T23:40:35.191671192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 5 23:40:35.193783 kubelet[2683]: E1105 23:40:35.193733 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b5dd884c4-dpl4b" podUID="25336bb2-bdfb-4d75-81ae-8315253dce4b"
Nov 5 23:40:35.243234 sshd[4821]: Connection closed by 10.0.0.1 port 55450
Nov 5 23:40:35.244230 sshd-session[4818]: pam_unix(sshd:session): session closed for user core
Nov 5 23:40:35.249792 systemd[1]: sshd@8-10.0.0.43:22-10.0.0.1:55450.service: Deactivated successfully.
Nov 5 23:40:35.252043 systemd[1]: session-9.scope: Deactivated successfully.
Nov 5 23:40:35.253512 systemd-logind[1503]: Session 9 logged out. Waiting for processes to exit.
Nov 5 23:40:35.255604 systemd-logind[1503]: Removed session 9.
Nov 5 23:40:35.415968 containerd[1521]: time="2025-11-05T23:40:35.415907713Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 23:40:35.416890 containerd[1521]: time="2025-11-05T23:40:35.416854118Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 5 23:40:35.416977 containerd[1521]: time="2025-11-05T23:40:35.416941998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 5 23:40:35.417185 kubelet[2683]: E1105 23:40:35.417132 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 5 23:40:35.417336 kubelet[2683]: E1105 23:40:35.417278 2683 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 5 23:40:35.417595 kubelet[2683]: E1105 23:40:35.417541 2683 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2mv42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kzcdk_calico-system(99b6a710-372d-4a19-969a-3a96da2ba20c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 5 23:40:35.418799 kubelet[2683]: E1105 23:40:35.418759 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kzcdk" podUID="99b6a710-372d-4a19-969a-3a96da2ba20c"
Nov 5 23:40:35.452821 kubelet[2683]: E1105 23:40:35.452706 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b5dd884c4-dpl4b" podUID="25336bb2-bdfb-4d75-81ae-8315253dce4b"
Nov 5 23:40:35.460432 kubelet[2683]: E1105 23:40:35.460276 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack
image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kzcdk" podUID="99b6a710-372d-4a19-969a-3a96da2ba20c" Nov 5 23:40:35.463054 systemd-networkd[1427]: cali6f2c6d66434: Gained IPv6LL Nov 5 23:40:35.490930 kubelet[2683]: I1105 23:40:35.490812 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ddqx2" podStartSLOduration=40.490793367 podStartE2EDuration="40.490793367s" podCreationTimestamp="2025-11-05 23:39:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 23:40:35.489577681 +0000 UTC m=+46.317469825" watchObservedRunningTime="2025-11-05 23:40:35.490793367 +0000 UTC m=+46.318685511" Nov 5 23:40:36.230790 systemd-networkd[1427]: cali25d1a9a381e: Gained IPv6LL Nov 5 23:40:36.359959 systemd-networkd[1427]: cali9db2160e898: Gained IPv6LL Nov 5 23:40:36.461330 kubelet[2683]: E1105 23:40:36.461061 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b5dd884c4-dpl4b" podUID="25336bb2-bdfb-4d75-81ae-8315253dce4b" Nov 5 23:40:36.464347 kubelet[2683]: E1105 23:40:36.463627 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kzcdk" podUID="99b6a710-372d-4a19-969a-3a96da2ba20c" Nov 5 23:40:38.273938 containerd[1521]: time="2025-11-05T23:40:38.273794472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 23:40:38.586781 containerd[1521]: time="2025-11-05T23:40:38.586634877Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:40:38.588321 containerd[1521]: time="2025-11-05T23:40:38.588260085Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 23:40:38.588444 
containerd[1521]: time="2025-11-05T23:40:38.588328525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 23:40:38.588551 kubelet[2683]: E1105 23:40:38.588506 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 23:40:38.588945 kubelet[2683]: E1105 23:40:38.588562 2683 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 23:40:38.588945 kubelet[2683]: E1105 23:40:38.588687 2683 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:aff672a146af41bb8a3ea782c09d4a4b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7hlvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-685d97d67-g62jm_calico-system(013bb512-5088-47f3-94c1-e2243634b474): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 23:40:38.590935 containerd[1521]: time="2025-11-05T23:40:38.590901057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 23:40:38.797641 
containerd[1521]: time="2025-11-05T23:40:38.797585292Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:40:38.798688 containerd[1521]: time="2025-11-05T23:40:38.798637217Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 23:40:38.798688 containerd[1521]: time="2025-11-05T23:40:38.798668257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 23:40:38.798906 kubelet[2683]: E1105 23:40:38.798861 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 23:40:38.798951 kubelet[2683]: E1105 23:40:38.798916 2683 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 23:40:38.799449 kubelet[2683]: E1105 23:40:38.799381 2683 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hlvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-685d97d67-g62jm_calico-system(013bb512-5088-47f3-94c1-e2243634b474): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 23:40:38.801148 kubelet[2683]: E1105 23:40:38.801098 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-685d97d67-g62jm" podUID="013bb512-5088-47f3-94c1-e2243634b474" Nov 5 23:40:40.264060 systemd[1]: Started sshd@9-10.0.0.43:22-10.0.0.1:44642.service - OpenSSH per-connection server daemon (10.0.0.1:44642). Nov 5 23:40:40.323346 sshd[4857]: Accepted publickey for core from 10.0.0.1 port 44642 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:40:40.325934 sshd-session[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:40:40.331024 systemd-logind[1503]: New session 10 of user core. Nov 5 23:40:40.344608 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 23:40:40.547489 sshd[4860]: Connection closed by 10.0.0.1 port 44642 Nov 5 23:40:40.547581 sshd-session[4857]: pam_unix(sshd:session): session closed for user core Nov 5 23:40:40.564417 systemd[1]: sshd@9-10.0.0.43:22-10.0.0.1:44642.service: Deactivated successfully. Nov 5 23:40:40.566230 systemd[1]: session-10.scope: Deactivated successfully. 
Nov 5 23:40:40.567063 systemd-logind[1503]: Session 10 logged out. Waiting for processes to exit. Nov 5 23:40:40.569501 systemd[1]: Started sshd@10-10.0.0.43:22-10.0.0.1:44658.service - OpenSSH per-connection server daemon (10.0.0.1:44658). Nov 5 23:40:40.571161 systemd-logind[1503]: Removed session 10. Nov 5 23:40:40.642237 sshd[4875]: Accepted publickey for core from 10.0.0.1 port 44658 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:40:40.644263 sshd-session[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:40:40.652313 systemd-logind[1503]: New session 11 of user core. Nov 5 23:40:40.661052 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 23:40:40.844726 sshd[4879]: Connection closed by 10.0.0.1 port 44658 Nov 5 23:40:40.845183 sshd-session[4875]: pam_unix(sshd:session): session closed for user core Nov 5 23:40:40.854695 systemd[1]: sshd@10-10.0.0.43:22-10.0.0.1:44658.service: Deactivated successfully. Nov 5 23:40:40.856820 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 23:40:40.860275 systemd-logind[1503]: Session 11 logged out. Waiting for processes to exit. Nov 5 23:40:40.863209 systemd[1]: Started sshd@11-10.0.0.43:22-10.0.0.1:44664.service - OpenSSH per-connection server daemon (10.0.0.1:44664). Nov 5 23:40:40.866284 systemd-logind[1503]: Removed session 11. Nov 5 23:40:40.934229 sshd[4890]: Accepted publickey for core from 10.0.0.1 port 44664 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:40:40.936046 sshd-session[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:40:40.941472 systemd-logind[1503]: New session 12 of user core. Nov 5 23:40:40.957675 systemd[1]: Started session-12.scope - Session 12 of User core. 
Nov 5 23:40:41.158589 sshd[4893]: Connection closed by 10.0.0.1 port 44664 Nov 5 23:40:41.160174 sshd-session[4890]: pam_unix(sshd:session): session closed for user core Nov 5 23:40:41.166443 systemd[1]: sshd@11-10.0.0.43:22-10.0.0.1:44664.service: Deactivated successfully. Nov 5 23:40:41.169055 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 23:40:41.170642 systemd-logind[1503]: Session 12 logged out. Waiting for processes to exit. Nov 5 23:40:41.173761 systemd-logind[1503]: Removed session 12. Nov 5 23:40:43.273935 containerd[1521]: time="2025-11-05T23:40:43.273857109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 23:40:43.965487 containerd[1521]: time="2025-11-05T23:40:43.965439398Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:40:43.966674 containerd[1521]: time="2025-11-05T23:40:43.966629403Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 23:40:43.966739 containerd[1521]: time="2025-11-05T23:40:43.966699363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 23:40:43.966886 kubelet[2683]: E1105 23:40:43.966844 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 23:40:43.967205 kubelet[2683]: E1105 23:40:43.966898 2683 kuberuntime_image.go:42] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 23:40:43.967205 kubelet[2683]: E1105 23:40:43.967058 2683 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k9bpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-58d449f5dd-whvqx_calico-system(b316df6c-d77d-45e2-a17b-32be21557bd5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 23:40:43.968721 kubelet[2683]: E1105 23:40:43.968680 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-58d449f5dd-whvqx" podUID="b316df6c-d77d-45e2-a17b-32be21557bd5" Nov 5 23:40:45.273281 containerd[1521]: time="2025-11-05T23:40:45.273081954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:40:45.485408 containerd[1521]: time="2025-11-05T23:40:45.485347893Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:40:45.486338 containerd[1521]: time="2025-11-05T23:40:45.486308937Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:40:45.486524 containerd[1521]: time="2025-11-05T23:40:45.486367017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:40:45.486593 kubelet[2683]: E1105 23:40:45.486516 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:40:45.486593 kubelet[2683]: E1105 23:40:45.486555 2683 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:40:45.487009 kubelet[2683]: E1105 23:40:45.486698 2683 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mlzw8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b5dd884c4-fbmq2_calico-apiserver(212ccd9f-8d71-41e6-a6d1-799624e298a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:40:45.487913 kubelet[2683]: E1105 23:40:45.487873 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b5dd884c4-fbmq2" podUID="212ccd9f-8d71-41e6-a6d1-799624e298a9" Nov 5 23:40:46.178573 systemd[1]: Started sshd@12-10.0.0.43:22-10.0.0.1:44730.service - OpenSSH per-connection server daemon (10.0.0.1:44730). 
Nov 5 23:40:46.240924 sshd[4919]: Accepted publickey for core from 10.0.0.1 port 44730 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:40:46.242275 sshd-session[4919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:40:46.246226 systemd-logind[1503]: New session 13 of user core. Nov 5 23:40:46.256610 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 5 23:40:46.274105 containerd[1521]: time="2025-11-05T23:40:46.274047314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 23:40:46.380993 sshd[4922]: Connection closed by 10.0.0.1 port 44730 Nov 5 23:40:46.381326 sshd-session[4919]: pam_unix(sshd:session): session closed for user core Nov 5 23:40:46.385362 systemd[1]: sshd@12-10.0.0.43:22-10.0.0.1:44730.service: Deactivated successfully. Nov 5 23:40:46.388804 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 23:40:46.389860 systemd-logind[1503]: Session 13 logged out. Waiting for processes to exit. Nov 5 23:40:46.391968 systemd-logind[1503]: Removed session 13. 
Nov 5 23:40:46.482551 containerd[1521]: time="2025-11-05T23:40:46.482220378Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:40:46.483449 containerd[1521]: time="2025-11-05T23:40:46.483362142Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 23:40:46.483579 containerd[1521]: time="2025-11-05T23:40:46.483459022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 23:40:46.483626 kubelet[2683]: E1105 23:40:46.483588 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 23:40:46.486238 kubelet[2683]: E1105 23:40:46.483630 2683 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 23:40:46.486439 kubelet[2683]: E1105 23:40:46.486368 2683 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q466l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wrn7f_calico-system(17800fb7-e812-49ed-8d87-f115a699b129): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 23:40:46.487614 kubelet[2683]: E1105 23:40:46.487575 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wrn7f" podUID="17800fb7-e812-49ed-8d87-f115a699b129" Nov 5 23:40:49.274124 containerd[1521]: time="2025-11-05T23:40:49.274067707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 23:40:49.491606 containerd[1521]: time="2025-11-05T23:40:49.491564068Z" level=info msg="fetch failed after status: 404 Not 
Found" host=ghcr.io Nov 5 23:40:49.492622 containerd[1521]: time="2025-11-05T23:40:49.492581472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 23:40:49.492771 containerd[1521]: time="2025-11-05T23:40:49.492659992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 23:40:49.492834 kubelet[2683]: E1105 23:40:49.492785 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 23:40:49.493118 kubelet[2683]: E1105 23:40:49.492844 2683 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 23:40:49.493118 kubelet[2683]: E1105 23:40:49.492965 2683 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2mv42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kzcdk_calico-system(99b6a710-372d-4a19-969a-3a96da2ba20c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 23:40:49.495260 containerd[1521]: time="2025-11-05T23:40:49.495225841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 23:40:49.693539 containerd[1521]: time="2025-11-05T23:40:49.693211014Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:40:49.694662 containerd[1521]: time="2025-11-05T23:40:49.694616219Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 23:40:49.695015 containerd[1521]: time="2025-11-05T23:40:49.694677579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 23:40:49.695060 kubelet[2683]: E1105 23:40:49.694809 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 23:40:49.695060 kubelet[2683]: E1105 23:40:49.694855 2683 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 23:40:49.695246 kubelet[2683]: E1105 
23:40:49.695186 2683 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2mv42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-kzcdk_calico-system(99b6a710-372d-4a19-969a-3a96da2ba20c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 23:40:49.696440 kubelet[2683]: E1105 23:40:49.696385 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kzcdk" podUID="99b6a710-372d-4a19-969a-3a96da2ba20c" Nov 5 23:40:50.273563 containerd[1521]: time="2025-11-05T23:40:50.273368301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:40:50.472096 containerd[1521]: time="2025-11-05T23:40:50.471953899Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:40:50.473080 containerd[1521]: time="2025-11-05T23:40:50.473031983Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:40:50.473080 
containerd[1521]: time="2025-11-05T23:40:50.473069823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:40:50.473339 kubelet[2683]: E1105 23:40:50.473283 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:40:50.473428 kubelet[2683]: E1105 23:40:50.473340 2683 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:40:50.474036 kubelet[2683]: E1105 23:40:50.473503 2683 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5zllw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b5dd884c4-dpl4b_calico-apiserver(25336bb2-bdfb-4d75-81ae-8315253dce4b): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:40:50.475032 kubelet[2683]: E1105 23:40:50.474986 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b5dd884c4-dpl4b" podUID="25336bb2-bdfb-4d75-81ae-8315253dce4b" Nov 5 23:40:51.400959 systemd[1]: Started sshd@13-10.0.0.43:22-10.0.0.1:38734.service - OpenSSH per-connection server daemon (10.0.0.1:38734). Nov 5 23:40:51.480374 sshd[4937]: Accepted publickey for core from 10.0.0.1 port 38734 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:40:51.482296 sshd-session[4937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:40:51.487496 systemd-logind[1503]: New session 14 of user core. Nov 5 23:40:51.495645 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 5 23:40:51.641913 sshd[4940]: Connection closed by 10.0.0.1 port 38734 Nov 5 23:40:51.642248 sshd-session[4937]: pam_unix(sshd:session): session closed for user core Nov 5 23:40:51.646210 systemd[1]: sshd@13-10.0.0.43:22-10.0.0.1:38734.service: Deactivated successfully. Nov 5 23:40:51.648727 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 23:40:51.650299 systemd-logind[1503]: Session 14 logged out. Waiting for processes to exit. Nov 5 23:40:51.651918 systemd-logind[1503]: Removed session 14. 
Nov 5 23:40:54.273407 kubelet[2683]: E1105 23:40:54.273314 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-685d97d67-g62jm" podUID="013bb512-5088-47f3-94c1-e2243634b474" Nov 5 23:40:55.273421 kubelet[2683]: E1105 23:40:55.273273 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-58d449f5dd-whvqx" podUID="b316df6c-d77d-45e2-a17b-32be21557bd5" Nov 5 23:40:55.536195 containerd[1521]: time="2025-11-05T23:40:55.535999121Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9cc2e4b3eb5dbd5632dd503009f14bcfae080b2528a5afb46eca8ce6aceca83\" 
id:\"50302f4ac842c651ded5dc3335009c226c55297c954eb4375021d59922d18f1c\" pid:4969 exited_at:{seconds:1762386055 nanos:535340079}" Nov 5 23:40:56.659331 systemd[1]: Started sshd@14-10.0.0.43:22-10.0.0.1:38812.service - OpenSSH per-connection server daemon (10.0.0.1:38812). Nov 5 23:40:56.726919 sshd[4986]: Accepted publickey for core from 10.0.0.1 port 38812 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:40:56.728529 sshd-session[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:40:56.733454 systemd-logind[1503]: New session 15 of user core. Nov 5 23:40:56.745808 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 23:40:56.960499 sshd[4989]: Connection closed by 10.0.0.1 port 38812 Nov 5 23:40:56.960302 sshd-session[4986]: pam_unix(sshd:session): session closed for user core Nov 5 23:40:56.964821 systemd-logind[1503]: Session 15 logged out. Waiting for processes to exit. Nov 5 23:40:56.964876 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 23:40:56.965628 systemd[1]: sshd@14-10.0.0.43:22-10.0.0.1:38812.service: Deactivated successfully. Nov 5 23:40:56.968302 systemd-logind[1503]: Removed session 15. 
Nov 5 23:40:58.275700 kubelet[2683]: E1105 23:40:58.275638 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b5dd884c4-fbmq2" podUID="212ccd9f-8d71-41e6-a6d1-799624e298a9" Nov 5 23:40:59.276301 kubelet[2683]: E1105 23:40:59.276257 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wrn7f" podUID="17800fb7-e812-49ed-8d87-f115a699b129" Nov 5 23:41:01.978808 systemd[1]: Started sshd@15-10.0.0.43:22-10.0.0.1:53714.service - OpenSSH per-connection server daemon (10.0.0.1:53714). Nov 5 23:41:02.047915 sshd[5005]: Accepted publickey for core from 10.0.0.1 port 53714 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:41:02.049484 sshd-session[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:41:02.053710 systemd-logind[1503]: New session 16 of user core. Nov 5 23:41:02.064605 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 5 23:41:02.265943 sshd[5008]: Connection closed by 10.0.0.1 port 53714 Nov 5 23:41:02.266515 sshd-session[5005]: pam_unix(sshd:session): session closed for user core Nov 5 23:41:02.274697 kubelet[2683]: E1105 23:41:02.273832 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kzcdk" podUID="99b6a710-372d-4a19-969a-3a96da2ba20c" Nov 5 23:41:02.277422 systemd[1]: sshd@15-10.0.0.43:22-10.0.0.1:53714.service: Deactivated successfully. Nov 5 23:41:02.279271 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 23:41:02.280263 systemd-logind[1503]: Session 16 logged out. Waiting for processes to exit. Nov 5 23:41:02.284909 systemd[1]: Started sshd@16-10.0.0.43:22-10.0.0.1:53726.service - OpenSSH per-connection server daemon (10.0.0.1:53726). Nov 5 23:41:02.286834 systemd-logind[1503]: Removed session 16. 
Nov 5 23:41:02.346995 sshd[5021]: Accepted publickey for core from 10.0.0.1 port 53726 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:41:02.349542 sshd-session[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:41:02.355164 systemd-logind[1503]: New session 17 of user core. Nov 5 23:41:02.363714 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 5 23:41:02.599854 sshd[5024]: Connection closed by 10.0.0.1 port 53726 Nov 5 23:41:02.600786 sshd-session[5021]: pam_unix(sshd:session): session closed for user core Nov 5 23:41:02.610638 systemd[1]: sshd@16-10.0.0.43:22-10.0.0.1:53726.service: Deactivated successfully. Nov 5 23:41:02.612761 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 23:41:02.614658 systemd-logind[1503]: Session 17 logged out. Waiting for processes to exit. Nov 5 23:41:02.620718 systemd[1]: Started sshd@17-10.0.0.43:22-10.0.0.1:53734.service - OpenSSH per-connection server daemon (10.0.0.1:53734). Nov 5 23:41:02.623477 systemd-logind[1503]: Removed session 17. Nov 5 23:41:02.689185 sshd[5036]: Accepted publickey for core from 10.0.0.1 port 53734 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:41:02.691127 sshd-session[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:41:02.698689 systemd-logind[1503]: New session 18 of user core. Nov 5 23:41:02.711795 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 5 23:41:03.437539 sshd[5039]: Connection closed by 10.0.0.1 port 53734 Nov 5 23:41:03.437966 sshd-session[5036]: pam_unix(sshd:session): session closed for user core Nov 5 23:41:03.447101 systemd[1]: sshd@17-10.0.0.43:22-10.0.0.1:53734.service: Deactivated successfully. Nov 5 23:41:03.448839 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 23:41:03.450841 systemd-logind[1503]: Session 18 logged out. Waiting for processes to exit. 
Nov 5 23:41:03.454623 systemd[1]: Started sshd@18-10.0.0.43:22-10.0.0.1:53750.service - OpenSSH per-connection server daemon (10.0.0.1:53750). Nov 5 23:41:03.457097 systemd-logind[1503]: Removed session 18. Nov 5 23:41:03.512444 sshd[5059]: Accepted publickey for core from 10.0.0.1 port 53750 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:41:03.513839 sshd-session[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:41:03.518461 systemd-logind[1503]: New session 19 of user core. Nov 5 23:41:03.527741 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 5 23:41:03.975566 sshd[5063]: Connection closed by 10.0.0.1 port 53750 Nov 5 23:41:03.976615 sshd-session[5059]: pam_unix(sshd:session): session closed for user core Nov 5 23:41:03.986079 systemd[1]: sshd@18-10.0.0.43:22-10.0.0.1:53750.service: Deactivated successfully. Nov 5 23:41:03.990891 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 23:41:03.992728 systemd-logind[1503]: Session 19 logged out. Waiting for processes to exit. Nov 5 23:41:03.995871 systemd[1]: Started sshd@19-10.0.0.43:22-10.0.0.1:53764.service - OpenSSH per-connection server daemon (10.0.0.1:53764). Nov 5 23:41:03.999048 systemd-logind[1503]: Removed session 19. Nov 5 23:41:04.062481 sshd[5074]: Accepted publickey for core from 10.0.0.1 port 53764 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:41:04.064519 sshd-session[5074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:41:04.069553 systemd-logind[1503]: New session 20 of user core. Nov 5 23:41:04.081633 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 5 23:41:04.220488 sshd[5077]: Connection closed by 10.0.0.1 port 53764 Nov 5 23:41:04.220869 sshd-session[5074]: pam_unix(sshd:session): session closed for user core Nov 5 23:41:04.226463 systemd[1]: sshd@19-10.0.0.43:22-10.0.0.1:53764.service: Deactivated successfully. 
Nov 5 23:41:04.228567 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 23:41:04.230640 systemd-logind[1503]: Session 20 logged out. Waiting for processes to exit. Nov 5 23:41:04.232187 systemd-logind[1503]: Removed session 20. Nov 5 23:41:05.276424 kubelet[2683]: E1105 23:41:05.276363 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b5dd884c4-dpl4b" podUID="25336bb2-bdfb-4d75-81ae-8315253dce4b" Nov 5 23:41:06.272736 containerd[1521]: time="2025-11-05T23:41:06.272692198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 23:41:06.498671 containerd[1521]: time="2025-11-05T23:41:06.498576695Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:41:06.499584 containerd[1521]: time="2025-11-05T23:41:06.499541378Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 23:41:06.499687 containerd[1521]: time="2025-11-05T23:41:06.499641978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 23:41:06.499840 kubelet[2683]: E1105 23:41:06.499795 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 23:41:06.500142 kubelet[2683]: E1105 23:41:06.499854 2683 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 23:41:06.500142 kubelet[2683]: E1105 23:41:06.499989 2683 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:aff672a146af41bb8a3ea782c09d4a4b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7hlvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},
TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-685d97d67-g62jm_calico-system(013bb512-5088-47f3-94c1-e2243634b474): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 23:41:06.503360 containerd[1521]: time="2025-11-05T23:41:06.503333467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 23:41:06.820720 containerd[1521]: time="2025-11-05T23:41:06.820661862Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:41:06.834559 containerd[1521]: time="2025-11-05T23:41:06.834486815Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 23:41:06.835355 containerd[1521]: time="2025-11-05T23:41:06.834499535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 23:41:06.835439 kubelet[2683]: E1105 23:41:06.834772 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 23:41:06.835439 kubelet[2683]: E1105 23:41:06.834819 2683 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 23:41:06.835439 kubelet[2683]: E1105 23:41:06.834936 2683 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hlvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},A
ppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-685d97d67-g62jm_calico-system(013bb512-5088-47f3-94c1-e2243634b474): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 23:41:06.836189 kubelet[2683]: E1105 23:41:06.836140 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-685d97d67-g62jm" podUID="013bb512-5088-47f3-94c1-e2243634b474" Nov 5 23:41:09.233300 systemd[1]: Started sshd@20-10.0.0.43:22-10.0.0.1:56448.service - OpenSSH per-connection server daemon (10.0.0.1:56448). Nov 5 23:41:09.290587 sshd[5098]: Accepted publickey for core from 10.0.0.1 port 56448 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:41:09.291945 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:41:09.297109 systemd-logind[1503]: New session 21 of user core. 
Nov 5 23:41:09.306639 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 5 23:41:09.433956 sshd[5101]: Connection closed by 10.0.0.1 port 56448 Nov 5 23:41:09.434941 sshd-session[5098]: pam_unix(sshd:session): session closed for user core Nov 5 23:41:09.439677 systemd[1]: sshd@20-10.0.0.43:22-10.0.0.1:56448.service: Deactivated successfully. Nov 5 23:41:09.441505 systemd[1]: session-21.scope: Deactivated successfully. Nov 5 23:41:09.442332 systemd-logind[1503]: Session 21 logged out. Waiting for processes to exit. Nov 5 23:41:09.444566 systemd-logind[1503]: Removed session 21. Nov 5 23:41:10.274351 containerd[1521]: time="2025-11-05T23:41:10.274310423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 23:41:10.503165 containerd[1521]: time="2025-11-05T23:41:10.502992444Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:41:10.504026 containerd[1521]: time="2025-11-05T23:41:10.503987527Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 23:41:10.504185 containerd[1521]: time="2025-11-05T23:41:10.504022127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 23:41:10.504412 kubelet[2683]: E1105 23:41:10.504337 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 23:41:10.504715 
kubelet[2683]: E1105 23:41:10.504420 2683 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 23:41:10.504715 kubelet[2683]: E1105 23:41:10.504570 2683 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k9bpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-58d449f5dd-whvqx_calico-system(b316df6c-d77d-45e2-a17b-32be21557bd5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 23:41:10.506042 kubelet[2683]: E1105 23:41:10.506003 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-58d449f5dd-whvqx" podUID="b316df6c-d77d-45e2-a17b-32be21557bd5" Nov 5 23:41:11.277119 containerd[1521]: time="2025-11-05T23:41:11.277030610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:41:11.491145 containerd[1521]: time="2025-11-05T23:41:11.491084590Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:41:11.492301 containerd[1521]: time="2025-11-05T23:41:11.492247313Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:41:11.492366 containerd[1521]: time="2025-11-05T23:41:11.492334713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:41:11.492741 kubelet[2683]: E1105 23:41:11.492500 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:41:11.492741 kubelet[2683]: E1105 23:41:11.492562 2683 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:41:11.492741 kubelet[2683]: E1105 23:41:11.492691 2683 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mlzw8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b5dd884c4-fbmq2_calico-apiserver(212ccd9f-8d71-41e6-a6d1-799624e298a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:41:11.493916 kubelet[2683]: E1105 23:41:11.493879 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b5dd884c4-fbmq2" podUID="212ccd9f-8d71-41e6-a6d1-799624e298a9" Nov 5 23:41:13.276794 containerd[1521]: time="2025-11-05T23:41:13.276752566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 23:41:13.500272 containerd[1521]: 
time="2025-11-05T23:41:13.500226988Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:41:13.501344 containerd[1521]: time="2025-11-05T23:41:13.501305910Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 23:41:13.501386 containerd[1521]: time="2025-11-05T23:41:13.501362110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 23:41:13.502596 kubelet[2683]: E1105 23:41:13.502549 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 23:41:13.503071 kubelet[2683]: E1105 23:41:13.502608 2683 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 23:41:13.503071 kubelet[2683]: E1105 23:41:13.502767 2683 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q466l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wrn7f_calico-system(17800fb7-e812-49ed-8d87-f115a699b129): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 23:41:13.504574 kubelet[2683]: E1105 23:41:13.504515 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wrn7f" podUID="17800fb7-e812-49ed-8d87-f115a699b129" Nov 5 23:41:14.447166 systemd[1]: Started sshd@21-10.0.0.43:22-10.0.0.1:56454.service - OpenSSH per-connection server daemon (10.0.0.1:56454). 
Nov 5 23:41:14.507114 sshd[5117]: Accepted publickey for core from 10.0.0.1 port 56454 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:41:14.508508 sshd-session[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:41:14.512626 systemd-logind[1503]: New session 22 of user core. Nov 5 23:41:14.527828 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 5 23:41:14.674716 sshd[5120]: Connection closed by 10.0.0.1 port 56454 Nov 5 23:41:14.675100 sshd-session[5117]: pam_unix(sshd:session): session closed for user core Nov 5 23:41:14.679802 systemd[1]: sshd@21-10.0.0.43:22-10.0.0.1:56454.service: Deactivated successfully. Nov 5 23:41:14.682096 systemd[1]: session-22.scope: Deactivated successfully. Nov 5 23:41:14.682948 systemd-logind[1503]: Session 22 logged out. Waiting for processes to exit. Nov 5 23:41:14.684301 systemd-logind[1503]: Removed session 22. Nov 5 23:41:17.273855 containerd[1521]: time="2025-11-05T23:41:17.273479321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 23:41:17.489622 containerd[1521]: time="2025-11-05T23:41:17.489567895Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:41:17.490682 containerd[1521]: time="2025-11-05T23:41:17.490625897Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 23:41:17.490754 containerd[1521]: time="2025-11-05T23:41:17.490707938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 23:41:17.491465 kubelet[2683]: E1105 23:41:17.491415 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 23:41:17.491788 kubelet[2683]: E1105 23:41:17.491476 2683 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 23:41:17.491788 kubelet[2683]: E1105 23:41:17.491662 2683 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2mv42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Cap
abilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kzcdk_calico-system(99b6a710-372d-4a19-969a-3a96da2ba20c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 23:41:17.493680 containerd[1521]: time="2025-11-05T23:41:17.493639703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 23:41:17.699765 containerd[1521]: time="2025-11-05T23:41:17.699650098Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:41:17.700772 containerd[1521]: time="2025-11-05T23:41:17.700682980Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 23:41:17.700772 containerd[1521]: time="2025-11-05T23:41:17.700696940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 23:41:17.700972 kubelet[2683]: E1105 23:41:17.700893 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 23:41:17.700972 kubelet[2683]: E1105 23:41:17.700965 2683 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 23:41:17.701480 kubelet[2683]: E1105 23:41:17.701092 2683 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2mv42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kzcdk_calico-system(99b6a710-372d-4a19-969a-3a96da2ba20c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 23:41:17.702673 kubelet[2683]: E1105 23:41:17.702629 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kzcdk" podUID="99b6a710-372d-4a19-969a-3a96da2ba20c" Nov 5 23:41:19.688634 systemd[1]: Started sshd@22-10.0.0.43:22-10.0.0.1:58272.service - OpenSSH per-connection server daemon (10.0.0.1:58272). Nov 5 23:41:19.735310 sshd[5137]: Accepted publickey for core from 10.0.0.1 port 58272 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:41:19.736883 sshd-session[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:41:19.744172 systemd-logind[1503]: New session 23 of user core. Nov 5 23:41:19.749552 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 5 23:41:19.887016 sshd[5140]: Connection closed by 10.0.0.1 port 58272 Nov 5 23:41:19.887310 sshd-session[5137]: pam_unix(sshd:session): session closed for user core Nov 5 23:41:19.891487 systemd[1]: sshd@22-10.0.0.43:22-10.0.0.1:58272.service: Deactivated successfully. Nov 5 23:41:19.893304 systemd[1]: session-23.scope: Deactivated successfully. 
Nov 5 23:41:19.894370 systemd-logind[1503]: Session 23 logged out. Waiting for processes to exit. Nov 5 23:41:19.895247 systemd-logind[1503]: Removed session 23. Nov 5 23:41:20.273230 containerd[1521]: time="2025-11-05T23:41:20.273186547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:41:20.508417 containerd[1521]: time="2025-11-05T23:41:20.508348335Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:41:20.509273 containerd[1521]: time="2025-11-05T23:41:20.509224496Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:41:20.509312 containerd[1521]: time="2025-11-05T23:41:20.509278816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:41:20.509572 kubelet[2683]: E1105 23:41:20.509520 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:41:20.510115 kubelet[2683]: E1105 23:41:20.509902 2683 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:41:20.510247 kubelet[2683]: E1105 23:41:20.510077 2683 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5zllw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b5dd884c4-dpl4b_calico-apiserver(25336bb2-bdfb-4d75-81ae-8315253dce4b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:41:20.511349 kubelet[2683]: E1105 23:41:20.511303 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b5dd884c4-dpl4b" podUID="25336bb2-bdfb-4d75-81ae-8315253dce4b"