Oct 27 23:39:06.767218 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 27 23:39:06.767242 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Mon Oct 27 22:06:39 -00 2025
Oct 27 23:39:06.767251 kernel: KASLR enabled
Oct 27 23:39:06.767257 kernel: efi: EFI v2.7 by EDK II
Oct 27 23:39:06.767262 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Oct 27 23:39:06.767267 kernel: random: crng init done
Oct 27 23:39:06.767274 kernel: secureboot: Secure boot disabled
Oct 27 23:39:06.767280 kernel: ACPI: Early table checksum verification disabled
Oct 27 23:39:06.767286 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Oct 27 23:39:06.767293 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 27 23:39:06.767299 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 23:39:06.767304 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 23:39:06.767310 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 23:39:06.767316 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 23:39:06.767323 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 23:39:06.767330 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 23:39:06.767336 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 23:39:06.767342 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 23:39:06.767348 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 23:39:06.767354 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 27 23:39:06.767360 kernel: ACPI: Use ACPI SPCR as default console: No
Oct 27 23:39:06.767366 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 27 23:39:06.767393 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Oct 27 23:39:06.767400 kernel: Zone ranges:
Oct 27 23:39:06.767406 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 27 23:39:06.767418 kernel: DMA32 empty
Oct 27 23:39:06.767425 kernel: Normal empty
Oct 27 23:39:06.767430 kernel: Device empty
Oct 27 23:39:06.767437 kernel: Movable zone start for each node
Oct 27 23:39:06.767444 kernel: Early memory node ranges
Oct 27 23:39:06.767450 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Oct 27 23:39:06.767456 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Oct 27 23:39:06.767462 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Oct 27 23:39:06.767468 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Oct 27 23:39:06.767475 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Oct 27 23:39:06.767481 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Oct 27 23:39:06.767487 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Oct 27 23:39:06.767494 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Oct 27 23:39:06.767501 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Oct 27 23:39:06.767507 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Oct 27 23:39:06.767516 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Oct 27 23:39:06.767523 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Oct 27 23:39:06.767530 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Oct 27 23:39:06.767537 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 27 23:39:06.767544 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 27 23:39:06.767551 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Oct 27 23:39:06.767558 kernel: psci: probing for conduit method from ACPI.
Oct 27 23:39:06.767564 kernel: psci: PSCIv1.1 detected in firmware.
Oct 27 23:39:06.767571 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 27 23:39:06.767577 kernel: psci: Trusted OS migration not required
Oct 27 23:39:06.767584 kernel: psci: SMC Calling Convention v1.1
Oct 27 23:39:06.767591 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 27 23:39:06.767597 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Oct 27 23:39:06.767606 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Oct 27 23:39:06.767612 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 27 23:39:06.767619 kernel: Detected PIPT I-cache on CPU0
Oct 27 23:39:06.767625 kernel: CPU features: detected: GIC system register CPU interface
Oct 27 23:39:06.767631 kernel: CPU features: detected: Spectre-v4
Oct 27 23:39:06.767638 kernel: CPU features: detected: Spectre-BHB
Oct 27 23:39:06.767644 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 27 23:39:06.767651 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 27 23:39:06.767657 kernel: CPU features: detected: ARM erratum 1418040
Oct 27 23:39:06.767663 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 27 23:39:06.767670 kernel: alternatives: applying boot alternatives
Oct 27 23:39:06.767677 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7da44627248fe1fbee2c83c4ccd30b78ae5d30059ff898a840de6b6417372b60
Oct 27 23:39:06.767685 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 27 23:39:06.767692 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 27 23:39:06.767698 kernel: Fallback order for Node 0: 0
Oct 27 23:39:06.767705 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Oct 27 23:39:06.767711 kernel: Policy zone: DMA
Oct 27 23:39:06.767717 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 27 23:39:06.767724 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Oct 27 23:39:06.767730 kernel: software IO TLB: area num 4.
Oct 27 23:39:06.767736 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Oct 27 23:39:06.767743 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Oct 27 23:39:06.767757 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 27 23:39:06.767765 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 27 23:39:06.767772 kernel: rcu: RCU event tracing is enabled.
Oct 27 23:39:06.767779 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 27 23:39:06.767785 kernel: Trampoline variant of Tasks RCU enabled.
Oct 27 23:39:06.767791 kernel: Tracing variant of Tasks RCU enabled.
Oct 27 23:39:06.767798 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 27 23:39:06.767804 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 27 23:39:06.767811 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 27 23:39:06.767817 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 27 23:39:06.767824 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 27 23:39:06.767830 kernel: GICv3: 256 SPIs implemented
Oct 27 23:39:06.767837 kernel: GICv3: 0 Extended SPIs implemented
Oct 27 23:39:06.767844 kernel: Root IRQ handler: gic_handle_irq
Oct 27 23:39:06.767850 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 27 23:39:06.767856 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Oct 27 23:39:06.767862 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 27 23:39:06.767869 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 27 23:39:06.767875 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Oct 27 23:39:06.767882 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Oct 27 23:39:06.767889 kernel: GICv3: using LPI property table @0x0000000040130000
Oct 27 23:39:06.767895 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Oct 27 23:39:06.767901 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 27 23:39:06.767908 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 27 23:39:06.767915 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 27 23:39:06.767922 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 27 23:39:06.767928 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 27 23:39:06.767934 kernel: arm-pv: using stolen time PV
Oct 27 23:39:06.767941 kernel: Console: colour dummy device 80x25
Oct 27 23:39:06.767948 kernel: ACPI: Core revision 20240827
Oct 27 23:39:06.767954 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 27 23:39:06.767961 kernel: pid_max: default: 32768 minimum: 301
Oct 27 23:39:06.767967 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 27 23:39:06.767974 kernel: landlock: Up and running.
Oct 27 23:39:06.767982 kernel: SELinux: Initializing.
Oct 27 23:39:06.767988 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 27 23:39:06.767995 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 27 23:39:06.768001 kernel: rcu: Hierarchical SRCU implementation.
Oct 27 23:39:06.768008 kernel: rcu: Max phase no-delay instances is 400.
Oct 27 23:39:06.768032 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 27 23:39:06.768039 kernel: Remapping and enabling EFI services.
Oct 27 23:39:06.768046 kernel: smp: Bringing up secondary CPUs ...
Oct 27 23:39:06.768052 kernel: Detected PIPT I-cache on CPU1
Oct 27 23:39:06.768079 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 27 23:39:06.768086 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Oct 27 23:39:06.768093 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 27 23:39:06.768101 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 27 23:39:06.768108 kernel: Detected PIPT I-cache on CPU2
Oct 27 23:39:06.768115 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 27 23:39:06.768123 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Oct 27 23:39:06.768130 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 27 23:39:06.768138 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 27 23:39:06.768145 kernel: Detected PIPT I-cache on CPU3
Oct 27 23:39:06.768152 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 27 23:39:06.768159 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Oct 27 23:39:06.768166 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 27 23:39:06.768173 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 27 23:39:06.768180 kernel: smp: Brought up 1 node, 4 CPUs
Oct 27 23:39:06.768187 kernel: SMP: Total of 4 processors activated.
Oct 27 23:39:06.768193 kernel: CPU: All CPU(s) started at EL1
Oct 27 23:39:06.768201 kernel: CPU features: detected: 32-bit EL0 Support
Oct 27 23:39:06.768209 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 27 23:39:06.768215 kernel: CPU features: detected: Common not Private translations
Oct 27 23:39:06.768222 kernel: CPU features: detected: CRC32 instructions
Oct 27 23:39:06.768229 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 27 23:39:06.768236 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 27 23:39:06.768243 kernel: CPU features: detected: LSE atomic instructions
Oct 27 23:39:06.768250 kernel: CPU features: detected: Privileged Access Never
Oct 27 23:39:06.768257 kernel: CPU features: detected: RAS Extension Support
Oct 27 23:39:06.768265 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 27 23:39:06.768272 kernel: alternatives: applying system-wide alternatives
Oct 27 23:39:06.768279 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Oct 27 23:39:06.768286 kernel: Memory: 2424416K/2572288K available (11136K kernel code, 2450K rwdata, 9076K rodata, 38976K init, 1038K bss, 125536K reserved, 16384K cma-reserved)
Oct 27 23:39:06.768293 kernel: devtmpfs: initialized
Oct 27 23:39:06.768301 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 27 23:39:06.768308 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 27 23:39:06.768314 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 27 23:39:06.768321 kernel: 0 pages in range for non-PLT usage
Oct 27 23:39:06.768330 kernel: 508560 pages in range for PLT usage
Oct 27 23:39:06.768337 kernel: pinctrl core: initialized pinctrl subsystem
Oct 27 23:39:06.768344 kernel: SMBIOS 3.0.0 present.
Oct 27 23:39:06.768350 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Oct 27 23:39:06.768358 kernel: DMI: Memory slots populated: 1/1
Oct 27 23:39:06.768365 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 27 23:39:06.768372 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 27 23:39:06.768379 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 27 23:39:06.768386 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 27 23:39:06.768395 kernel: audit: initializing netlink subsys (disabled)
Oct 27 23:39:06.768402 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Oct 27 23:39:06.768409 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 27 23:39:06.768416 kernel: cpuidle: using governor menu
Oct 27 23:39:06.768423 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 27 23:39:06.768430 kernel: ASID allocator initialised with 32768 entries
Oct 27 23:39:06.768437 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 27 23:39:06.768444 kernel: Serial: AMBA PL011 UART driver
Oct 27 23:39:06.768450 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 27 23:39:06.768458 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 27 23:39:06.768465 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 27 23:39:06.768472 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 27 23:39:06.768479 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 27 23:39:06.768486 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 27 23:39:06.768493 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 27 23:39:06.768500 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 27 23:39:06.768507 kernel: ACPI: Added _OSI(Module Device)
Oct 27 23:39:06.768514 kernel: ACPI: Added _OSI(Processor Device)
Oct 27 23:39:06.768523 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 27 23:39:06.768530 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 27 23:39:06.768538 kernel: ACPI: Interpreter enabled
Oct 27 23:39:06.768545 kernel: ACPI: Using GIC for interrupt routing
Oct 27 23:39:06.768552 kernel: ACPI: MCFG table detected, 1 entries
Oct 27 23:39:06.768559 kernel: ACPI: CPU0 has been hot-added
Oct 27 23:39:06.768566 kernel: ACPI: CPU1 has been hot-added
Oct 27 23:39:06.768573 kernel: ACPI: CPU2 has been hot-added
Oct 27 23:39:06.768581 kernel: ACPI: CPU3 has been hot-added
Oct 27 23:39:06.768588 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 27 23:39:06.768596 kernel: printk: legacy console [ttyAMA0] enabled
Oct 27 23:39:06.768603 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 27 23:39:06.768735 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 27 23:39:06.768817 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 27 23:39:06.768877 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 27 23:39:06.768934 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 27 23:39:06.768991 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 27 23:39:06.769003 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 27 23:39:06.769021 kernel: PCI host bridge to bus 0000:00
Oct 27 23:39:06.769092 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 27 23:39:06.769150 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 27 23:39:06.769204 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 27 23:39:06.769257 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 27 23:39:06.769342 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Oct 27 23:39:06.769417 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 27 23:39:06.769491 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Oct 27 23:39:06.769552 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Oct 27 23:39:06.769614 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 27 23:39:06.769679 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Oct 27 23:39:06.769741 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Oct 27 23:39:06.769813 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Oct 27 23:39:06.769868 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 27 23:39:06.769922 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 27 23:39:06.769977 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 27 23:39:06.769986 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 27 23:39:06.769993 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 27 23:39:06.770000 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 27 23:39:06.770007 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 27 23:39:06.770026 kernel: iommu: Default domain type: Translated
Oct 27 23:39:06.770034 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 27 23:39:06.770041 kernel: efivars: Registered efivars operations
Oct 27 23:39:06.770048 kernel: vgaarb: loaded
Oct 27 23:39:06.770055 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 27 23:39:06.770062 kernel: VFS: Disk quotas dquot_6.6.0
Oct 27 23:39:06.770069 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 27 23:39:06.770076 kernel: pnp: PnP ACPI init
Oct 27 23:39:06.770144 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 27 23:39:06.770156 kernel: pnp: PnP ACPI: found 1 devices
Oct 27 23:39:06.770163 kernel: NET: Registered PF_INET protocol family
Oct 27 23:39:06.770170 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 27 23:39:06.770177 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 27 23:39:06.770184 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 27 23:39:06.770191 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 27 23:39:06.770198 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 27 23:39:06.770205 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 27 23:39:06.770213 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 27 23:39:06.770220 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 27 23:39:06.770227 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 27 23:39:06.770234 kernel: PCI: CLS 0 bytes, default 64
Oct 27 23:39:06.770241 kernel: kvm [1]: HYP mode not available
Oct 27 23:39:06.770248 kernel: Initialise system trusted keyrings
Oct 27 23:39:06.770255 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 27 23:39:06.770262 kernel: Key type asymmetric registered
Oct 27 23:39:06.770269 kernel: Asymmetric key parser 'x509' registered
Oct 27 23:39:06.770277 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 27 23:39:06.770284 kernel: io scheduler mq-deadline registered
Oct 27 23:39:06.770291 kernel: io scheduler kyber registered
Oct 27 23:39:06.770298 kernel: io scheduler bfq registered
Oct 27 23:39:06.770305 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 27 23:39:06.770312 kernel: ACPI: button: Power Button [PWRB]
Oct 27 23:39:06.770319 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 27 23:39:06.770385 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 27 23:39:06.770394 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 27 23:39:06.770403 kernel: thunder_xcv, ver 1.0
Oct 27 23:39:06.770410 kernel: thunder_bgx, ver 1.0
Oct 27 23:39:06.770417 kernel: nicpf, ver 1.0
Oct 27 23:39:06.770424 kernel: nicvf, ver 1.0
Oct 27 23:39:06.770491 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 27 23:39:06.770549 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-27T23:39:06 UTC (1761608346)
Oct 27 23:39:06.770558 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 27 23:39:06.770565 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Oct 27 23:39:06.770574 kernel: watchdog: NMI not fully supported
Oct 27 23:39:06.770581 kernel: watchdog: Hard watchdog permanently disabled
Oct 27 23:39:06.770589 kernel: NET: Registered PF_INET6 protocol family
Oct 27 23:39:06.770596 kernel: Segment Routing with IPv6
Oct 27 23:39:06.770603 kernel: In-situ OAM (IOAM) with IPv6
Oct 27 23:39:06.770610 kernel: NET: Registered PF_PACKET protocol family
Oct 27 23:39:06.770616 kernel: Key type dns_resolver registered
Oct 27 23:39:06.770623 kernel: registered taskstats version 1
Oct 27 23:39:06.770630 kernel: Loading compiled-in X.509 certificates
Oct 27 23:39:06.770638 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: d36d3f99f7c8356b27e0c5530c216cd6f7ab4d7e'
Oct 27 23:39:06.770647 kernel: Demotion targets for Node 0: null
Oct 27 23:39:06.770654 kernel: Key type .fscrypt registered
Oct 27 23:39:06.770661 kernel: Key type fscrypt-provisioning registered
Oct 27 23:39:06.770668 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 27 23:39:06.770675 kernel: ima: Allocated hash algorithm: sha1
Oct 27 23:39:06.770682 kernel: ima: No architecture policies found
Oct 27 23:39:06.770689 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 27 23:39:06.770696 kernel: clk: Disabling unused clocks
Oct 27 23:39:06.770703 kernel: PM: genpd: Disabling unused power domains
Oct 27 23:39:06.770711 kernel: Warning: unable to open an initial console.
Oct 27 23:39:06.770718 kernel: Freeing unused kernel memory: 38976K
Oct 27 23:39:06.770725 kernel: Run /init as init process
Oct 27 23:39:06.770732 kernel: with arguments:
Oct 27 23:39:06.770739 kernel: /init
Oct 27 23:39:06.770754 kernel: with environment:
Oct 27 23:39:06.770761 kernel: HOME=/
Oct 27 23:39:06.770768 kernel: TERM=linux
Oct 27 23:39:06.770776 systemd[1]: Successfully made /usr/ read-only.
Oct 27 23:39:06.770788 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 27 23:39:06.770796 systemd[1]: Detected virtualization kvm.
Oct 27 23:39:06.770803 systemd[1]: Detected architecture arm64.
Oct 27 23:39:06.770814 systemd[1]: Running in initrd.
Oct 27 23:39:06.770821 systemd[1]: No hostname configured, using default hostname.
Oct 27 23:39:06.770829 systemd[1]: Hostname set to .
Oct 27 23:39:06.770836 systemd[1]: Initializing machine ID from VM UUID.
Oct 27 23:39:06.770846 systemd[1]: Queued start job for default target initrd.target.
Oct 27 23:39:06.770853 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 27 23:39:06.770861 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 27 23:39:06.770869 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 27 23:39:06.770879 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 27 23:39:06.770887 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 27 23:39:06.770895 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 27 23:39:06.770907 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 27 23:39:06.770915 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 27 23:39:06.770926 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 27 23:39:06.770934 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 27 23:39:06.770941 systemd[1]: Reached target paths.target - Path Units.
Oct 27 23:39:06.770949 systemd[1]: Reached target slices.target - Slice Units.
Oct 27 23:39:06.770956 systemd[1]: Reached target swap.target - Swaps.
Oct 27 23:39:06.770964 systemd[1]: Reached target timers.target - Timer Units.
Oct 27 23:39:06.770973 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 27 23:39:06.770982 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 27 23:39:06.770990 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 27 23:39:06.770997 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 27 23:39:06.771005 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 27 23:39:06.771020 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 27 23:39:06.771029 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 27 23:39:06.771038 systemd[1]: Reached target sockets.target - Socket Units.
Oct 27 23:39:06.771047 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 27 23:39:06.771056 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 27 23:39:06.771068 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 27 23:39:06.771079 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 27 23:39:06.771087 systemd[1]: Starting systemd-fsck-usr.service...
Oct 27 23:39:06.771094 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 27 23:39:06.771102 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 27 23:39:06.771109 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 27 23:39:06.771116 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 27 23:39:06.771126 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 27 23:39:06.771133 systemd[1]: Finished systemd-fsck-usr.service.
Oct 27 23:39:06.771158 systemd-journald[243]: Collecting audit messages is disabled.
Oct 27 23:39:06.771178 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 27 23:39:06.771187 systemd-journald[243]: Journal started
Oct 27 23:39:06.771205 systemd-journald[243]: Runtime Journal (/run/log/journal/53641e465f2e4243a41dbd341d9c79aa) is 6M, max 48.5M, 42.4M free.
Oct 27 23:39:06.777135 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 27 23:39:06.777183 kernel: Bridge firewalling registered
Oct 27 23:39:06.777194 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 27 23:39:06.760472 systemd-modules-load[245]: Inserted module 'overlay'
Oct 27 23:39:06.774986 systemd-modules-load[245]: Inserted module 'br_netfilter'
Oct 27 23:39:06.783235 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 27 23:39:06.783751 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 27 23:39:06.785161 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 27 23:39:06.790083 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 27 23:39:06.791824 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 27 23:39:06.794087 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 27 23:39:06.806841 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 27 23:39:06.814359 systemd-tmpfiles[271]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 27 23:39:06.814651 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 27 23:39:06.817615 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 27 23:39:06.821532 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 27 23:39:06.824565 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 27 23:39:06.828074 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 27 23:39:06.841663 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 27 23:39:06.856789 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7da44627248fe1fbee2c83c4ccd30b78ae5d30059ff898a840de6b6417372b60
Oct 27 23:39:06.876372 systemd-resolved[287]: Positive Trust Anchors:
Oct 27 23:39:06.876392 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 27 23:39:06.876422 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 27 23:39:06.885340 systemd-resolved[287]: Defaulting to hostname 'linux'.
Oct 27 23:39:06.886665 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 27 23:39:06.888376 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 27 23:39:06.934055 kernel: SCSI subsystem initialized
Oct 27 23:39:06.940063 kernel: Loading iSCSI transport class v2.0-870.
Oct 27 23:39:06.949054 kernel: iscsi: registered transport (tcp)
Oct 27 23:39:06.962094 kernel: iscsi: registered transport (qla4xxx)
Oct 27 23:39:06.962150 kernel: QLogic iSCSI HBA Driver
Oct 27 23:39:06.978476 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 27 23:39:07.000761 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 27 23:39:07.003097 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 27 23:39:07.049639 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 27 23:39:07.052086 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 27 23:39:07.116047 kernel: raid6: neonx8 gen() 15481 MB/s
Oct 27 23:39:07.133045 kernel: raid6: neonx4 gen() 15177 MB/s
Oct 27 23:39:07.150040 kernel: raid6: neonx2 gen() 12987 MB/s
Oct 27 23:39:07.167029 kernel: raid6: neonx1 gen() 10349 MB/s
Oct 27 23:39:07.184039 kernel: raid6: int64x8 gen() 6615 MB/s
Oct 27 23:39:07.201040 kernel: raid6: int64x4 gen() 7066 MB/s
Oct 27 23:39:07.218041 kernel: raid6: int64x2 gen() 5973 MB/s
Oct 27 23:39:07.235343 kernel: raid6: int64x1 gen() 4820 MB/s
Oct 27 23:39:07.235361 kernel: raid6: using algorithm neonx8 gen() 15481 MB/s
Oct 27 23:39:07.253393 kernel: raid6: .... xor() 11997 MB/s, rmw enabled
Oct 27 23:39:07.253413 kernel: raid6: using neon recovery algorithm
Oct 27 23:39:07.259036 kernel: xor: measuring software checksum speed
Oct 27 23:39:07.259064 kernel: 8regs : 21647 MB/sec
Oct 27 23:39:07.259074 kernel: 32regs : 19155 MB/sec
Oct 27 23:39:07.260177 kernel: arm64_neon : 28138 MB/sec
Oct 27 23:39:07.260192 kernel: xor: using function: arm64_neon (28138 MB/sec)
Oct 27 23:39:07.314039 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 27 23:39:07.320618 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 27 23:39:07.325419 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 27 23:39:07.353867 systemd-udevd[501]: Using default interface naming scheme 'v255'.
Oct 27 23:39:07.358036 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 27 23:39:07.362113 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 27 23:39:07.392002 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation
Oct 27 23:39:07.417105 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 27 23:39:07.419627 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 27 23:39:07.477375 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 27 23:39:07.480105 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 27 23:39:07.528041 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Oct 27 23:39:07.532555 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 27 23:39:07.537814 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 27 23:39:07.542160 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 27 23:39:07.549866 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 27 23:39:07.549900 kernel: GPT:9289727 != 19775487
Oct 27 23:39:07.549911 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 27 23:39:07.549921 kernel: GPT:9289727 != 19775487
Oct 27 23:39:07.549936 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 27 23:39:07.549945 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 27 23:39:07.549663 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 27 23:39:07.551784 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 27 23:39:07.587234 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 27 23:39:07.590186 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 27 23:39:07.591642 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 27 23:39:07.605807 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 27 23:39:07.613809 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 27 23:39:07.620468 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 27 23:39:07.621942 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 27 23:39:07.624622 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 27 23:39:07.628076 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 23:39:07.630484 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 27 23:39:07.633605 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 27 23:39:07.635699 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 27 23:39:07.655353 disk-uuid[593]: Primary Header is updated. Oct 27 23:39:07.655353 disk-uuid[593]: Secondary Entries is updated. Oct 27 23:39:07.655353 disk-uuid[593]: Secondary Header is updated. Oct 27 23:39:07.663443 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 27 23:39:07.655974 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 27 23:39:08.669044 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 27 23:39:08.670733 disk-uuid[599]: The operation has completed successfully. Oct 27 23:39:08.715029 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 27 23:39:08.715133 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 27 23:39:08.731386 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 27 23:39:08.770447 sh[612]: Success Oct 27 23:39:08.787750 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 27 23:39:08.787816 kernel: device-mapper: uevent: version 1.0.3 Oct 27 23:39:08.787828 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 27 23:39:08.797037 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Oct 27 23:39:08.828103 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 27 23:39:08.830777 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Oct 27 23:39:08.846079 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 27 23:39:08.860041 kernel: BTRFS: device fsid 5a6ca053-244f-4cbd-93f9-9b9e55af9b0a devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (625) Oct 27 23:39:08.862052 kernel: BTRFS info (device dm-0): first mount of filesystem 5a6ca053-244f-4cbd-93f9-9b9e55af9b0a Oct 27 23:39:08.862099 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 27 23:39:08.867755 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 27 23:39:08.867805 kernel: BTRFS info (device dm-0): enabling free space tree Oct 27 23:39:08.868973 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 27 23:39:08.870421 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 27 23:39:08.872524 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 27 23:39:08.873359 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 27 23:39:08.875645 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 27 23:39:08.905124 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (654) Oct 27 23:39:08.905178 kernel: BTRFS info (device vda6): first mount of filesystem 1d73b0f7-269f-44d4-928d-157506a9bf3d Oct 27 23:39:08.905189 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 27 23:39:08.910451 kernel: BTRFS info (device vda6): turning on async discard Oct 27 23:39:08.910505 kernel: BTRFS info (device vda6): enabling free space tree Oct 27 23:39:08.915025 kernel: BTRFS info (device vda6): last unmount of filesystem 1d73b0f7-269f-44d4-928d-157506a9bf3d Oct 27 23:39:08.915730 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Oct 27 23:39:08.918293 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 27 23:39:08.992033 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 27 23:39:08.997211 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 27 23:39:09.036387 ignition[701]: Ignition 2.22.0 Oct 27 23:39:09.036402 ignition[701]: Stage: fetch-offline Oct 27 23:39:09.036439 ignition[701]: no configs at "/usr/lib/ignition/base.d" Oct 27 23:39:09.036447 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 23:39:09.036541 ignition[701]: parsed url from cmdline: "" Oct 27 23:39:09.036544 ignition[701]: no config URL provided Oct 27 23:39:09.036549 ignition[701]: reading system config file "/usr/lib/ignition/user.ign" Oct 27 23:39:09.041487 systemd-networkd[803]: lo: Link UP Oct 27 23:39:09.036557 ignition[701]: no config at "/usr/lib/ignition/user.ign" Oct 27 23:39:09.041490 systemd-networkd[803]: lo: Gained carrier Oct 27 23:39:09.036577 ignition[701]: op(1): [started] loading QEMU firmware config module Oct 27 23:39:09.042274 systemd-networkd[803]: Enumeration completed Oct 27 23:39:09.036581 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 27 23:39:09.042397 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 27 23:39:09.047490 ignition[701]: op(1): [finished] loading QEMU firmware config module Oct 27 23:39:09.042656 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 27 23:39:09.042659 systemd-networkd[803]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 27 23:39:09.043405 systemd-networkd[803]: eth0: Link UP Oct 27 23:39:09.043551 systemd-networkd[803]: eth0: Gained carrier Oct 27 23:39:09.043560 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 27 23:39:09.045679 systemd[1]: Reached target network.target - Network. Oct 27 23:39:09.075065 systemd-networkd[803]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 27 23:39:09.104571 ignition[701]: parsing config with SHA512: 39e2818f00dd5c5e043f5265ecb8d9f0756eea07decd3bb22d28d8f95ab9eec0c725354b0cf2bcaae2de15a62107fadffcd72d15837d267d23924ba50252ffc0 Oct 27 23:39:09.110305 unknown[701]: fetched base config from "system" Oct 27 23:39:09.110317 unknown[701]: fetched user config from "qemu" Oct 27 23:39:09.110688 ignition[701]: fetch-offline: fetch-offline passed Oct 27 23:39:09.112642 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 27 23:39:09.110754 ignition[701]: Ignition finished successfully Oct 27 23:39:09.114249 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 27 23:39:09.115118 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 27 23:39:09.142481 ignition[814]: Ignition 2.22.0 Oct 27 23:39:09.142494 ignition[814]: Stage: kargs Oct 27 23:39:09.142639 ignition[814]: no configs at "/usr/lib/ignition/base.d" Oct 27 23:39:09.142648 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 23:39:09.145606 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 27 23:39:09.143402 ignition[814]: kargs: kargs passed Oct 27 23:39:09.143448 ignition[814]: Ignition finished successfully Oct 27 23:39:09.148275 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Oct 27 23:39:09.175488 ignition[822]: Ignition 2.22.0 Oct 27 23:39:09.175507 ignition[822]: Stage: disks Oct 27 23:39:09.175632 ignition[822]: no configs at "/usr/lib/ignition/base.d" Oct 27 23:39:09.175640 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 23:39:09.176380 ignition[822]: disks: disks passed Oct 27 23:39:09.179079 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 27 23:39:09.176442 ignition[822]: Ignition finished successfully Oct 27 23:39:09.181047 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 27 23:39:09.182563 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 27 23:39:09.184745 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 27 23:39:09.186494 systemd[1]: Reached target sysinit.target - System Initialization. Oct 27 23:39:09.188664 systemd[1]: Reached target basic.target - Basic System. Oct 27 23:39:09.191721 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 27 23:39:09.216581 systemd-fsck[832]: ROOT: clean, 15/553520 files, 52789/553472 blocks Oct 27 23:39:09.220704 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 27 23:39:09.223116 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 27 23:39:09.297991 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 27 23:39:09.301692 kernel: EXT4-fs (vda9): mounted filesystem 8f9c7d7f-b094-48f2-af83-87ee7d7d8042 r/w with ordered data mode. Quota mode: none. Oct 27 23:39:09.300591 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 27 23:39:09.307236 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 27 23:39:09.310680 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 27 23:39:09.311774 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Oct 27 23:39:09.311829 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 27 23:39:09.311851 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 27 23:39:09.324712 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 27 23:39:09.326896 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 27 23:39:09.335922 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (840) Oct 27 23:39:09.335962 kernel: BTRFS info (device vda6): first mount of filesystem 1d73b0f7-269f-44d4-928d-157506a9bf3d Oct 27 23:39:09.337776 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 27 23:39:09.342036 kernel: BTRFS info (device vda6): turning on async discard Oct 27 23:39:09.342085 kernel: BTRFS info (device vda6): enabling free space tree Oct 27 23:39:09.344191 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 27 23:39:09.367791 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory Oct 27 23:39:09.372437 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory Oct 27 23:39:09.376472 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory Oct 27 23:39:09.380585 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory Oct 27 23:39:09.459862 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 27 23:39:09.462354 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 27 23:39:09.463869 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 27 23:39:09.489037 kernel: BTRFS info (device vda6): last unmount of filesystem 1d73b0f7-269f-44d4-928d-157506a9bf3d Oct 27 23:39:09.506175 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Oct 27 23:39:09.521835 ignition[953]: INFO : Ignition 2.22.0 Oct 27 23:39:09.521835 ignition[953]: INFO : Stage: mount Oct 27 23:39:09.523542 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 23:39:09.523542 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 23:39:09.523542 ignition[953]: INFO : mount: mount passed Oct 27 23:39:09.523542 ignition[953]: INFO : Ignition finished successfully Oct 27 23:39:09.526515 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 27 23:39:09.528622 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 27 23:39:09.859872 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 27 23:39:09.861681 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 27 23:39:09.895027 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (966) Oct 27 23:39:09.895343 kernel: BTRFS info (device vda6): first mount of filesystem 1d73b0f7-269f-44d4-928d-157506a9bf3d Oct 27 23:39:09.897141 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 27 23:39:09.900260 kernel: BTRFS info (device vda6): turning on async discard Oct 27 23:39:09.900301 kernel: BTRFS info (device vda6): enabling free space tree Oct 27 23:39:09.902599 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 27 23:39:09.946830 ignition[983]: INFO : Ignition 2.22.0 Oct 27 23:39:09.946830 ignition[983]: INFO : Stage: files Oct 27 23:39:09.948692 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 23:39:09.948692 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 23:39:09.948692 ignition[983]: DEBUG : files: compiled without relabeling support, skipping Oct 27 23:39:09.952384 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 27 23:39:09.952384 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 27 23:39:09.955497 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 27 23:39:09.955497 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 27 23:39:09.955497 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 27 23:39:09.954917 unknown[983]: wrote ssh authorized keys file for user: core Oct 27 23:39:09.960937 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Oct 27 23:39:09.960937 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Oct 27 23:39:10.015788 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 27 23:39:10.147154 systemd-networkd[803]: eth0: Gained IPv6LL Oct 27 23:39:10.154675 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Oct 27 23:39:10.156899 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 27 23:39:10.156899 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
[finished] writing file "/sysroot/home/core/install.sh" Oct 27 23:39:10.156899 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 27 23:39:10.156899 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 27 23:39:10.156899 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 27 23:39:10.156899 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 27 23:39:10.156899 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 27 23:39:10.156899 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 27 23:39:10.171897 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 27 23:39:10.171897 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 27 23:39:10.171897 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 27 23:39:10.171897 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 27 23:39:10.171897 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 27 23:39:10.171897 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Oct 27 23:39:10.444904 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 27 23:39:10.692889 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 27 23:39:10.692889 ignition[983]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 27 23:39:10.696690 ignition[983]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 27 23:39:10.700055 ignition[983]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 27 23:39:10.700055 ignition[983]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 27 23:39:10.700055 ignition[983]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 27 23:39:10.700055 ignition[983]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 27 23:39:10.700055 ignition[983]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 27 23:39:10.700055 ignition[983]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 27 23:39:10.700055 ignition[983]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 27 23:39:10.716489 ignition[983]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 27 23:39:10.720427 ignition[983]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 27 23:39:10.723204 ignition[983]: INFO : files: op(f): [finished] setting preset to 
disabled for "coreos-metadata.service" Oct 27 23:39:10.723204 ignition[983]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 27 23:39:10.723204 ignition[983]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 27 23:39:10.723204 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 27 23:39:10.723204 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 27 23:39:10.723204 ignition[983]: INFO : files: files passed Oct 27 23:39:10.723204 ignition[983]: INFO : Ignition finished successfully Oct 27 23:39:10.724105 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 27 23:39:10.726959 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 27 23:39:10.729506 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 27 23:39:10.743220 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 27 23:39:10.743317 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 27 23:39:10.748193 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory Oct 27 23:39:10.749617 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 27 23:39:10.749617 initrd-setup-root-after-ignition[1015]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 27 23:39:10.753589 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 27 23:39:10.754764 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 27 23:39:10.757247 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Oct 27 23:39:10.759459 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 27 23:39:10.788587 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 27 23:39:10.788701 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 27 23:39:10.791588 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 27 23:39:10.793275 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 27 23:39:10.795363 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 27 23:39:10.796285 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 27 23:39:10.831349 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 27 23:39:10.834124 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 27 23:39:10.857408 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 27 23:39:10.859902 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 23:39:10.862257 systemd[1]: Stopped target timers.target - Timer Units. Oct 27 23:39:10.864183 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 27 23:39:10.864329 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 27 23:39:10.867090 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 27 23:39:10.869710 systemd[1]: Stopped target basic.target - Basic System. Oct 27 23:39:10.871529 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 27 23:39:10.873321 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 27 23:39:10.877045 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 27 23:39:10.879774 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Oct 27 23:39:10.881865 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 27 23:39:10.884898 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 27 23:39:10.887300 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 27 23:39:10.889693 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 27 23:39:10.891770 systemd[1]: Stopped target swap.target - Swaps. Oct 27 23:39:10.893616 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 27 23:39:10.893761 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 27 23:39:10.896779 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 27 23:39:10.898901 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 23:39:10.901880 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 27 23:39:10.902101 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 27 23:39:10.904386 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 27 23:39:10.904523 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 27 23:39:10.907478 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 27 23:39:10.907615 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 27 23:39:10.909641 systemd[1]: Stopped target paths.target - Path Units. Oct 27 23:39:10.911318 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 27 23:39:10.916130 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 27 23:39:10.917502 systemd[1]: Stopped target slices.target - Slice Units. Oct 27 23:39:10.919708 systemd[1]: Stopped target sockets.target - Socket Units. Oct 27 23:39:10.921340 systemd[1]: iscsid.socket: Deactivated successfully. 
Oct 27 23:39:10.921434 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 27 23:39:10.923096 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 27 23:39:10.923184 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 27 23:39:10.924941 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 27 23:39:10.925098 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 27 23:39:10.927060 systemd[1]: ignition-files.service: Deactivated successfully. Oct 27 23:39:10.927170 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 27 23:39:10.929742 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 27 23:39:10.932484 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 27 23:39:10.933836 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 27 23:39:10.933959 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 23:39:10.935954 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 27 23:39:10.936075 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 27 23:39:10.941739 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 27 23:39:10.943176 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 27 23:39:10.951453 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 27 23:39:10.958008 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 27 23:39:10.958619 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Oct 27 23:39:10.962983 ignition[1039]: INFO : Ignition 2.22.0 Oct 27 23:39:10.962983 ignition[1039]: INFO : Stage: umount Oct 27 23:39:10.962983 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 23:39:10.962983 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 23:39:10.962983 ignition[1039]: INFO : umount: umount passed Oct 27 23:39:10.962983 ignition[1039]: INFO : Ignition finished successfully Oct 27 23:39:10.963435 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 27 23:39:10.963579 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 27 23:39:10.969425 systemd[1]: Stopped target network.target - Network. Oct 27 23:39:10.970688 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 27 23:39:10.970767 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 27 23:39:10.972708 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 27 23:39:10.972769 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 27 23:39:10.974766 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 27 23:39:10.974818 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 27 23:39:10.976433 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 27 23:39:10.976475 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 27 23:39:10.978160 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 27 23:39:10.978211 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 27 23:39:10.980444 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 27 23:39:10.982135 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 27 23:39:10.990830 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 27 23:39:10.990959 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Oct 27 23:39:10.994074 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Oct 27 23:39:10.994365 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 27 23:39:10.994403 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 27 23:39:10.998149 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 27 23:39:10.998365 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 27 23:39:10.998477 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 27 23:39:11.002130 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Oct 27 23:39:11.002613 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 27 23:39:11.004984 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 27 23:39:11.005047 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 27 23:39:11.008435 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 27 23:39:11.009839 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 27 23:39:11.009916 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 27 23:39:11.013396 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 27 23:39:11.013445 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 27 23:39:11.016128 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 27 23:39:11.016171 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 27 23:39:11.018442 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 27 23:39:11.025703 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 27 23:39:11.033605 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 27 23:39:11.033755 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 27 23:39:11.035542 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 27 23:39:11.035582 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 27 23:39:11.037646 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 27 23:39:11.037676 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 27 23:39:11.039765 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 27 23:39:11.039823 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 27 23:39:11.042999 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 27 23:39:11.043104 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 27 23:39:11.045966 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 27 23:39:11.046052 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 27 23:39:11.048823 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 27 23:39:11.050272 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 27 23:39:11.050327 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 27 23:39:11.053357 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 27 23:39:11.053402 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 27 23:39:11.056378 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 27 23:39:11.056420 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 27 23:39:11.059563 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 27 23:39:11.059602 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 27 23:39:11.061926 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 27 23:39:11.061975 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 27 23:39:11.065994 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 27 23:39:11.066146 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 27 23:39:11.067841 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 27 23:39:11.069996 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 27 23:39:11.071996 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 27 23:39:11.074459 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 27 23:39:11.092524 systemd[1]: Switching root.
Oct 27 23:39:11.131734 systemd-journald[243]: Journal stopped
Oct 27 23:39:11.893837 systemd-journald[243]: Received SIGTERM from PID 1 (systemd).
Oct 27 23:39:11.893887 kernel: SELinux: policy capability network_peer_controls=1
Oct 27 23:39:11.893901 kernel: SELinux: policy capability open_perms=1
Oct 27 23:39:11.893910 kernel: SELinux: policy capability extended_socket_class=1
Oct 27 23:39:11.893921 kernel: SELinux: policy capability always_check_network=0
Oct 27 23:39:11.893932 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 27 23:39:11.893941 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 27 23:39:11.893950 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 27 23:39:11.893961 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 27 23:39:11.893973 kernel: SELinux: policy capability userspace_initial_context=0
Oct 27 23:39:11.893983 kernel: audit: type=1403 audit(1761608351.298:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 27 23:39:11.893997 systemd[1]: Successfully loaded SELinux policy in 55.859ms.
Oct 27 23:39:11.894082 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.548ms.
Oct 27 23:39:11.894097 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 27 23:39:11.894108 systemd[1]: Detected virtualization kvm.
Oct 27 23:39:11.894117 systemd[1]: Detected architecture arm64.
Oct 27 23:39:11.894127 systemd[1]: Detected first boot.
Oct 27 23:39:11.894139 systemd[1]: Initializing machine ID from VM UUID.
Oct 27 23:39:11.894149 zram_generator::config[1088]: No configuration found.
Oct 27 23:39:11.894160 kernel: NET: Registered PF_VSOCK protocol family
Oct 27 23:39:11.894170 systemd[1]: Populated /etc with preset unit settings.
Oct 27 23:39:11.894180 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Oct 27 23:39:11.894190 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 27 23:39:11.894201 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 27 23:39:11.894211 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 27 23:39:11.894223 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 27 23:39:11.894233 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 27 23:39:11.894243 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 27 23:39:11.894256 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 27 23:39:11.894266 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 27 23:39:11.894276 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 27 23:39:11.894286 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 27 23:39:11.894296 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 27 23:39:11.894307 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 27 23:39:11.894318 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 27 23:39:11.894328 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 27 23:39:11.894338 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 27 23:39:11.894348 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 27 23:39:11.894358 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 27 23:39:11.894368 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Oct 27 23:39:11.894378 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 27 23:39:11.894390 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 27 23:39:11.894400 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 27 23:39:11.894410 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 27 23:39:11.894421 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 27 23:39:11.894431 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 27 23:39:11.894441 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 27 23:39:11.894451 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 27 23:39:11.894460 systemd[1]: Reached target slices.target - Slice Units.
Oct 27 23:39:11.894471 systemd[1]: Reached target swap.target - Swaps.
Oct 27 23:39:11.894481 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 27 23:39:11.894492 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 27 23:39:11.894502 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 27 23:39:11.894515 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 27 23:39:11.894526 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 27 23:39:11.894536 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 27 23:39:11.894546 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 27 23:39:11.894556 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 27 23:39:11.894565 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 27 23:39:11.894576 systemd[1]: Mounting media.mount - External Media Directory...
Oct 27 23:39:11.894587 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 27 23:39:11.894597 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 27 23:39:11.894607 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 27 23:39:11.894617 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 27 23:39:11.894627 systemd[1]: Reached target machines.target - Containers.
Oct 27 23:39:11.894638 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 27 23:39:11.894648 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 27 23:39:11.894658 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 27 23:39:11.894669 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 27 23:39:11.894679 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 27 23:39:11.894689 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 27 23:39:11.894699 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 27 23:39:11.894709 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 27 23:39:11.894719 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 27 23:39:11.894740 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 27 23:39:11.894752 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 27 23:39:11.894764 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 27 23:39:11.894774 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 27 23:39:11.894784 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 27 23:39:11.894793 kernel: loop: module loaded
Oct 27 23:39:11.894804 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 27 23:39:11.894813 kernel: fuse: init (API version 7.41)
Oct 27 23:39:11.894823 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 27 23:39:11.894833 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 27 23:39:11.894843 kernel: ACPI: bus type drm_connector registered
Oct 27 23:39:11.894854 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 27 23:39:11.894866 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 27 23:39:11.894876 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 27 23:39:11.894886 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 27 23:39:11.894896 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 27 23:39:11.894908 systemd[1]: Stopped verity-setup.service.
Oct 27 23:39:11.894917 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 27 23:39:11.894927 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 27 23:39:11.894958 systemd-journald[1160]: Collecting audit messages is disabled.
Oct 27 23:39:11.894981 systemd[1]: Mounted media.mount - External Media Directory.
Oct 27 23:39:11.894992 systemd-journald[1160]: Journal started
Oct 27 23:39:11.895025 systemd-journald[1160]: Runtime Journal (/run/log/journal/53641e465f2e4243a41dbd341d9c79aa) is 6M, max 48.5M, 42.4M free.
Oct 27 23:39:11.675762 systemd[1]: Queued start job for default target multi-user.target.
Oct 27 23:39:11.684932 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 27 23:39:11.685342 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 27 23:39:11.897108 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 27 23:39:11.899122 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 27 23:39:11.899843 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 27 23:39:11.901173 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 27 23:39:11.902539 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 27 23:39:11.904150 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 27 23:39:11.905686 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 27 23:39:11.905874 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 27 23:39:11.907418 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 27 23:39:11.907589 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 27 23:39:11.908986 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 27 23:39:11.909184 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 27 23:39:11.910481 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 27 23:39:11.910644 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 27 23:39:11.912181 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 27 23:39:11.912347 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 27 23:39:11.913705 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 27 23:39:11.913899 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 27 23:39:11.915361 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 27 23:39:11.916801 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 27 23:39:11.918408 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 27 23:39:11.919959 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 27 23:39:11.932745 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 27 23:39:11.935242 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 27 23:39:11.937470 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 27 23:39:11.938938 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 27 23:39:11.938979 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 27 23:39:11.941047 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 27 23:39:11.949839 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 27 23:39:11.951213 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 27 23:39:11.952515 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 27 23:39:11.954607 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 27 23:39:11.955991 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 27 23:39:11.959220 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 27 23:39:11.960593 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 27 23:39:11.963256 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 27 23:39:11.965978 systemd-journald[1160]: Time spent on flushing to /var/log/journal/53641e465f2e4243a41dbd341d9c79aa is 19.137ms for 883 entries.
Oct 27 23:39:11.965978 systemd-journald[1160]: System Journal (/var/log/journal/53641e465f2e4243a41dbd341d9c79aa) is 8M, max 195.6M, 187.6M free.
Oct 27 23:39:11.991232 systemd-journald[1160]: Received client request to flush runtime journal.
Oct 27 23:39:11.991267 kernel: loop0: detected capacity change from 0 to 100632
Oct 27 23:39:11.966103 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 27 23:39:11.973161 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 27 23:39:11.978274 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 27 23:39:11.979954 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 27 23:39:11.981583 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 27 23:39:11.985386 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 27 23:39:12.001493 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 27 23:39:12.001647 systemd-tmpfiles[1206]: ACLs are not supported, ignoring.
Oct 27 23:39:12.001658 systemd-tmpfiles[1206]: ACLs are not supported, ignoring.
Oct 27 23:39:12.003328 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 27 23:39:12.005698 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 27 23:39:12.008404 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 27 23:39:12.009961 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 27 23:39:12.011032 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 27 23:39:12.017630 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 27 23:39:12.030355 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 27 23:39:12.033031 kernel: loop1: detected capacity change from 0 to 207008
Oct 27 23:39:12.053695 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 27 23:39:12.056518 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 27 23:39:12.063038 kernel: loop2: detected capacity change from 0 to 119368
Oct 27 23:39:12.081463 systemd-tmpfiles[1228]: ACLs are not supported, ignoring.
Oct 27 23:39:12.081486 systemd-tmpfiles[1228]: ACLs are not supported, ignoring.
Oct 27 23:39:12.084652 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 27 23:39:12.097063 kernel: loop3: detected capacity change from 0 to 100632
Oct 27 23:39:12.104241 kernel: loop4: detected capacity change from 0 to 207008
Oct 27 23:39:12.110041 kernel: loop5: detected capacity change from 0 to 119368
Oct 27 23:39:12.116185 (sd-merge)[1232]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 27 23:39:12.116619 (sd-merge)[1232]: Merged extensions into '/usr'.
Oct 27 23:39:12.120367 systemd[1]: Reload requested from client PID 1205 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 27 23:39:12.120386 systemd[1]: Reloading...
Oct 27 23:39:12.180044 zram_generator::config[1258]: No configuration found.
Oct 27 23:39:12.246087 ldconfig[1200]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 27 23:39:12.320391 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 27 23:39:12.320640 systemd[1]: Reloading finished in 199 ms.
Oct 27 23:39:12.356741 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 27 23:39:12.362046 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 27 23:39:12.377214 systemd[1]: Starting ensure-sysext.service...
Oct 27 23:39:12.379055 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 27 23:39:12.392270 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 27 23:39:12.392307 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 27 23:39:12.392543 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 27 23:39:12.392747 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 27 23:39:12.393400 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 27 23:39:12.393604 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Oct 27 23:39:12.393650 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Oct 27 23:39:12.393681 systemd[1]: Reload requested from client PID 1294 ('systemctl') (unit ensure-sysext.service)...
Oct 27 23:39:12.393695 systemd[1]: Reloading...
Oct 27 23:39:12.396540 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot.
Oct 27 23:39:12.396554 systemd-tmpfiles[1295]: Skipping /boot
Oct 27 23:39:12.402499 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot.
Oct 27 23:39:12.402516 systemd-tmpfiles[1295]: Skipping /boot
Oct 27 23:39:12.438069 zram_generator::config[1325]: No configuration found.
Oct 27 23:39:12.561254 systemd[1]: Reloading finished in 167 ms.
Oct 27 23:39:12.582643 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 27 23:39:12.588376 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 27 23:39:12.601027 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 27 23:39:12.603493 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 27 23:39:12.621908 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 27 23:39:12.626275 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 27 23:39:12.629783 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 27 23:39:12.634046 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 27 23:39:12.639750 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 27 23:39:12.643278 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 27 23:39:12.647329 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 27 23:39:12.650480 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 27 23:39:12.652150 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 27 23:39:12.652374 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 27 23:39:12.654812 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 27 23:39:12.662557 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 27 23:39:12.668046 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 27 23:39:12.668203 systemd-udevd[1363]: Using default interface naming scheme 'v255'.
Oct 27 23:39:12.668264 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 27 23:39:12.670202 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 27 23:39:12.670344 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 27 23:39:12.672275 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 27 23:39:12.674250 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 27 23:39:12.675059 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 27 23:39:12.677170 augenrules[1387]: No rules
Oct 27 23:39:12.678972 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 27 23:39:12.679338 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 27 23:39:12.687113 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 27 23:39:12.688412 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 27 23:39:12.690652 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 27 23:39:12.705323 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 27 23:39:12.707169 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 27 23:39:12.707296 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 27 23:39:12.708681 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 27 23:39:12.710222 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 27 23:39:12.751690 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 27 23:39:12.754588 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 27 23:39:12.757458 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 27 23:39:12.759886 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 27 23:39:12.760064 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 27 23:39:12.761793 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 27 23:39:12.761939 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 27 23:39:12.764539 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 27 23:39:12.764706 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 27 23:39:12.767185 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 27 23:39:12.786521 systemd[1]: Finished ensure-sysext.service.
Oct 27 23:39:12.795198 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Oct 27 23:39:12.799127 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 27 23:39:12.800273 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 27 23:39:12.803151 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 27 23:39:12.807262 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 27 23:39:12.815311 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 27 23:39:12.822186 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 27 23:39:12.825357 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 27 23:39:12.825405 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 27 23:39:12.831409 systemd-resolved[1361]: Positive Trust Anchors:
Oct 27 23:39:12.831425 systemd-resolved[1361]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 27 23:39:12.831457 systemd-resolved[1361]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 27 23:39:12.833279 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 27 23:39:12.837207 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 27 23:39:12.838389 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 27 23:39:12.838933 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 27 23:39:12.840119 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 27 23:39:12.840141 systemd-resolved[1361]: Defaulting to hostname 'linux'.
Oct 27 23:39:12.842279 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 27 23:39:12.844342 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 27 23:39:12.847687 augenrules[1445]: /sbin/augenrules: No change
Oct 27 23:39:12.847677 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 27 23:39:12.850438 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 27 23:39:12.850622 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 27 23:39:12.854586 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 27 23:39:12.855586 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 23:39:12.857086 augenrules[1469]: No rules Oct 27 23:39:12.858412 systemd[1]: audit-rules.service: Deactivated successfully. Oct 27 23:39:12.858596 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 27 23:39:12.868148 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 27 23:39:12.878079 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 27 23:39:12.880290 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 27 23:39:12.881569 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 27 23:39:12.881626 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 27 23:39:12.901417 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 27 23:39:12.920802 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 23:39:12.979209 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 23:39:12.979688 systemd-networkd[1458]: lo: Link UP Oct 27 23:39:12.979700 systemd-networkd[1458]: lo: Gained carrier Oct 27 23:39:12.980879 systemd-networkd[1458]: Enumeration completed Oct 27 23:39:12.981071 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 27 23:39:12.981475 systemd-networkd[1458]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 27 23:39:12.981486 systemd-networkd[1458]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 27 23:39:12.982389 systemd-networkd[1458]: eth0: Link UP Oct 27 23:39:12.982622 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 27 23:39:12.982798 systemd-networkd[1458]: eth0: Gained carrier Oct 27 23:39:12.982817 systemd-networkd[1458]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 27 23:39:12.984205 systemd[1]: Reached target network.target - Network. Oct 27 23:39:12.985213 systemd[1]: Reached target sysinit.target - System Initialization. Oct 27 23:39:12.986457 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 27 23:39:12.987821 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 27 23:39:12.989367 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 27 23:39:12.990611 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 27 23:39:12.990643 systemd[1]: Reached target paths.target - Path Units. Oct 27 23:39:12.991618 systemd[1]: Reached target time-set.target - System Time Set. Oct 27 23:39:12.992833 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 27 23:39:12.994172 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 27 23:39:12.995400 systemd[1]: Reached target timers.target - Timer Units. Oct 27 23:39:12.996993 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 27 23:39:12.999443 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 27 23:39:13.000077 systemd-networkd[1458]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 27 23:39:13.001165 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection. 
Oct 27 23:39:12.577623 systemd-resolved[1361]: Clock change detected. Flushing caches. Oct 27 23:39:12.584246 systemd-journald[1160]: Time jumped backwards, rotating. Oct 27 23:39:12.577637 systemd-timesyncd[1460]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 27 23:39:12.577672 systemd-timesyncd[1460]: Initial clock synchronization to Mon 2025-10-27 23:39:12.577572 UTC. Oct 27 23:39:12.578123 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 27 23:39:12.580130 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 27 23:39:12.581511 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 27 23:39:12.585868 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 27 23:39:12.587192 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 27 23:39:12.590211 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 27 23:39:12.592551 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 27 23:39:12.594555 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 27 23:39:12.596351 systemd[1]: Reached target sockets.target - Socket Units. Oct 27 23:39:12.597346 systemd[1]: Reached target basic.target - Basic System. Oct 27 23:39:12.598346 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 27 23:39:12.598377 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 27 23:39:12.602549 systemd[1]: Starting containerd.service - containerd container runtime... Oct 27 23:39:12.604615 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 27 23:39:12.607784 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Oct 27 23:39:12.609810 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 27 23:39:12.611752 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 27 23:39:12.612899 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 27 23:39:12.613837 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 27 23:39:12.616351 jq[1514]: false Oct 27 23:39:12.617865 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 27 23:39:12.619809 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 27 23:39:12.621991 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 27 23:39:12.625242 extend-filesystems[1515]: Found /dev/vda6 Oct 27 23:39:12.626967 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 27 23:39:12.628476 extend-filesystems[1515]: Found /dev/vda9 Oct 27 23:39:12.629548 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 27 23:39:12.630002 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 27 23:39:12.630499 systemd[1]: Starting update-engine.service - Update Engine... Oct 27 23:39:12.633834 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 27 23:39:12.635008 extend-filesystems[1515]: Checking size of /dev/vda9 Oct 27 23:39:12.636803 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Oct 27 23:39:12.645540 extend-filesystems[1515]: Resized partition /dev/vda9 Oct 27 23:39:12.647187 extend-filesystems[1541]: resize2fs 1.47.3 (8-Jul-2025) Oct 27 23:39:12.648970 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 27 23:39:12.652668 jq[1533]: true Oct 27 23:39:12.653244 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 27 23:39:12.653431 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 27 23:39:12.653696 systemd[1]: motdgen.service: Deactivated successfully. Oct 27 23:39:12.653878 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 27 23:39:12.654791 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 27 23:39:12.657225 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 27 23:39:12.657399 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 27 23:39:12.679842 jq[1543]: true Oct 27 23:39:12.680677 (ntainerd)[1544]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 27 23:39:12.683308 tar[1542]: linux-arm64/LICENSE Oct 27 23:39:12.683308 tar[1542]: linux-arm64/helm Oct 27 23:39:12.683852 update_engine[1528]: I20251027 23:39:12.681638 1528 main.cc:92] Flatcar Update Engine starting Oct 27 23:39:12.687807 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 27 23:39:12.711974 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 27 23:39:12.712105 extend-filesystems[1541]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 27 23:39:12.712105 extend-filesystems[1541]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 27 23:39:12.712105 extend-filesystems[1541]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Oct 27 23:39:12.722175 extend-filesystems[1515]: Resized filesystem in /dev/vda9 Oct 27 23:39:12.712182 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 27 23:39:12.718713 dbus-daemon[1512]: [system] SELinux support is enabled Oct 27 23:39:12.729061 update_engine[1528]: I20251027 23:39:12.723510 1528 update_check_scheduler.cc:74] Next update check in 10m21s Oct 27 23:39:12.717870 systemd-logind[1526]: Watching system buttons on /dev/input/event0 (Power Button) Oct 27 23:39:12.719217 systemd-logind[1526]: New seat seat0. Oct 27 23:39:12.721118 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 27 23:39:12.725296 systemd[1]: Started systemd-logind.service - User Login Management. Oct 27 23:39:12.728436 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 27 23:39:12.728456 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 27 23:39:12.729635 dbus-daemon[1512]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 27 23:39:12.730226 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 27 23:39:12.730252 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 27 23:39:12.731887 systemd[1]: Started update-engine.service - Update Engine. Oct 27 23:39:12.735438 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 27 23:39:12.756910 bash[1579]: Updated "/home/core/.ssh/authorized_keys" Oct 27 23:39:12.760367 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Oct 27 23:39:12.762650 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 27 23:39:12.807498 locksmithd[1569]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 27 23:39:12.834992 containerd[1544]: time="2025-10-27T23:39:12Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 27 23:39:12.835786 containerd[1544]: time="2025-10-27T23:39:12.835736778Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 27 23:39:12.848733 containerd[1544]: time="2025-10-27T23:39:12.848672338Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.64µs" Oct 27 23:39:12.848733 containerd[1544]: time="2025-10-27T23:39:12.848720818Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 27 23:39:12.848733 containerd[1544]: time="2025-10-27T23:39:12.848741378Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 27 23:39:12.848927 containerd[1544]: time="2025-10-27T23:39:12.848904738Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 27 23:39:12.848952 containerd[1544]: time="2025-10-27T23:39:12.848927058Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 27 23:39:12.848989 containerd[1544]: time="2025-10-27T23:39:12.848957018Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 27 23:39:12.849026 containerd[1544]: time="2025-10-27T23:39:12.849008418Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile 
type=io.containerd.snapshotter.v1 Oct 27 23:39:12.849026 containerd[1544]: time="2025-10-27T23:39:12.849022818Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 27 23:39:12.849268 containerd[1544]: time="2025-10-27T23:39:12.849245538Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 27 23:39:12.849268 containerd[1544]: time="2025-10-27T23:39:12.849266218Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 27 23:39:12.849308 containerd[1544]: time="2025-10-27T23:39:12.849277258Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 27 23:39:12.849308 containerd[1544]: time="2025-10-27T23:39:12.849285618Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 27 23:39:12.849367 containerd[1544]: time="2025-10-27T23:39:12.849350138Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 27 23:39:12.849571 containerd[1544]: time="2025-10-27T23:39:12.849548418Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 27 23:39:12.849596 containerd[1544]: time="2025-10-27T23:39:12.849582098Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 27 23:39:12.849596 containerd[1544]: time="2025-10-27T23:39:12.849594098Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange 
type=io.containerd.event.v1 Oct 27 23:39:12.849639 containerd[1544]: time="2025-10-27T23:39:12.849625058Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 27 23:39:12.851235 containerd[1544]: time="2025-10-27T23:39:12.851149578Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 27 23:39:12.851334 containerd[1544]: time="2025-10-27T23:39:12.851311338Z" level=info msg="metadata content store policy set" policy=shared Oct 27 23:39:12.855255 containerd[1544]: time="2025-10-27T23:39:12.855210698Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 27 23:39:12.855307 containerd[1544]: time="2025-10-27T23:39:12.855293738Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 27 23:39:12.855325 containerd[1544]: time="2025-10-27T23:39:12.855312578Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 27 23:39:12.855341 containerd[1544]: time="2025-10-27T23:39:12.855324818Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 27 23:39:12.855469 containerd[1544]: time="2025-10-27T23:39:12.855439178Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 27 23:39:12.855501 containerd[1544]: time="2025-10-27T23:39:12.855469338Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 27 23:39:12.855501 containerd[1544]: time="2025-10-27T23:39:12.855494698Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 27 23:39:12.855550 containerd[1544]: time="2025-10-27T23:39:12.855508698Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 27 
23:39:12.855550 containerd[1544]: time="2025-10-27T23:39:12.855520898Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 27 23:39:12.855550 containerd[1544]: time="2025-10-27T23:39:12.855536298Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 27 23:39:12.855550 containerd[1544]: time="2025-10-27T23:39:12.855546738Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 27 23:39:12.855607 containerd[1544]: time="2025-10-27T23:39:12.855561738Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 27 23:39:12.855722 containerd[1544]: time="2025-10-27T23:39:12.855698378Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 27 23:39:12.855747 containerd[1544]: time="2025-10-27T23:39:12.855729218Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 27 23:39:12.855799 containerd[1544]: time="2025-10-27T23:39:12.855745898Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 27 23:39:12.855799 containerd[1544]: time="2025-10-27T23:39:12.855757658Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 27 23:39:12.855838 containerd[1544]: time="2025-10-27T23:39:12.855800498Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 27 23:39:12.855838 containerd[1544]: time="2025-10-27T23:39:12.855818298Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 27 23:39:12.855838 containerd[1544]: time="2025-10-27T23:39:12.855830178Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 27 23:39:12.855895 containerd[1544]: 
time="2025-10-27T23:39:12.855842218Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 27 23:39:12.855895 containerd[1544]: time="2025-10-27T23:39:12.855863418Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 27 23:39:12.855895 containerd[1544]: time="2025-10-27T23:39:12.855874858Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 27 23:39:12.855895 containerd[1544]: time="2025-10-27T23:39:12.855885138Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 27 23:39:12.856115 containerd[1544]: time="2025-10-27T23:39:12.856077778Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 27 23:39:12.856115 containerd[1544]: time="2025-10-27T23:39:12.856101018Z" level=info msg="Start snapshots syncer" Oct 27 23:39:12.857821 containerd[1544]: time="2025-10-27T23:39:12.856252618Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 27 23:39:12.857821 containerd[1544]: time="2025-10-27T23:39:12.856947098Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 27 23:39:12.857950 containerd[1544]: time="2025-10-27T23:39:12.857052498Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 27 23:39:12.857950 containerd[1544]: time="2025-10-27T23:39:12.857391178Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 27 23:39:12.857950 containerd[1544]: time="2025-10-27T23:39:12.857854898Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 27 23:39:12.857950 containerd[1544]: time="2025-10-27T23:39:12.857889818Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 27 23:39:12.857950 containerd[1544]: time="2025-10-27T23:39:12.857900938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 27 23:39:12.857950 containerd[1544]: time="2025-10-27T23:39:12.857911338Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 27 23:39:12.857950 containerd[1544]: time="2025-10-27T23:39:12.857923938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 27 23:39:12.857950 containerd[1544]: time="2025-10-27T23:39:12.857935578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 27 23:39:12.857950 containerd[1544]: time="2025-10-27T23:39:12.857947698Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 27 23:39:12.858095 containerd[1544]: time="2025-10-27T23:39:12.857974818Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 27 23:39:12.858095 containerd[1544]: time="2025-10-27T23:39:12.857987778Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 27 23:39:12.858095 containerd[1544]: time="2025-10-27T23:39:12.857998778Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 27 23:39:12.858095 containerd[1544]: time="2025-10-27T23:39:12.858052778Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 27 23:39:12.858095 containerd[1544]: time="2025-10-27T23:39:12.858067578Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 27 23:39:12.858095 containerd[1544]: time="2025-10-27T23:39:12.858076738Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 27 23:39:12.858095 containerd[1544]: time="2025-10-27T23:39:12.858088538Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 27 23:39:12.858200 containerd[1544]: time="2025-10-27T23:39:12.858096858Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 27 23:39:12.858200 containerd[1544]: time="2025-10-27T23:39:12.858188098Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 27 23:39:12.858233 containerd[1544]: time="2025-10-27T23:39:12.858201818Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 27 23:39:12.858303 containerd[1544]: time="2025-10-27T23:39:12.858283698Z" level=info msg="runtime interface created" Oct 27 23:39:12.858303 containerd[1544]: time="2025-10-27T23:39:12.858294578Z" level=info msg="created NRI interface" Oct 27 23:39:12.858346 containerd[1544]: time="2025-10-27T23:39:12.858308978Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 27 23:39:12.858346 containerd[1544]: time="2025-10-27T23:39:12.858322778Z" level=info msg="Connect containerd service" Oct 27 23:39:12.858377 containerd[1544]: time="2025-10-27T23:39:12.858348098Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 27 23:39:12.860666 
containerd[1544]: time="2025-10-27T23:39:12.860626658Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 27 23:39:12.885405 sshd_keygen[1537]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 27 23:39:12.911182 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 27 23:39:12.916022 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 27 23:39:12.930824 containerd[1544]: time="2025-10-27T23:39:12.927811578Z" level=info msg="Start subscribing containerd event" Oct 27 23:39:12.930904 containerd[1544]: time="2025-10-27T23:39:12.930853578Z" level=info msg="Start recovering state" Oct 27 23:39:12.930923 containerd[1544]: time="2025-10-27T23:39:12.928123538Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 27 23:39:12.931130 containerd[1544]: time="2025-10-27T23:39:12.930956618Z" level=info msg="Start event monitor" Oct 27 23:39:12.931130 containerd[1544]: time="2025-10-27T23:39:12.930977298Z" level=info msg="Start cni network conf syncer for default" Oct 27 23:39:12.931130 containerd[1544]: time="2025-10-27T23:39:12.930985858Z" level=info msg="Start streaming server" Oct 27 23:39:12.931130 containerd[1544]: time="2025-10-27T23:39:12.930994778Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 27 23:39:12.931130 containerd[1544]: time="2025-10-27T23:39:12.930999458Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 27 23:39:12.931130 containerd[1544]: time="2025-10-27T23:39:12.931002738Z" level=info msg="runtime interface starting up..." Oct 27 23:39:12.931130 containerd[1544]: time="2025-10-27T23:39:12.931062778Z" level=info msg="starting plugins..." 
Oct 27 23:39:12.931130 containerd[1544]: time="2025-10-27T23:39:12.931080058Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 27 23:39:12.931436 containerd[1544]: time="2025-10-27T23:39:12.931174138Z" level=info msg="containerd successfully booted in 0.096579s" Oct 27 23:39:12.931269 systemd[1]: Started containerd.service - containerd container runtime. Oct 27 23:39:12.935385 systemd[1]: issuegen.service: Deactivated successfully. Oct 27 23:39:12.936109 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 27 23:39:12.939013 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 27 23:39:12.960435 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 27 23:39:12.963395 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 27 23:39:12.965514 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 27 23:39:12.969058 systemd[1]: Reached target getty.target - Login Prompts. Oct 27 23:39:13.013127 tar[1542]: linux-arm64/README.md Oct 27 23:39:13.037109 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 27 23:39:14.074962 systemd-networkd[1458]: eth0: Gained IPv6LL Oct 27 23:39:14.078842 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 27 23:39:14.081023 systemd[1]: Reached target network-online.target - Network is Online. Oct 27 23:39:14.083649 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 27 23:39:14.086345 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 23:39:14.100103 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 27 23:39:14.114354 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 27 23:39:14.114573 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Oct 27 23:39:14.116284 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 27 23:39:14.119532 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 27 23:39:14.649022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 23:39:14.650719 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 27 23:39:14.652721 systemd[1]: Startup finished in 2.074s (kernel) + 4.684s (initrd) + 3.835s (userspace) = 10.593s. Oct 27 23:39:14.653617 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 23:39:14.995338 kubelet[1645]: E1027 23:39:14.995241 1645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 23:39:14.997708 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 23:39:14.997858 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 23:39:14.998133 systemd[1]: kubelet.service: Consumed 748ms CPU time, 257.7M memory peak. Oct 27 23:39:19.385167 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 27 23:39:19.386159 systemd[1]: Started sshd@0-10.0.0.92:22-10.0.0.1:59910.service - OpenSSH per-connection server daemon (10.0.0.1:59910). Oct 27 23:39:19.448163 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 59910 ssh2: RSA SHA256:rJd+TU7sFfM9uyplsLTyQyJ9SbIIl66cWvxItQSjr84 Oct 27 23:39:19.450054 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:39:19.456089 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Oct 27 23:39:19.456949 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 27 23:39:19.463979 systemd-logind[1526]: New session 1 of user core. Oct 27 23:39:19.479102 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 27 23:39:19.481532 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 27 23:39:19.495886 (systemd)[1663]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 27 23:39:19.498275 systemd-logind[1526]: New session c1 of user core. Oct 27 23:39:19.614202 systemd[1663]: Queued start job for default target default.target. Oct 27 23:39:19.627824 systemd[1663]: Created slice app.slice - User Application Slice. Oct 27 23:39:19.627854 systemd[1663]: Reached target paths.target - Paths. Oct 27 23:39:19.627896 systemd[1663]: Reached target timers.target - Timers. Oct 27 23:39:19.629117 systemd[1663]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 27 23:39:19.639346 systemd[1663]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 27 23:39:19.639418 systemd[1663]: Reached target sockets.target - Sockets. Oct 27 23:39:19.639474 systemd[1663]: Reached target basic.target - Basic System. Oct 27 23:39:19.639506 systemd[1663]: Reached target default.target - Main User Target. Oct 27 23:39:19.639531 systemd[1663]: Startup finished in 135ms. Oct 27 23:39:19.639644 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 27 23:39:19.640950 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 27 23:39:19.697261 systemd[1]: Started sshd@1-10.0.0.92:22-10.0.0.1:59914.service - OpenSSH per-connection server daemon (10.0.0.1:59914). 
Oct 27 23:39:19.742845 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 59914 ssh2: RSA SHA256:rJd+TU7sFfM9uyplsLTyQyJ9SbIIl66cWvxItQSjr84 Oct 27 23:39:19.744099 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:39:19.748874 systemd-logind[1526]: New session 2 of user core. Oct 27 23:39:19.758980 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 27 23:39:19.809345 sshd[1677]: Connection closed by 10.0.0.1 port 59914 Oct 27 23:39:19.809824 sshd-session[1674]: pam_unix(sshd:session): session closed for user core Oct 27 23:39:19.822163 systemd[1]: sshd@1-10.0.0.92:22-10.0.0.1:59914.service: Deactivated successfully. Oct 27 23:39:19.823784 systemd[1]: session-2.scope: Deactivated successfully. Oct 27 23:39:19.824836 systemd-logind[1526]: Session 2 logged out. Waiting for processes to exit. Oct 27 23:39:19.827550 systemd[1]: Started sshd@2-10.0.0.92:22-10.0.0.1:59922.service - OpenSSH per-connection server daemon (10.0.0.1:59922). Oct 27 23:39:19.828805 systemd-logind[1526]: Removed session 2. Oct 27 23:39:19.877174 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 59922 ssh2: RSA SHA256:rJd+TU7sFfM9uyplsLTyQyJ9SbIIl66cWvxItQSjr84 Oct 27 23:39:19.878308 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:39:19.882665 systemd-logind[1526]: New session 3 of user core. Oct 27 23:39:19.889943 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 27 23:39:19.938211 sshd[1686]: Connection closed by 10.0.0.1 port 59922 Oct 27 23:39:19.938674 sshd-session[1683]: pam_unix(sshd:session): session closed for user core Oct 27 23:39:19.948749 systemd[1]: sshd@2-10.0.0.92:22-10.0.0.1:59922.service: Deactivated successfully. Oct 27 23:39:19.951146 systemd[1]: session-3.scope: Deactivated successfully. Oct 27 23:39:19.953172 systemd-logind[1526]: Session 3 logged out. Waiting for processes to exit. 
Oct 27 23:39:19.957033 systemd[1]: Started sshd@3-10.0.0.92:22-10.0.0.1:59928.service - OpenSSH per-connection server daemon (10.0.0.1:59928). Oct 27 23:39:19.958840 systemd-logind[1526]: Removed session 3. Oct 27 23:39:20.016609 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 59928 ssh2: RSA SHA256:rJd+TU7sFfM9uyplsLTyQyJ9SbIIl66cWvxItQSjr84 Oct 27 23:39:20.018441 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:39:20.023655 systemd-logind[1526]: New session 4 of user core. Oct 27 23:39:20.034959 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 27 23:39:20.086655 sshd[1695]: Connection closed by 10.0.0.1 port 59928 Oct 27 23:39:20.086952 sshd-session[1692]: pam_unix(sshd:session): session closed for user core Oct 27 23:39:20.097802 systemd[1]: sshd@3-10.0.0.92:22-10.0.0.1:59928.service: Deactivated successfully. Oct 27 23:39:20.100075 systemd[1]: session-4.scope: Deactivated successfully. Oct 27 23:39:20.101001 systemd-logind[1526]: Session 4 logged out. Waiting for processes to exit. Oct 27 23:39:20.102763 systemd[1]: Started sshd@4-10.0.0.92:22-10.0.0.1:59940.service - OpenSSH per-connection server daemon (10.0.0.1:59940). Oct 27 23:39:20.103729 systemd-logind[1526]: Removed session 4. Oct 27 23:39:20.153078 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 59940 ssh2: RSA SHA256:rJd+TU7sFfM9uyplsLTyQyJ9SbIIl66cWvxItQSjr84 Oct 27 23:39:20.154294 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:39:20.159603 systemd-logind[1526]: New session 5 of user core. Oct 27 23:39:20.166972 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 27 23:39:20.226317 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 27 23:39:20.226602 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 23:39:20.242658 sudo[1705]: pam_unix(sudo:session): session closed for user root Oct 27 23:39:20.244199 sshd[1704]: Connection closed by 10.0.0.1 port 59940 Oct 27 23:39:20.244523 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Oct 27 23:39:20.255788 systemd[1]: sshd@4-10.0.0.92:22-10.0.0.1:59940.service: Deactivated successfully. Oct 27 23:39:20.259076 systemd[1]: session-5.scope: Deactivated successfully. Oct 27 23:39:20.259838 systemd-logind[1526]: Session 5 logged out. Waiting for processes to exit. Oct 27 23:39:20.262089 systemd[1]: Started sshd@5-10.0.0.92:22-10.0.0.1:59950.service - OpenSSH per-connection server daemon (10.0.0.1:59950). Oct 27 23:39:20.263075 systemd-logind[1526]: Removed session 5. Oct 27 23:39:20.326288 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 59950 ssh2: RSA SHA256:rJd+TU7sFfM9uyplsLTyQyJ9SbIIl66cWvxItQSjr84 Oct 27 23:39:20.327573 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:39:20.332262 systemd-logind[1526]: New session 6 of user core. Oct 27 23:39:20.343928 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 27 23:39:20.395852 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 27 23:39:20.396404 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 23:39:20.473719 sudo[1716]: pam_unix(sudo:session): session closed for user root Oct 27 23:39:20.479143 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 27 23:39:20.479394 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 23:39:20.487637 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 27 23:39:20.531239 augenrules[1738]: No rules Oct 27 23:39:20.532358 systemd[1]: audit-rules.service: Deactivated successfully. Oct 27 23:39:20.533822 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 27 23:39:20.535154 sudo[1715]: pam_unix(sudo:session): session closed for user root Oct 27 23:39:20.536872 sshd[1714]: Connection closed by 10.0.0.1 port 59950 Oct 27 23:39:20.537590 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Oct 27 23:39:20.550268 systemd[1]: sshd@5-10.0.0.92:22-10.0.0.1:59950.service: Deactivated successfully. Oct 27 23:39:20.552710 systemd[1]: session-6.scope: Deactivated successfully. Oct 27 23:39:20.553625 systemd-logind[1526]: Session 6 logged out. Waiting for processes to exit. Oct 27 23:39:20.556667 systemd[1]: Started sshd@6-10.0.0.92:22-10.0.0.1:59952.service - OpenSSH per-connection server daemon (10.0.0.1:59952). Oct 27 23:39:20.557344 systemd-logind[1526]: Removed session 6. Oct 27 23:39:20.608163 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 59952 ssh2: RSA SHA256:rJd+TU7sFfM9uyplsLTyQyJ9SbIIl66cWvxItQSjr84 Oct 27 23:39:20.609737 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:39:20.614282 systemd-logind[1526]: New session 7 of user core. 
Oct 27 23:39:20.630008 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 27 23:39:20.681581 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 27 23:39:20.681859 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 23:39:20.954344 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 27 23:39:20.978168 (dockerd)[1771]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 27 23:39:21.190497 dockerd[1771]: time="2025-10-27T23:39:21.190062578Z" level=info msg="Starting up" Oct 27 23:39:21.191094 dockerd[1771]: time="2025-10-27T23:39:21.191067978Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 27 23:39:21.202126 dockerd[1771]: time="2025-10-27T23:39:21.202069138Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 27 23:39:21.233604 dockerd[1771]: time="2025-10-27T23:39:21.233343098Z" level=info msg="Loading containers: start." Oct 27 23:39:21.243803 kernel: Initializing XFRM netlink socket Oct 27 23:39:21.442886 systemd-networkd[1458]: docker0: Link UP Oct 27 23:39:21.447277 dockerd[1771]: time="2025-10-27T23:39:21.447228938Z" level=info msg="Loading containers: done." 
Oct 27 23:39:21.459332 dockerd[1771]: time="2025-10-27T23:39:21.459281658Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 27 23:39:21.459432 dockerd[1771]: time="2025-10-27T23:39:21.459358658Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 27 23:39:21.459464 dockerd[1771]: time="2025-10-27T23:39:21.459429498Z" level=info msg="Initializing buildkit" Oct 27 23:39:21.479373 dockerd[1771]: time="2025-10-27T23:39:21.479341178Z" level=info msg="Completed buildkit initialization" Oct 27 23:39:21.484244 dockerd[1771]: time="2025-10-27T23:39:21.484083458Z" level=info msg="Daemon has completed initialization" Oct 27 23:39:21.484327 dockerd[1771]: time="2025-10-27T23:39:21.484170658Z" level=info msg="API listen on /run/docker.sock" Oct 27 23:39:21.484512 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 27 23:39:22.075581 containerd[1544]: time="2025-10-27T23:39:22.075540818Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 27 23:39:22.723447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1224898185.mount: Deactivated successfully. 
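The overlay2 warning above ("Not using native diff ... kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled") is keyed off a kernel build option. As a hedged sketch only, one way to look for that option in a kernel config dump (e.g. the text of /proc/config.gz or /boot/config-$(uname -r)); the sample string here is illustrative, not recovered from this host:

```python
# Sketch: scan kernel config text for the overlayfs redirect_dir option
# that triggers dockerd's "Not using native diff for overlay2" warning.
def overlay_redirect_dir_enabled(config_text: str) -> bool:
    for line in config_text.splitlines():
        if line.strip() == "CONFIG_OVERLAY_FS_REDIRECT_DIR=y":
            return True
    return False

# Illustrative sample, not this machine's actual config:
sample = """
CONFIG_OVERLAY_FS=y
CONFIG_OVERLAY_FS_REDIRECT_DIR=y
"""
print(overlay_redirect_dir_enabled(sample))  # True for this sample
```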
Oct 27 23:39:23.674153 containerd[1544]: time="2025-10-27T23:39:23.674093658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:39:23.675325 containerd[1544]: time="2025-10-27T23:39:23.675255818Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363687" Oct 27 23:39:23.676390 containerd[1544]: time="2025-10-27T23:39:23.676344978Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:39:23.679797 containerd[1544]: time="2025-10-27T23:39:23.679270538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:39:23.681905 containerd[1544]: time="2025-10-27T23:39:23.681863778Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.60628072s" Oct 27 23:39:23.681978 containerd[1544]: time="2025-10-27T23:39:23.681908658Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Oct 27 23:39:23.682501 containerd[1544]: time="2025-10-27T23:39:23.682469418Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 27 23:39:24.795920 containerd[1544]: time="2025-10-27T23:39:24.795871658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:39:24.797186 containerd[1544]: time="2025-10-27T23:39:24.797159938Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531202" Oct 27 23:39:24.798156 containerd[1544]: time="2025-10-27T23:39:24.798127778Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:39:24.801144 containerd[1544]: time="2025-10-27T23:39:24.801109778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:39:24.802111 containerd[1544]: time="2025-10-27T23:39:24.802077098Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.11950172s" Oct 27 23:39:24.802149 containerd[1544]: time="2025-10-27T23:39:24.802110898Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Oct 27 23:39:24.802874 containerd[1544]: time="2025-10-27T23:39:24.802849458Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 27 23:39:25.248231 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 27 23:39:25.249804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 23:39:25.387542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
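The pull records above report both bytes fetched and elapsed time, so an approximate registry throughput can be derived. Using the kube-controller-manager figures from the log (22531202 bytes read, pulled in 1.11950172s; "bytes read" counts fetched layer bytes, so this is only a rough transfer rate):

```python
# Approximate pull throughput from the containerd fields logged above.
bytes_read = 22_531_202   # "bytes read=22531202"
elapsed_s = 1.11950172    # "... in 1.11950172s"
mb_per_s = bytes_read / elapsed_s / 1e6
print(f"{mb_per_s:.1f} MB/s")  # roughly 20.1 MB/s
```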
Oct 27 23:39:25.391505 (kubelet)[2057]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 23:39:25.431290 kubelet[2057]: E1027 23:39:25.429722 2057 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 23:39:25.434794 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 23:39:25.434915 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 23:39:25.435912 systemd[1]: kubelet.service: Consumed 146ms CPU time, 107.2M memory peak. Oct 27 23:39:26.039705 containerd[1544]: time="2025-10-27T23:39:26.039647258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:39:26.041096 containerd[1544]: time="2025-10-27T23:39:26.041065858Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484326" Oct 27 23:39:26.043838 containerd[1544]: time="2025-10-27T23:39:26.043794018Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:39:26.046317 containerd[1544]: time="2025-10-27T23:39:26.046291258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:39:26.047270 containerd[1544]: time="2025-10-27T23:39:26.047241858Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id 
\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.24435808s" Oct 27 23:39:26.047310 containerd[1544]: time="2025-10-27T23:39:26.047277738Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Oct 27 23:39:26.047691 containerd[1544]: time="2025-10-27T23:39:26.047668578Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 27 23:39:27.040940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3319196782.mount: Deactivated successfully. Oct 27 23:39:27.273378 containerd[1544]: time="2025-10-27T23:39:27.273320098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:39:27.273957 containerd[1544]: time="2025-10-27T23:39:27.273920138Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417819" Oct 27 23:39:27.274785 containerd[1544]: time="2025-10-27T23:39:27.274730178Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:39:27.276557 containerd[1544]: time="2025-10-27T23:39:27.276519418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:39:27.277285 containerd[1544]: time="2025-10-27T23:39:27.277245138Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag 
\"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.22954668s" Oct 27 23:39:27.277285 containerd[1544]: time="2025-10-27T23:39:27.277280898Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Oct 27 23:39:27.277798 containerd[1544]: time="2025-10-27T23:39:27.277727058Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 27 23:39:27.809861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4294299581.mount: Deactivated successfully. Oct 27 23:39:28.640695 containerd[1544]: time="2025-10-27T23:39:28.640225258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:39:28.641025 containerd[1544]: time="2025-10-27T23:39:28.640711098Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Oct 27 23:39:28.641852 containerd[1544]: time="2025-10-27T23:39:28.641826298Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:39:28.674784 containerd[1544]: time="2025-10-27T23:39:28.674723338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:39:28.676211 containerd[1544]: time="2025-10-27T23:39:28.676166338Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.3982148s" Oct 27 23:39:28.676211 containerd[1544]: time="2025-10-27T23:39:28.676202978Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Oct 27 23:39:28.676724 containerd[1544]: time="2025-10-27T23:39:28.676702498Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 27 23:39:29.086912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount234893331.mount: Deactivated successfully. Oct 27 23:39:29.091510 containerd[1544]: time="2025-10-27T23:39:29.091465658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 23:39:29.092055 containerd[1544]: time="2025-10-27T23:39:29.092018218Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Oct 27 23:39:29.093044 containerd[1544]: time="2025-10-27T23:39:29.092982098Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 23:39:29.095797 containerd[1544]: time="2025-10-27T23:39:29.095037258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 23:39:29.095888 containerd[1544]: time="2025-10-27T23:39:29.095863098Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 419.13028ms" Oct 27 23:39:29.095917 containerd[1544]: time="2025-10-27T23:39:29.095891058Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Oct 27 23:39:29.096478 containerd[1544]: time="2025-10-27T23:39:29.096439258Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 27 23:39:29.652637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1996251509.mount: Deactivated successfully. Oct 27 23:39:31.398268 containerd[1544]: time="2025-10-27T23:39:31.398187418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:39:31.401832 containerd[1544]: time="2025-10-27T23:39:31.401740098Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Oct 27 23:39:31.401944 containerd[1544]: time="2025-10-27T23:39:31.401842898Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:39:31.404811 containerd[1544]: time="2025-10-27T23:39:31.404717058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:39:31.406071 containerd[1544]: time="2025-10-27T23:39:31.406041898Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size 
\"67941650\" in 2.30948164s" Oct 27 23:39:31.406130 containerd[1544]: time="2025-10-27T23:39:31.406076098Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Oct 27 23:39:35.543898 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 27 23:39:35.545863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 23:39:35.642065 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 27 23:39:35.642168 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 27 23:39:35.643467 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 23:39:35.643722 systemd[1]: kubelet.service: Consumed 64ms CPU time, 70.3M memory peak. Oct 27 23:39:35.646048 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 23:39:35.665700 systemd[1]: Reload requested from client PID 2222 ('systemctl') (unit session-7.scope)... Oct 27 23:39:35.665717 systemd[1]: Reloading... Oct 27 23:39:35.729845 zram_generator::config[2268]: No configuration found. Oct 27 23:39:35.887904 systemd[1]: Reloading finished in 221 ms. Oct 27 23:39:35.931258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 23:39:35.934628 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 23:39:35.935355 systemd[1]: kubelet.service: Deactivated successfully. Oct 27 23:39:35.935575 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 23:39:35.935619 systemd[1]: kubelet.service: Consumed 97ms CPU time, 95.1M memory peak. Oct 27 23:39:35.938434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 23:39:36.055697 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
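The kubelet failure/restart pairs in this log are each about ten seconds apart, consistent with a unit using Restart=on-failure and a roughly 10-second restart delay (an inference from the journal timestamps, not read from the unit file). A sketch computing the intervals from the logged times:

```python
# Infer the kubelet restart delay from journal timestamps in this log:
# each (failure, scheduled-restart) pair is taken verbatim from above.
pairs = [
    ("23:39:14.997858", "23:39:25.248231"),  # failure 1 -> restart counter 1
    ("23:39:25.434915", "23:39:35.543898"),  # failure 2 -> restart counter 2
]

def secs(hms: str) -> float:
    h, m, s = hms.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

delays = [round(secs(end) - secs(start), 3) for start, end in pairs]
print(delays)  # both close to 10 seconds
```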
Oct 27 23:39:36.059033 (kubelet)[2312]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 27 23:39:36.093831 kubelet[2312]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 27 23:39:36.093831 kubelet[2312]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 27 23:39:36.093831 kubelet[2312]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 27 23:39:36.094165 kubelet[2312]: I1027 23:39:36.093880 2312 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 27 23:39:36.724800 kubelet[2312]: I1027 23:39:36.723819 2312 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 27 23:39:36.724800 kubelet[2312]: I1027 23:39:36.723866 2312 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 27 23:39:36.724800 kubelet[2312]: I1027 23:39:36.724141 2312 server.go:954] "Client rotation is on, will bootstrap in background" Oct 27 23:39:36.994798 kubelet[2312]: E1027 23:39:36.994652 2312 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Oct 27 23:39:36.997862 kubelet[2312]: I1027 23:39:36.997678 
2312 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 27 23:39:37.008540 kubelet[2312]: I1027 23:39:37.008514 2312 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 27 23:39:37.011623 kubelet[2312]: I1027 23:39:37.011600 2312 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 27 23:39:37.012351 kubelet[2312]: I1027 23:39:37.012297 2312 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 27 23:39:37.012552 kubelet[2312]: I1027 23:39:37.012344 2312 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 27 23:39:37.012657 kubelet[2312]: I1027 23:39:37.012614 2312 topology_manager.go:138] "Creating topology manager with none policy"
Oct 27 23:39:37.012657 kubelet[2312]: I1027 23:39:37.012626 2312 container_manager_linux.go:304] "Creating device plugin manager"
Oct 27 23:39:37.012896 kubelet[2312]: I1027 23:39:37.012861 2312 state_mem.go:36] "Initialized new in-memory state store"
Oct 27 23:39:37.016014 kubelet[2312]: I1027 23:39:37.015956 2312 kubelet.go:446] "Attempting to sync node with API server"
Oct 27 23:39:37.016014 kubelet[2312]: I1027 23:39:37.015981 2312 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 27 23:39:37.016014 kubelet[2312]: I1027 23:39:37.016011 2312 kubelet.go:352] "Adding apiserver pod source"
Oct 27 23:39:37.016014 kubelet[2312]: I1027 23:39:37.016022 2312 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 27 23:39:37.019790 kubelet[2312]: W1027 23:39:37.019715 2312 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Oct 27 23:39:37.019879 kubelet[2312]: I1027 23:39:37.019809 2312 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 27 23:39:37.019879 kubelet[2312]: E1027 23:39:37.019805 2312 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Oct 27 23:39:37.020178 kubelet[2312]: W1027 23:39:37.020138 2312 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Oct 27 23:39:37.020283 kubelet[2312]: E1027 23:39:37.020264 2312 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Oct 27 23:39:37.020454 kubelet[2312]: I1027 23:39:37.020439 2312 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 27 23:39:37.020569 kubelet[2312]: W1027 23:39:37.020554 2312 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 27 23:39:37.021753 kubelet[2312]: I1027 23:39:37.021663 2312 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Oct 27 23:39:37.021753 kubelet[2312]: I1027 23:39:37.021717 2312 server.go:1287] "Started kubelet"
Oct 27 23:39:37.021902 kubelet[2312]: I1027 23:39:37.021870 2312 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Oct 27 23:39:37.024220 kubelet[2312]: I1027 23:39:37.024144 2312 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 27 23:39:37.024398 kubelet[2312]: I1027 23:39:37.024369 2312 server.go:479] "Adding debug handlers to kubelet server"
Oct 27 23:39:37.024584 kubelet[2312]: I1027 23:39:37.024557 2312 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 27 23:39:37.026083 kubelet[2312]: E1027 23:39:37.025830 2312 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.92:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.92:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18727d7c49bae332 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-27 23:39:37.021686578 +0000 UTC m=+0.959923001,LastTimestamp:2025-10-27 23:39:37.021686578 +0000 UTC m=+0.959923001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 27 23:39:37.029132 kubelet[2312]: I1027 23:39:37.029111 2312 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 27 23:39:37.029534 kubelet[2312]: I1027 23:39:37.029516 2312 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 27 23:39:37.030071 kubelet[2312]: E1027 23:39:37.030043 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 27 23:39:37.030296 kubelet[2312]: I1027 23:39:37.030190 2312 volume_manager.go:297] "Starting Kubelet Volume Manager"
Oct 27 23:39:37.030667 kubelet[2312]: I1027 23:39:37.030647 2312 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Oct 27 23:39:37.030760 kubelet[2312]: E1027 23:39:37.030701 2312 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 27 23:39:37.030913 kubelet[2312]: I1027 23:39:37.030904 2312 reconciler.go:26] "Reconciler: start to sync state"
Oct 27 23:39:37.031269 kubelet[2312]: E1027 23:39:37.031200 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="200ms"
Oct 27 23:39:37.031356 kubelet[2312]: I1027 23:39:37.031334 2312 factory.go:221] Registration of the systemd container factory successfully
Oct 27 23:39:37.031456 kubelet[2312]: I1027 23:39:37.031433 2312 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 27 23:39:37.031636 kubelet[2312]: W1027 23:39:37.031549 2312 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Oct 27 23:39:37.031797 kubelet[2312]: E1027 23:39:37.031685 2312 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Oct 27 23:39:37.032513 kubelet[2312]: I1027 23:39:37.032490 2312 factory.go:221] Registration of the containerd container factory successfully
Oct 27 23:39:37.046534 kubelet[2312]: I1027 23:39:37.046501 2312 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 27 23:39:37.046534 kubelet[2312]: I1027 23:39:37.046528 2312 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 27 23:39:37.046662 kubelet[2312]: I1027 23:39:37.046546 2312 state_mem.go:36] "Initialized new in-memory state store"
Oct 27 23:39:37.048837 kubelet[2312]: I1027 23:39:37.048815 2312 policy_none.go:49] "None policy: Start"
Oct 27 23:39:37.048837 kubelet[2312]: I1027 23:39:37.048838 2312 memory_manager.go:186] "Starting memorymanager" policy="None"
Oct 27 23:39:37.048915 kubelet[2312]: I1027 23:39:37.048848 2312 state_mem.go:35] "Initializing new in-memory state store"
Oct 27 23:39:37.051737 kubelet[2312]: I1027 23:39:37.051700 2312 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 27 23:39:37.054244 kubelet[2312]: I1027 23:39:37.054205 2312 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 27 23:39:37.054244 kubelet[2312]: I1027 23:39:37.054243 2312 status_manager.go:227] "Starting to sync pod status with apiserver"
Oct 27 23:39:37.054348 kubelet[2312]: I1027 23:39:37.054262 2312 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 27 23:39:37.054348 kubelet[2312]: I1027 23:39:37.054279 2312 kubelet.go:2382] "Starting kubelet main sync loop"
Oct 27 23:39:37.054348 kubelet[2312]: E1027 23:39:37.054322 2312 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 27 23:39:37.055129 kubelet[2312]: W1027 23:39:37.055087 2312 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Oct 27 23:39:37.055531 kubelet[2312]: E1027 23:39:37.055410 2312 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Oct 27 23:39:37.055994 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 27 23:39:37.071653 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 27 23:39:37.076718 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 27 23:39:37.090902 kubelet[2312]: I1027 23:39:37.090867 2312 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 27 23:39:37.091133 kubelet[2312]: I1027 23:39:37.091115 2312 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 27 23:39:37.091193 kubelet[2312]: I1027 23:39:37.091130 2312 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 27 23:39:37.091374 kubelet[2312]: I1027 23:39:37.091355 2312 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 27 23:39:37.092868 kubelet[2312]: E1027 23:39:37.092843 2312 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 27 23:39:37.092929 kubelet[2312]: E1027 23:39:37.092894 2312 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Oct 27 23:39:37.161946 systemd[1]: Created slice kubepods-burstable-pod0bd79a7bdf6d693d48729d2b5d11e801.slice - libcontainer container kubepods-burstable-pod0bd79a7bdf6d693d48729d2b5d11e801.slice.
Oct 27 23:39:37.177617 kubelet[2312]: E1027 23:39:37.177581 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 27 23:39:37.180664 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice.
Oct 27 23:39:37.182401 kubelet[2312]: E1027 23:39:37.182361 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 27 23:39:37.190300 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice.
Oct 27 23:39:37.191994 kubelet[2312]: E1027 23:39:37.191964 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 27 23:39:37.192737 kubelet[2312]: I1027 23:39:37.192715 2312 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 27 23:39:37.193154 kubelet[2312]: E1027 23:39:37.193111 2312 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost"
Oct 27 23:39:37.231626 kubelet[2312]: I1027 23:39:37.231581 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0bd79a7bdf6d693d48729d2b5d11e801-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0bd79a7bdf6d693d48729d2b5d11e801\") " pod="kube-system/kube-apiserver-localhost"
Oct 27 23:39:37.231626 kubelet[2312]: I1027 23:39:37.231622 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 23:39:37.231705 kubelet[2312]: I1027 23:39:37.231644 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 23:39:37.231705 kubelet[2312]: E1027 23:39:37.231621 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="400ms"
Oct 27 23:39:37.231705 kubelet[2312]: I1027 23:39:37.231661 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost"
Oct 27 23:39:37.231764 kubelet[2312]: I1027 23:39:37.231703 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0bd79a7bdf6d693d48729d2b5d11e801-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0bd79a7bdf6d693d48729d2b5d11e801\") " pod="kube-system/kube-apiserver-localhost"
Oct 27 23:39:37.231764 kubelet[2312]: I1027 23:39:37.231723 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0bd79a7bdf6d693d48729d2b5d11e801-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0bd79a7bdf6d693d48729d2b5d11e801\") " pod="kube-system/kube-apiserver-localhost"
Oct 27 23:39:37.231764 kubelet[2312]: I1027 23:39:37.231741 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 23:39:37.231764 kubelet[2312]: I1027 23:39:37.231757 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 23:39:37.231868 kubelet[2312]: I1027 23:39:37.231808 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 23:39:37.394392 kubelet[2312]: I1027 23:39:37.394361 2312 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 27 23:39:37.394758 kubelet[2312]: E1027 23:39:37.394710 2312 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost"
Oct 27 23:39:37.479066 containerd[1544]: time="2025-10-27T23:39:37.478997498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0bd79a7bdf6d693d48729d2b5d11e801,Namespace:kube-system,Attempt:0,}"
Oct 27 23:39:37.483694 containerd[1544]: time="2025-10-27T23:39:37.483653898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}"
Oct 27 23:39:37.493522 containerd[1544]: time="2025-10-27T23:39:37.493479018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}"
Oct 27 23:39:37.502970 containerd[1544]: time="2025-10-27T23:39:37.502921578Z" level=info msg="connecting to shim 50b8bdd210d632e3f1108bf1d0ecee113f492d4b02ce195148657613d248c1cb" address="unix:///run/containerd/s/5df5ad60e1bb1a8c68e5e28d6d4928cefe70f8f0dd55df955a903902243fbe66" namespace=k8s.io protocol=ttrpc version=3
Oct 27 23:39:37.523346 containerd[1544]: time="2025-10-27T23:39:37.523298818Z" level=info msg="connecting to shim 8d22d9bcab72fd8b9982995c9546d97d551ef4c6d2386054ad528d792fac65e2" address="unix:///run/containerd/s/0a4f2c3537ce85fcc1858232a26eda11d9936ca56eae533378e5e799e07ef6e3" namespace=k8s.io protocol=ttrpc version=3
Oct 27 23:39:37.544043 systemd[1]: Started cri-containerd-50b8bdd210d632e3f1108bf1d0ecee113f492d4b02ce195148657613d248c1cb.scope - libcontainer container 50b8bdd210d632e3f1108bf1d0ecee113f492d4b02ce195148657613d248c1cb.
Oct 27 23:39:37.546544 containerd[1544]: time="2025-10-27T23:39:37.546494538Z" level=info msg="connecting to shim 0dc2338f6e29c89c1d0d7e526b7b2ca54952cf1d6ff62ed401d096d4f52fa1eb" address="unix:///run/containerd/s/96ca59171cc8e0edf2903786e981cdc7fe49b3c3f744bedf6dea12fc828f843c" namespace=k8s.io protocol=ttrpc version=3
Oct 27 23:39:37.550035 systemd[1]: Started cri-containerd-8d22d9bcab72fd8b9982995c9546d97d551ef4c6d2386054ad528d792fac65e2.scope - libcontainer container 8d22d9bcab72fd8b9982995c9546d97d551ef4c6d2386054ad528d792fac65e2.
Oct 27 23:39:37.580500 systemd[1]: Started cri-containerd-0dc2338f6e29c89c1d0d7e526b7b2ca54952cf1d6ff62ed401d096d4f52fa1eb.scope - libcontainer container 0dc2338f6e29c89c1d0d7e526b7b2ca54952cf1d6ff62ed401d096d4f52fa1eb.
Oct 27 23:39:37.595418 containerd[1544]: time="2025-10-27T23:39:37.595358698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0bd79a7bdf6d693d48729d2b5d11e801,Namespace:kube-system,Attempt:0,} returns sandbox id \"50b8bdd210d632e3f1108bf1d0ecee113f492d4b02ce195148657613d248c1cb\""
Oct 27 23:39:37.599434 containerd[1544]: time="2025-10-27T23:39:37.598881418Z" level=info msg="CreateContainer within sandbox \"50b8bdd210d632e3f1108bf1d0ecee113f492d4b02ce195148657613d248c1cb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 27 23:39:37.606730 containerd[1544]: time="2025-10-27T23:39:37.606690098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d22d9bcab72fd8b9982995c9546d97d551ef4c6d2386054ad528d792fac65e2\""
Oct 27 23:39:37.608909 containerd[1544]: time="2025-10-27T23:39:37.608878698Z" level=info msg="CreateContainer within sandbox \"8d22d9bcab72fd8b9982995c9546d97d551ef4c6d2386054ad528d792fac65e2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 27 23:39:37.610828 containerd[1544]: time="2025-10-27T23:39:37.610787458Z" level=info msg="Container 7c7bc2ce6141a9268affad07654c735178b138fc322203bcfc961eb7a9c7cd56: CDI devices from CRI Config.CDIDevices: []"
Oct 27 23:39:37.621560 containerd[1544]: time="2025-10-27T23:39:37.621511458Z" level=info msg="CreateContainer within sandbox \"50b8bdd210d632e3f1108bf1d0ecee113f492d4b02ce195148657613d248c1cb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7c7bc2ce6141a9268affad07654c735178b138fc322203bcfc961eb7a9c7cd56\""
Oct 27 23:39:37.622338 containerd[1544]: time="2025-10-27T23:39:37.622310498Z" level=info msg="Container ab9c94d7e2776023a030513f6c29485589b394bb5e69498094b2493c990c12ca: CDI devices from CRI Config.CDIDevices: []"
Oct 27 23:39:37.623269 containerd[1544]: time="2025-10-27T23:39:37.623242698Z" level=info msg="StartContainer for \"7c7bc2ce6141a9268affad07654c735178b138fc322203bcfc961eb7a9c7cd56\""
Oct 27 23:39:37.625587 containerd[1544]: time="2025-10-27T23:39:37.625556458Z" level=info msg="connecting to shim 7c7bc2ce6141a9268affad07654c735178b138fc322203bcfc961eb7a9c7cd56" address="unix:///run/containerd/s/5df5ad60e1bb1a8c68e5e28d6d4928cefe70f8f0dd55df955a903902243fbe66" protocol=ttrpc version=3
Oct 27 23:39:37.632790 containerd[1544]: time="2025-10-27T23:39:37.632724298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0dc2338f6e29c89c1d0d7e526b7b2ca54952cf1d6ff62ed401d096d4f52fa1eb\""
Oct 27 23:39:37.633073 kubelet[2312]: E1027 23:39:37.633038 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="800ms"
Oct 27 23:39:37.633509 containerd[1544]: time="2025-10-27T23:39:37.633463898Z" level=info msg="CreateContainer within sandbox \"8d22d9bcab72fd8b9982995c9546d97d551ef4c6d2386054ad528d792fac65e2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ab9c94d7e2776023a030513f6c29485589b394bb5e69498094b2493c990c12ca\""
Oct 27 23:39:37.634809 containerd[1544]: time="2025-10-27T23:39:37.634786858Z" level=info msg="StartContainer for \"ab9c94d7e2776023a030513f6c29485589b394bb5e69498094b2493c990c12ca\""
Oct 27 23:39:37.636420 containerd[1544]: time="2025-10-27T23:39:37.636390378Z" level=info msg="CreateContainer within sandbox \"0dc2338f6e29c89c1d0d7e526b7b2ca54952cf1d6ff62ed401d096d4f52fa1eb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 27 23:39:37.637915 containerd[1544]: time="2025-10-27T23:39:37.637885938Z" level=info msg="connecting to shim ab9c94d7e2776023a030513f6c29485589b394bb5e69498094b2493c990c12ca" address="unix:///run/containerd/s/0a4f2c3537ce85fcc1858232a26eda11d9936ca56eae533378e5e799e07ef6e3" protocol=ttrpc version=3
Oct 27 23:39:37.648489 containerd[1544]: time="2025-10-27T23:39:37.647652778Z" level=info msg="Container f2430fe4febda9f906e69d0a046a9d1acff54d7c51e34803ca0a34a6e2157a18: CDI devices from CRI Config.CDIDevices: []"
Oct 27 23:39:37.649435 systemd[1]: Started cri-containerd-7c7bc2ce6141a9268affad07654c735178b138fc322203bcfc961eb7a9c7cd56.scope - libcontainer container 7c7bc2ce6141a9268affad07654c735178b138fc322203bcfc961eb7a9c7cd56.
Oct 27 23:39:37.663751 containerd[1544]: time="2025-10-27T23:39:37.662746458Z" level=info msg="CreateContainer within sandbox \"0dc2338f6e29c89c1d0d7e526b7b2ca54952cf1d6ff62ed401d096d4f52fa1eb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f2430fe4febda9f906e69d0a046a9d1acff54d7c51e34803ca0a34a6e2157a18\""
Oct 27 23:39:37.664179 containerd[1544]: time="2025-10-27T23:39:37.664155898Z" level=info msg="StartContainer for \"f2430fe4febda9f906e69d0a046a9d1acff54d7c51e34803ca0a34a6e2157a18\""
Oct 27 23:39:37.667932 containerd[1544]: time="2025-10-27T23:39:37.667893658Z" level=info msg="connecting to shim f2430fe4febda9f906e69d0a046a9d1acff54d7c51e34803ca0a34a6e2157a18" address="unix:///run/containerd/s/96ca59171cc8e0edf2903786e981cdc7fe49b3c3f744bedf6dea12fc828f843c" protocol=ttrpc version=3
Oct 27 23:39:37.673945 systemd[1]: Started cri-containerd-ab9c94d7e2776023a030513f6c29485589b394bb5e69498094b2493c990c12ca.scope - libcontainer container ab9c94d7e2776023a030513f6c29485589b394bb5e69498094b2493c990c12ca.
Oct 27 23:39:37.693959 systemd[1]: Started cri-containerd-f2430fe4febda9f906e69d0a046a9d1acff54d7c51e34803ca0a34a6e2157a18.scope - libcontainer container f2430fe4febda9f906e69d0a046a9d1acff54d7c51e34803ca0a34a6e2157a18.
Oct 27 23:39:37.700790 containerd[1544]: time="2025-10-27T23:39:37.699452858Z" level=info msg="StartContainer for \"7c7bc2ce6141a9268affad07654c735178b138fc322203bcfc961eb7a9c7cd56\" returns successfully"
Oct 27 23:39:37.737630 containerd[1544]: time="2025-10-27T23:39:37.736998098Z" level=info msg="StartContainer for \"ab9c94d7e2776023a030513f6c29485589b394bb5e69498094b2493c990c12ca\" returns successfully"
Oct 27 23:39:37.754796 containerd[1544]: time="2025-10-27T23:39:37.754742458Z" level=info msg="StartContainer for \"f2430fe4febda9f906e69d0a046a9d1acff54d7c51e34803ca0a34a6e2157a18\" returns successfully"
Oct 27 23:39:37.797033 kubelet[2312]: I1027 23:39:37.796996 2312 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 27 23:39:38.065322 kubelet[2312]: E1027 23:39:38.065287 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 27 23:39:38.067499 kubelet[2312]: E1027 23:39:38.067385 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 27 23:39:38.071644 kubelet[2312]: E1027 23:39:38.071488 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 27 23:39:38.974082 kubelet[2312]: E1027 23:39:38.974042 2312 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Oct 27 23:39:39.018004 kubelet[2312]: I1027 23:39:39.017967 2312 apiserver.go:52] "Watching apiserver"
Oct 27 23:39:39.073069 kubelet[2312]: E1027 23:39:39.073025 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 27 23:39:39.073731 kubelet[2312]: E1027 23:39:39.073501 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 27 23:39:39.132007 kubelet[2312]: I1027 23:39:39.131948 2312 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Oct 27 23:39:39.143335 kubelet[2312]: I1027 23:39:39.143283 2312 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Oct 27 23:39:39.232341 kubelet[2312]: I1027 23:39:39.231632 2312 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 27 23:39:39.239284 kubelet[2312]: E1027 23:39:39.239245 2312 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Oct 27 23:39:39.239418 kubelet[2312]: I1027 23:39:39.239406 2312 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 27 23:39:39.241798 kubelet[2312]: E1027 23:39:39.241341 2312 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Oct 27 23:39:39.241798 kubelet[2312]: I1027 23:39:39.241364 2312 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 27 23:39:39.243379 kubelet[2312]: E1027 23:39:39.243333 2312 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Oct 27 23:39:40.980227 kubelet[2312]: I1027 23:39:40.979983 2312 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 27 23:39:41.293779 kubelet[2312]: I1027 23:39:41.293725 2312 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 27 23:39:41.498748 systemd[1]: Reload requested from client PID 2588 ('systemctl') (unit session-7.scope)...
Oct 27 23:39:41.498765 systemd[1]: Reloading...
Oct 27 23:39:41.558815 zram_generator::config[2631]: No configuration found.
Oct 27 23:39:41.731190 systemd[1]: Reloading finished in 232 ms.
Oct 27 23:39:41.761555 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 27 23:39:41.774850 systemd[1]: kubelet.service: Deactivated successfully.
Oct 27 23:39:41.775126 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 27 23:39:41.775184 systemd[1]: kubelet.service: Consumed 1.103s CPU time, 129.9M memory peak.
Oct 27 23:39:41.777011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 27 23:39:41.941745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 27 23:39:41.945693 (kubelet)[2673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 27 23:39:41.995381 kubelet[2673]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 27 23:39:41.995381 kubelet[2673]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 27 23:39:41.995381 kubelet[2673]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 27 23:39:41.995728 kubelet[2673]: I1027 23:39:41.995367 2673 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 27 23:39:42.001693 kubelet[2673]: I1027 23:39:42.001437 2673 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Oct 27 23:39:42.001693 kubelet[2673]: I1027 23:39:42.001467 2673 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 27 23:39:42.002001 kubelet[2673]: I1027 23:39:42.001985 2673 server.go:954] "Client rotation is on, will bootstrap in background"
Oct 27 23:39:42.003351 kubelet[2673]: I1027 23:39:42.003331 2673 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 27 23:39:42.005656 kubelet[2673]: I1027 23:39:42.005634 2673 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 27 23:39:42.009093 kubelet[2673]: I1027 23:39:42.009070 2673 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 27 23:39:42.011646 kubelet[2673]: I1027 23:39:42.011625 2673 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 27 23:39:42.011877 kubelet[2673]: I1027 23:39:42.011851 2673 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 27 23:39:42.012061 kubelet[2673]: I1027 23:39:42.011879 2673 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 27 23:39:42.012136 kubelet[2673]: I1027 23:39:42.012071 2673 topology_manager.go:138] "Creating topology manager with none policy"
Oct 27 23:39:42.012136 kubelet[2673]: I1027 23:39:42.012081 2673 container_manager_linux.go:304] "Creating device plugin manager"
Oct 27 23:39:42.012136 kubelet[2673]: I1027 23:39:42.012122 2673 state_mem.go:36] "Initialized new in-memory state store"
Oct 27 23:39:42.012257 kubelet[2673]: I1027 23:39:42.012246 2673 kubelet.go:446] "Attempting to sync node with API server"
Oct 27 23:39:42.012287 kubelet[2673]: I1027 23:39:42.012260 2673 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 27 23:39:42.012287 kubelet[2673]: I1027 23:39:42.012280 2673 kubelet.go:352] "Adding apiserver pod source"
Oct 27 23:39:42.012325 kubelet[2673]: I1027 23:39:42.012289 2673 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 27 23:39:42.019839 kubelet[2673]: I1027 23:39:42.019811 2673 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 27 23:39:42.021054 kubelet[2673]: I1027 23:39:42.021027 2673 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 27 23:39:42.021532 kubelet[2673]: I1027 23:39:42.021512 2673 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Oct 27 23:39:42.021564 kubelet[2673]: I1027 23:39:42.021549 2673 server.go:1287] "Started kubelet"
Oct 27 23:39:42.023798 kubelet[2673]: I1027 23:39:42.022640 2673 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 27 23:39:42.023798 kubelet[2673]: I1027 23:39:42.022905 2673 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 27 23:39:42.023798 kubelet[2673]: I1027 23:39:42.022960 2673 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Oct 27 23:39:42.023798 kubelet[2673]: I1027 23:39:42.023293 2673 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 27 23:39:42.023798 kubelet[2673]: I1027 23:39:42.023763 2673 server.go:479] "Adding debug handlers to kubelet server"
Oct 27 23:39:42.024827 kubelet[2673]: I1027 23:39:42.024804 2673 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 27 23:39:42.027796 kubelet[2673]: E1027 23:39:42.026472 2673 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 27 23:39:42.027796 kubelet[2673]: I1027 23:39:42.026519 2673 volume_manager.go:297] "Starting Kubelet Volume Manager"
Oct 27 23:39:42.027796 kubelet[2673]: I1027 23:39:42.026667 2673 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Oct 27 23:39:42.027796 kubelet[2673]: I1027 23:39:42.026833 2673 reconciler.go:26] "Reconciler: start to sync state"
Oct 27 23:39:42.028571 kubelet[2673]: I1027 23:39:42.028539 2673 factory.go:221] Registration of the systemd container factory successfully
Oct 27 23:39:42.028672 kubelet[2673]: I1027 23:39:42.028650 2673 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 27 23:39:42.034798 kubelet[2673]: E1027 23:39:42.033071 2673 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 27 23:39:42.035055 kubelet[2673]: I1027 23:39:42.035031 2673 factory.go:221] Registration of the containerd container factory successfully
Oct 27 23:39:42.047204 kubelet[2673]: I1027 23:39:42.047163 2673 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 27 23:39:42.048557 kubelet[2673]: I1027 23:39:42.048534 2673 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Oct 27 23:39:42.048878 kubelet[2673]: I1027 23:39:42.048865 2673 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 27 23:39:42.049011 kubelet[2673]: I1027 23:39:42.048997 2673 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 27 23:39:42.049827 kubelet[2673]: I1027 23:39:42.049660 2673 kubelet.go:2382] "Starting kubelet main sync loop" Oct 27 23:39:42.049827 kubelet[2673]: E1027 23:39:42.049715 2673 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 27 23:39:42.076006 kubelet[2673]: I1027 23:39:42.075968 2673 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 27 23:39:42.076131 kubelet[2673]: I1027 23:39:42.076116 2673 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 27 23:39:42.076192 kubelet[2673]: I1027 23:39:42.076181 2673 state_mem.go:36] "Initialized new in-memory state store" Oct 27 23:39:42.076467 kubelet[2673]: I1027 23:39:42.076447 2673 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 27 23:39:42.076551 kubelet[2673]: I1027 23:39:42.076526 2673 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 27 23:39:42.076609 kubelet[2673]: I1027 23:39:42.076601 2673 policy_none.go:49] "None policy: Start" Oct 27 23:39:42.076658 kubelet[2673]: I1027 23:39:42.076649 2673 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 27 23:39:42.076709 kubelet[2673]: I1027 23:39:42.076702 2673 state_mem.go:35] "Initializing new in-memory state store" Oct 27 23:39:42.076908 kubelet[2673]: I1027 23:39:42.076891 2673 state_mem.go:75] "Updated machine memory state" Oct 27 23:39:42.081328 kubelet[2673]: I1027 23:39:42.081303 2673 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 27 23:39:42.081842 kubelet[2673]: I1027 
23:39:42.081826 2673 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 27 23:39:42.082052 kubelet[2673]: I1027 23:39:42.082019 2673 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 27 23:39:42.082585 kubelet[2673]: I1027 23:39:42.082560 2673 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 27 23:39:42.084154 kubelet[2673]: E1027 23:39:42.084134 2673 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 27 23:39:42.151080 kubelet[2673]: I1027 23:39:42.151026 2673 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 23:39:42.151080 kubelet[2673]: I1027 23:39:42.151027 2673 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 23:39:42.151238 kubelet[2673]: I1027 23:39:42.151143 2673 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 27 23:39:42.157303 kubelet[2673]: E1027 23:39:42.157263 2673 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 27 23:39:42.157959 kubelet[2673]: E1027 23:39:42.157938 2673 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 27 23:39:42.184570 kubelet[2673]: I1027 23:39:42.184548 2673 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 23:39:42.224259 kubelet[2673]: I1027 23:39:42.224227 2673 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 27 23:39:42.224397 kubelet[2673]: I1027 23:39:42.224314 2673 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 27 23:39:42.328037 
kubelet[2673]: I1027 23:39:42.327950 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0bd79a7bdf6d693d48729d2b5d11e801-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0bd79a7bdf6d693d48729d2b5d11e801\") " pod="kube-system/kube-apiserver-localhost" Oct 27 23:39:42.328184 kubelet[2673]: I1027 23:39:42.328051 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0bd79a7bdf6d693d48729d2b5d11e801-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0bd79a7bdf6d693d48729d2b5d11e801\") " pod="kube-system/kube-apiserver-localhost" Oct 27 23:39:42.328184 kubelet[2673]: I1027 23:39:42.328077 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 23:39:42.328184 kubelet[2673]: I1027 23:39:42.328095 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 23:39:42.328184 kubelet[2673]: I1027 23:39:42.328110 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0bd79a7bdf6d693d48729d2b5d11e801-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0bd79a7bdf6d693d48729d2b5d11e801\") " pod="kube-system/kube-apiserver-localhost" Oct 27 23:39:42.328184 kubelet[2673]: I1027 
23:39:42.328128 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 23:39:42.328655 kubelet[2673]: I1027 23:39:42.328630 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 23:39:42.328701 kubelet[2673]: I1027 23:39:42.328674 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 23:39:42.328701 kubelet[2673]: I1027 23:39:42.328693 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 27 23:39:43.013462 kubelet[2673]: I1027 23:39:43.013181 2673 apiserver.go:52] "Watching apiserver" Oct 27 23:39:43.027062 kubelet[2673]: I1027 23:39:43.027008 2673 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 27 23:39:43.065954 kubelet[2673]: I1027 23:39:43.065917 2673 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-localhost" Oct 27 23:39:43.073415 kubelet[2673]: E1027 23:39:43.073351 2673 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 27 23:39:43.100152 kubelet[2673]: I1027 23:39:43.099994 2673 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.099976418 podStartE2EDuration="3.099976418s" podCreationTimestamp="2025-10-27 23:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:39:43.093020258 +0000 UTC m=+1.143643241" watchObservedRunningTime="2025-10-27 23:39:43.099976418 +0000 UTC m=+1.150599441" Oct 27 23:39:43.100152 kubelet[2673]: I1027 23:39:43.100125 2673 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.100121058 podStartE2EDuration="1.100121058s" podCreationTimestamp="2025-10-27 23:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:39:43.100103618 +0000 UTC m=+1.150726601" watchObservedRunningTime="2025-10-27 23:39:43.100121058 +0000 UTC m=+1.150744081" Oct 27 23:39:43.123879 kubelet[2673]: I1027 23:39:43.123806 2673 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.123789178 podStartE2EDuration="2.123789178s" podCreationTimestamp="2025-10-27 23:39:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:39:43.110977738 +0000 UTC m=+1.161600761" watchObservedRunningTime="2025-10-27 23:39:43.123789178 +0000 UTC m=+1.174412201" Oct 27 23:39:46.852691 kubelet[2673]: I1027 23:39:46.852660 
2673 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 27 23:39:46.853469 containerd[1544]: time="2025-10-27T23:39:46.853436801Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 27 23:39:46.853811 kubelet[2673]: I1027 23:39:46.853793 2673 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 27 23:39:47.763013 kubelet[2673]: I1027 23:39:47.762969 2673 status_manager.go:890] "Failed to get status for pod" podUID="f354e225-cdda-4dfc-a627-9b463c5c3347" pod="kube-system/kube-proxy-lzdwm" err="pods \"kube-proxy-lzdwm\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Oct 27 23:39:47.768720 systemd[1]: Created slice kubepods-besteffort-podf354e225_cdda_4dfc_a627_9b463c5c3347.slice - libcontainer container kubepods-besteffort-podf354e225_cdda_4dfc_a627_9b463c5c3347.slice. 
Oct 27 23:39:47.862000 kubelet[2673]: I1027 23:39:47.861907 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f354e225-cdda-4dfc-a627-9b463c5c3347-xtables-lock\") pod \"kube-proxy-lzdwm\" (UID: \"f354e225-cdda-4dfc-a627-9b463c5c3347\") " pod="kube-system/kube-proxy-lzdwm" Oct 27 23:39:47.862000 kubelet[2673]: I1027 23:39:47.861994 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f354e225-cdda-4dfc-a627-9b463c5c3347-lib-modules\") pod \"kube-proxy-lzdwm\" (UID: \"f354e225-cdda-4dfc-a627-9b463c5c3347\") " pod="kube-system/kube-proxy-lzdwm" Oct 27 23:39:47.862377 kubelet[2673]: I1027 23:39:47.862036 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwhj2\" (UniqueName: \"kubernetes.io/projected/f354e225-cdda-4dfc-a627-9b463c5c3347-kube-api-access-rwhj2\") pod \"kube-proxy-lzdwm\" (UID: \"f354e225-cdda-4dfc-a627-9b463c5c3347\") " pod="kube-system/kube-proxy-lzdwm" Oct 27 23:39:47.862377 kubelet[2673]: I1027 23:39:47.862058 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f354e225-cdda-4dfc-a627-9b463c5c3347-kube-proxy\") pod \"kube-proxy-lzdwm\" (UID: \"f354e225-cdda-4dfc-a627-9b463c5c3347\") " pod="kube-system/kube-proxy-lzdwm" Oct 27 23:39:48.013307 systemd[1]: Created slice kubepods-besteffort-pod6d0e01d5_21c5_4af0_98d2_aa3076f1fc23.slice - libcontainer container kubepods-besteffort-pod6d0e01d5_21c5_4af0_98d2_aa3076f1fc23.slice. 
Oct 27 23:39:48.062794 kubelet[2673]: I1027 23:39:48.062724 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6d0e01d5-21c5-4af0-98d2-aa3076f1fc23-var-lib-calico\") pod \"tigera-operator-7dcd859c48-bxrb4\" (UID: \"6d0e01d5-21c5-4af0-98d2-aa3076f1fc23\") " pod="tigera-operator/tigera-operator-7dcd859c48-bxrb4" Oct 27 23:39:48.062908 kubelet[2673]: I1027 23:39:48.062848 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gmkn\" (UniqueName: \"kubernetes.io/projected/6d0e01d5-21c5-4af0-98d2-aa3076f1fc23-kube-api-access-6gmkn\") pod \"tigera-operator-7dcd859c48-bxrb4\" (UID: \"6d0e01d5-21c5-4af0-98d2-aa3076f1fc23\") " pod="tigera-operator/tigera-operator-7dcd859c48-bxrb4" Oct 27 23:39:48.082339 containerd[1544]: time="2025-10-27T23:39:48.082295260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lzdwm,Uid:f354e225-cdda-4dfc-a627-9b463c5c3347,Namespace:kube-system,Attempt:0,}" Oct 27 23:39:48.099802 containerd[1544]: time="2025-10-27T23:39:48.099545557Z" level=info msg="connecting to shim 963304259fe7099c4c65c84d3fa929bf55f39d6b6c940007897945b2030524e8" address="unix:///run/containerd/s/f8ad4db17fe082394dae5a0821070350dae2371f37597f3b04e5c1a9a126e5cd" namespace=k8s.io protocol=ttrpc version=3 Oct 27 23:39:48.128016 systemd[1]: Started cri-containerd-963304259fe7099c4c65c84d3fa929bf55f39d6b6c940007897945b2030524e8.scope - libcontainer container 963304259fe7099c4c65c84d3fa929bf55f39d6b6c940007897945b2030524e8. 
Oct 27 23:39:48.151298 containerd[1544]: time="2025-10-27T23:39:48.151251489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lzdwm,Uid:f354e225-cdda-4dfc-a627-9b463c5c3347,Namespace:kube-system,Attempt:0,} returns sandbox id \"963304259fe7099c4c65c84d3fa929bf55f39d6b6c940007897945b2030524e8\"" Oct 27 23:39:48.155828 containerd[1544]: time="2025-10-27T23:39:48.155791784Z" level=info msg="CreateContainer within sandbox \"963304259fe7099c4c65c84d3fa929bf55f39d6b6c940007897945b2030524e8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 27 23:39:48.168789 containerd[1544]: time="2025-10-27T23:39:48.167974985Z" level=info msg="Container 3c009a71efb99f2ad5f34d99bbb4f7b00e5f29427c94d9511a6c91cc869d3c67: CDI devices from CRI Config.CDIDevices: []" Oct 27 23:39:48.179281 containerd[1544]: time="2025-10-27T23:39:48.179224822Z" level=info msg="CreateContainer within sandbox \"963304259fe7099c4c65c84d3fa929bf55f39d6b6c940007897945b2030524e8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3c009a71efb99f2ad5f34d99bbb4f7b00e5f29427c94d9511a6c91cc869d3c67\"" Oct 27 23:39:48.180199 containerd[1544]: time="2025-10-27T23:39:48.180092145Z" level=info msg="StartContainer for \"3c009a71efb99f2ad5f34d99bbb4f7b00e5f29427c94d9511a6c91cc869d3c67\"" Oct 27 23:39:48.183167 containerd[1544]: time="2025-10-27T23:39:48.183043315Z" level=info msg="connecting to shim 3c009a71efb99f2ad5f34d99bbb4f7b00e5f29427c94d9511a6c91cc869d3c67" address="unix:///run/containerd/s/f8ad4db17fe082394dae5a0821070350dae2371f37597f3b04e5c1a9a126e5cd" protocol=ttrpc version=3 Oct 27 23:39:48.213994 systemd[1]: Started cri-containerd-3c009a71efb99f2ad5f34d99bbb4f7b00e5f29427c94d9511a6c91cc869d3c67.scope - libcontainer container 3c009a71efb99f2ad5f34d99bbb4f7b00e5f29427c94d9511a6c91cc869d3c67. 
Oct 27 23:39:48.248846 containerd[1544]: time="2025-10-27T23:39:48.248806814Z" level=info msg="StartContainer for \"3c009a71efb99f2ad5f34d99bbb4f7b00e5f29427c94d9511a6c91cc869d3c67\" returns successfully" Oct 27 23:39:48.317464 containerd[1544]: time="2025-10-27T23:39:48.317420642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-bxrb4,Uid:6d0e01d5-21c5-4af0-98d2-aa3076f1fc23,Namespace:tigera-operator,Attempt:0,}" Oct 27 23:39:48.335417 containerd[1544]: time="2025-10-27T23:39:48.335369862Z" level=info msg="connecting to shim e0f46f8491136fa24b56fcbb4106c88da30c978c14692de0313e9dec1a8471d3" address="unix:///run/containerd/s/ae468969437cccbd0578c9be0e4076f7ef02922f264ba2b1af07b9ec21b31336" namespace=k8s.io protocol=ttrpc version=3 Oct 27 23:39:48.360955 systemd[1]: Started cri-containerd-e0f46f8491136fa24b56fcbb4106c88da30c978c14692de0313e9dec1a8471d3.scope - libcontainer container e0f46f8491136fa24b56fcbb4106c88da30c978c14692de0313e9dec1a8471d3. Oct 27 23:39:48.399800 containerd[1544]: time="2025-10-27T23:39:48.399718396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-bxrb4,Uid:6d0e01d5-21c5-4af0-98d2-aa3076f1fc23,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e0f46f8491136fa24b56fcbb4106c88da30c978c14692de0313e9dec1a8471d3\"" Oct 27 23:39:48.402060 containerd[1544]: time="2025-10-27T23:39:48.402024443Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 27 23:39:49.982260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3704545181.mount: Deactivated successfully. 
Oct 27 23:39:50.710943 containerd[1544]: time="2025-10-27T23:39:50.710883392Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:39:50.712476 containerd[1544]: time="2025-10-27T23:39:50.712420636Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Oct 27 23:39:50.714274 containerd[1544]: time="2025-10-27T23:39:50.714199321Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:39:50.716904 containerd[1544]: time="2025-10-27T23:39:50.716607128Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:39:50.717320 containerd[1544]: time="2025-10-27T23:39:50.717298130Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.315232486s" Oct 27 23:39:50.717422 containerd[1544]: time="2025-10-27T23:39:50.717406611Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Oct 27 23:39:50.721265 containerd[1544]: time="2025-10-27T23:39:50.721231062Z" level=info msg="CreateContainer within sandbox \"e0f46f8491136fa24b56fcbb4106c88da30c978c14692de0313e9dec1a8471d3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 27 23:39:50.729383 containerd[1544]: time="2025-10-27T23:39:50.729316326Z" level=info msg="Container 
25737f71e95409a0f66d8d00a508e2b23554e4df6d732be5b4e59c5be1698ba4: CDI devices from CRI Config.CDIDevices: []" Oct 27 23:39:50.736876 containerd[1544]: time="2025-10-27T23:39:50.736826468Z" level=info msg="CreateContainer within sandbox \"e0f46f8491136fa24b56fcbb4106c88da30c978c14692de0313e9dec1a8471d3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"25737f71e95409a0f66d8d00a508e2b23554e4df6d732be5b4e59c5be1698ba4\"" Oct 27 23:39:50.737805 containerd[1544]: time="2025-10-27T23:39:50.737545750Z" level=info msg="StartContainer for \"25737f71e95409a0f66d8d00a508e2b23554e4df6d732be5b4e59c5be1698ba4\"" Oct 27 23:39:50.738879 containerd[1544]: time="2025-10-27T23:39:50.738847273Z" level=info msg="connecting to shim 25737f71e95409a0f66d8d00a508e2b23554e4df6d732be5b4e59c5be1698ba4" address="unix:///run/containerd/s/ae468969437cccbd0578c9be0e4076f7ef02922f264ba2b1af07b9ec21b31336" protocol=ttrpc version=3 Oct 27 23:39:50.762996 systemd[1]: Started cri-containerd-25737f71e95409a0f66d8d00a508e2b23554e4df6d732be5b4e59c5be1698ba4.scope - libcontainer container 25737f71e95409a0f66d8d00a508e2b23554e4df6d732be5b4e59c5be1698ba4. 
Oct 27 23:39:50.792431 containerd[1544]: time="2025-10-27T23:39:50.792379710Z" level=info msg="StartContainer for \"25737f71e95409a0f66d8d00a508e2b23554e4df6d732be5b4e59c5be1698ba4\" returns successfully" Oct 27 23:39:51.090208 kubelet[2673]: I1027 23:39:51.090037 2673 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lzdwm" podStartSLOduration=4.090019245 podStartE2EDuration="4.090019245s" podCreationTimestamp="2025-10-27 23:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:39:49.086932704 +0000 UTC m=+7.137555727" watchObservedRunningTime="2025-10-27 23:39:51.090019245 +0000 UTC m=+9.140642268" Oct 27 23:39:54.050568 kubelet[2673]: I1027 23:39:54.050491 2673 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-bxrb4" podStartSLOduration=4.732502499 podStartE2EDuration="7.050474674s" podCreationTimestamp="2025-10-27 23:39:47 +0000 UTC" firstStartedPulling="2025-10-27 23:39:48.401253521 +0000 UTC m=+6.451876504" lastFinishedPulling="2025-10-27 23:39:50.719225656 +0000 UTC m=+8.769848679" observedRunningTime="2025-10-27 23:39:51.090542486 +0000 UTC m=+9.141165509" watchObservedRunningTime="2025-10-27 23:39:54.050474674 +0000 UTC m=+12.101097697" Oct 27 23:39:56.132581 sudo[1751]: pam_unix(sudo:session): session closed for user root Oct 27 23:39:56.136682 sshd[1750]: Connection closed by 10.0.0.1 port 59952 Oct 27 23:39:56.137288 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Oct 27 23:39:56.145136 systemd[1]: sshd@6-10.0.0.92:22-10.0.0.1:59952.service: Deactivated successfully. Oct 27 23:39:56.147111 systemd[1]: session-7.scope: Deactivated successfully. Oct 27 23:39:56.149154 systemd[1]: session-7.scope: Consumed 5.855s CPU time, 218.4M memory peak. Oct 27 23:39:56.150461 systemd-logind[1526]: Session 7 logged out. 
Waiting for processes to exit. Oct 27 23:39:56.153048 systemd-logind[1526]: Removed session 7. Oct 27 23:39:58.060783 update_engine[1528]: I20251027 23:39:58.059793 1528 update_attempter.cc:509] Updating boot flags... Oct 27 23:40:02.884560 systemd[1]: Created slice kubepods-besteffort-pod4151b276_1a67_4516_bf4e_ce98df824bff.slice - libcontainer container kubepods-besteffort-pod4151b276_1a67_4516_bf4e_ce98df824bff.slice. Oct 27 23:40:03.042225 systemd[1]: Created slice kubepods-besteffort-pod934c9f31_b1b5_4c9c_b863_a4e9363863a8.slice - libcontainer container kubepods-besteffort-pod934c9f31_b1b5_4c9c_b863_a4e9363863a8.slice. Oct 27 23:40:03.062988 kubelet[2673]: I1027 23:40:03.062943 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4151b276-1a67-4516-bf4e-ce98df824bff-typha-certs\") pod \"calico-typha-bf6cc84fd-h4j4l\" (UID: \"4151b276-1a67-4516-bf4e-ce98df824bff\") " pod="calico-system/calico-typha-bf6cc84fd-h4j4l" Oct 27 23:40:03.063466 kubelet[2673]: I1027 23:40:03.063383 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4151b276-1a67-4516-bf4e-ce98df824bff-tigera-ca-bundle\") pod \"calico-typha-bf6cc84fd-h4j4l\" (UID: \"4151b276-1a67-4516-bf4e-ce98df824bff\") " pod="calico-system/calico-typha-bf6cc84fd-h4j4l" Oct 27 23:40:03.063466 kubelet[2673]: I1027 23:40:03.063414 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk2sv\" (UniqueName: \"kubernetes.io/projected/4151b276-1a67-4516-bf4e-ce98df824bff-kube-api-access-fk2sv\") pod \"calico-typha-bf6cc84fd-h4j4l\" (UID: \"4151b276-1a67-4516-bf4e-ce98df824bff\") " pod="calico-system/calico-typha-bf6cc84fd-h4j4l" Oct 27 23:40:03.164860 kubelet[2673]: I1027 23:40:03.164029 2673 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/934c9f31-b1b5-4c9c-b863-a4e9363863a8-cni-log-dir\") pod \"calico-node-qmt97\" (UID: \"934c9f31-b1b5-4c9c-b863-a4e9363863a8\") " pod="calico-system/calico-node-qmt97" Oct 27 23:40:03.164860 kubelet[2673]: I1027 23:40:03.164070 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/934c9f31-b1b5-4c9c-b863-a4e9363863a8-cni-net-dir\") pod \"calico-node-qmt97\" (UID: \"934c9f31-b1b5-4c9c-b863-a4e9363863a8\") " pod="calico-system/calico-node-qmt97" Oct 27 23:40:03.164860 kubelet[2673]: I1027 23:40:03.164090 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/934c9f31-b1b5-4c9c-b863-a4e9363863a8-flexvol-driver-host\") pod \"calico-node-qmt97\" (UID: \"934c9f31-b1b5-4c9c-b863-a4e9363863a8\") " pod="calico-system/calico-node-qmt97" Oct 27 23:40:03.164860 kubelet[2673]: I1027 23:40:03.164110 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/934c9f31-b1b5-4c9c-b863-a4e9363863a8-xtables-lock\") pod \"calico-node-qmt97\" (UID: \"934c9f31-b1b5-4c9c-b863-a4e9363863a8\") " pod="calico-system/calico-node-qmt97" Oct 27 23:40:03.164860 kubelet[2673]: I1027 23:40:03.164139 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7lgf\" (UniqueName: \"kubernetes.io/projected/934c9f31-b1b5-4c9c-b863-a4e9363863a8-kube-api-access-k7lgf\") pod \"calico-node-qmt97\" (UID: \"934c9f31-b1b5-4c9c-b863-a4e9363863a8\") " pod="calico-system/calico-node-qmt97" Oct 27 23:40:03.165052 kubelet[2673]: I1027 23:40:03.164159 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/934c9f31-b1b5-4c9c-b863-a4e9363863a8-cni-bin-dir\") pod \"calico-node-qmt97\" (UID: \"934c9f31-b1b5-4c9c-b863-a4e9363863a8\") " pod="calico-system/calico-node-qmt97" Oct 27 23:40:03.165052 kubelet[2673]: I1027 23:40:03.164176 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/934c9f31-b1b5-4c9c-b863-a4e9363863a8-node-certs\") pod \"calico-node-qmt97\" (UID: \"934c9f31-b1b5-4c9c-b863-a4e9363863a8\") " pod="calico-system/calico-node-qmt97" Oct 27 23:40:03.165052 kubelet[2673]: I1027 23:40:03.164211 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/934c9f31-b1b5-4c9c-b863-a4e9363863a8-lib-modules\") pod \"calico-node-qmt97\" (UID: \"934c9f31-b1b5-4c9c-b863-a4e9363863a8\") " pod="calico-system/calico-node-qmt97" Oct 27 23:40:03.165052 kubelet[2673]: I1027 23:40:03.164225 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/934c9f31-b1b5-4c9c-b863-a4e9363863a8-var-run-calico\") pod \"calico-node-qmt97\" (UID: \"934c9f31-b1b5-4c9c-b863-a4e9363863a8\") " pod="calico-system/calico-node-qmt97" Oct 27 23:40:03.165052 kubelet[2673]: I1027 23:40:03.164240 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/934c9f31-b1b5-4c9c-b863-a4e9363863a8-policysync\") pod \"calico-node-qmt97\" (UID: \"934c9f31-b1b5-4c9c-b863-a4e9363863a8\") " pod="calico-system/calico-node-qmt97" Oct 27 23:40:03.165151 kubelet[2673]: I1027 23:40:03.164257 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/934c9f31-b1b5-4c9c-b863-a4e9363863a8-tigera-ca-bundle\") pod \"calico-node-qmt97\" (UID: \"934c9f31-b1b5-4c9c-b863-a4e9363863a8\") " pod="calico-system/calico-node-qmt97" Oct 27 23:40:03.165151 kubelet[2673]: I1027 23:40:03.164272 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/934c9f31-b1b5-4c9c-b863-a4e9363863a8-var-lib-calico\") pod \"calico-node-qmt97\" (UID: \"934c9f31-b1b5-4c9c-b863-a4e9363863a8\") " pod="calico-system/calico-node-qmt97" Oct 27 23:40:03.190554 kubelet[2673]: E1027 23:40:03.190430 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:03.190958 containerd[1544]: time="2025-10-27T23:40:03.190923767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bf6cc84fd-h4j4l,Uid:4151b276-1a67-4516-bf4e-ce98df824bff,Namespace:calico-system,Attempt:0,}" Oct 27 23:40:03.240675 containerd[1544]: time="2025-10-27T23:40:03.240086549Z" level=info msg="connecting to shim c1fcc421f254b59baaafa12af148385492d994e4d51db8e8f8dae236604cccd2" address="unix:///run/containerd/s/7e1b7e0944a456995254b0ea3c9a3e122f0f32d572bd21ccdcfba72436dc4eb2" namespace=k8s.io protocol=ttrpc version=3 Oct 27 23:40:03.241746 kubelet[2673]: E1027 23:40:03.241615 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-shccv" podUID="c6363763-fd3b-49e5-96bd-c0e1b8f05225" Oct 27 23:40:03.267658 kubelet[2673]: E1027 23:40:03.267521 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.267658 kubelet[2673]: W1027 
23:40:03.267546 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.269595 kubelet[2673]: E1027 23:40:03.269557 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 23:40:03.270211 kubelet[2673]: E1027 23:40:03.270190 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.270251 kubelet[2673]: W1027 23:40:03.270210 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.270251 kubelet[2673]: E1027 23:40:03.270230 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 23:40:03.281005 kubelet[2673]: E1027 23:40:03.280964 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.281005 kubelet[2673]: W1027 23:40:03.280987 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.281616 kubelet[2673]: E1027 23:40:03.281006 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 23:40:03.302016 systemd[1]: Started cri-containerd-c1fcc421f254b59baaafa12af148385492d994e4d51db8e8f8dae236604cccd2.scope - libcontainer container c1fcc421f254b59baaafa12af148385492d994e4d51db8e8f8dae236604cccd2. Oct 27 23:40:03.345628 kubelet[2673]: E1027 23:40:03.345377 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:03.345973 containerd[1544]: time="2025-10-27T23:40:03.345929483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qmt97,Uid:934c9f31-b1b5-4c9c-b863-a4e9363863a8,Namespace:calico-system,Attempt:0,}" Oct 27 23:40:03.366237 kubelet[2673]: E1027 23:40:03.366152 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.366237 kubelet[2673]: W1027 23:40:03.366175 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.366237 kubelet[2673]: E1027 23:40:03.366194 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 23:40:03.366593 kubelet[2673]: I1027 23:40:03.366444 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c6363763-fd3b-49e5-96bd-c0e1b8f05225-socket-dir\") pod \"csi-node-driver-shccv\" (UID: \"c6363763-fd3b-49e5-96bd-c0e1b8f05225\") " pod="calico-system/csi-node-driver-shccv" Oct 27 23:40:03.366702 kubelet[2673]: E1027 23:40:03.366689 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.366785 kubelet[2673]: W1027 23:40:03.366758 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.366868 kubelet[2673]: E1027 23:40:03.366856 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 23:40:03.367088 kubelet[2673]: E1027 23:40:03.367074 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.367158 kubelet[2673]: W1027 23:40:03.367146 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.367340 kubelet[2673]: E1027 23:40:03.367204 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 23:40:03.367340 kubelet[2673]: I1027 23:40:03.367228 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6363763-fd3b-49e5-96bd-c0e1b8f05225-kubelet-dir\") pod \"csi-node-driver-shccv\" (UID: \"c6363763-fd3b-49e5-96bd-c0e1b8f05225\") " pod="calico-system/csi-node-driver-shccv" Oct 27 23:40:03.367622 kubelet[2673]: E1027 23:40:03.367580 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.367622 kubelet[2673]: W1027 23:40:03.367595 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.367622 kubelet[2673]: E1027 23:40:03.367605 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 23:40:03.368146 kubelet[2673]: E1027 23:40:03.368033 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.368146 kubelet[2673]: W1027 23:40:03.368047 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.368146 kubelet[2673]: E1027 23:40:03.368062 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 23:40:03.368607 kubelet[2673]: E1027 23:40:03.368476 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.368607 kubelet[2673]: W1027 23:40:03.368489 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.368607 kubelet[2673]: E1027 23:40:03.368508 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 23:40:03.368767 kubelet[2673]: E1027 23:40:03.368755 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.368873 kubelet[2673]: W1027 23:40:03.368859 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.368924 kubelet[2673]: E1027 23:40:03.368913 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 23:40:03.368994 kubelet[2673]: I1027 23:40:03.368982 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjcbs\" (UniqueName: \"kubernetes.io/projected/c6363763-fd3b-49e5-96bd-c0e1b8f05225-kube-api-access-kjcbs\") pod \"csi-node-driver-shccv\" (UID: \"c6363763-fd3b-49e5-96bd-c0e1b8f05225\") " pod="calico-system/csi-node-driver-shccv" Oct 27 23:40:03.369179 containerd[1544]: time="2025-10-27T23:40:03.369141632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bf6cc84fd-h4j4l,Uid:4151b276-1a67-4516-bf4e-ce98df824bff,Namespace:calico-system,Attempt:0,} returns sandbox id \"c1fcc421f254b59baaafa12af148385492d994e4d51db8e8f8dae236604cccd2\"" Oct 27 23:40:03.369299 kubelet[2673]: E1027 23:40:03.369279 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.369381 kubelet[2673]: W1027 23:40:03.369369 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.369802 kubelet[2673]: E1027 23:40:03.369684 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 23:40:03.369802 kubelet[2673]: I1027 23:40:03.369709 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c6363763-fd3b-49e5-96bd-c0e1b8f05225-varrun\") pod \"csi-node-driver-shccv\" (UID: \"c6363763-fd3b-49e5-96bd-c0e1b8f05225\") " pod="calico-system/csi-node-driver-shccv" Oct 27 23:40:03.370039 kubelet[2673]: E1027 23:40:03.370025 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.370213 kubelet[2673]: W1027 23:40:03.370099 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.370213 kubelet[2673]: E1027 23:40:03.370125 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 23:40:03.370213 kubelet[2673]: I1027 23:40:03.370147 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c6363763-fd3b-49e5-96bd-c0e1b8f05225-registration-dir\") pod \"csi-node-driver-shccv\" (UID: \"c6363763-fd3b-49e5-96bd-c0e1b8f05225\") " pod="calico-system/csi-node-driver-shccv" Oct 27 23:40:03.370213 kubelet[2673]: E1027 23:40:03.370165 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:03.370496 kubelet[2673]: E1027 23:40:03.370481 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.370565 kubelet[2673]: W1027 23:40:03.370554 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.370705 kubelet[2673]: E1027 23:40:03.370682 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 23:40:03.371725 kubelet[2673]: E1027 23:40:03.371707 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.372102 kubelet[2673]: W1027 23:40:03.372084 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.372334 kubelet[2673]: E1027 23:40:03.372265 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 23:40:03.372949 kubelet[2673]: E1027 23:40:03.372872 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.372949 kubelet[2673]: W1027 23:40:03.372888 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.373851 kubelet[2673]: E1027 23:40:03.373821 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 23:40:03.374146 kubelet[2673]: E1027 23:40:03.374099 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.374146 kubelet[2673]: W1027 23:40:03.374115 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.374146 kubelet[2673]: E1027 23:40:03.374130 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 23:40:03.374559 kubelet[2673]: E1027 23:40:03.374518 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.374559 kubelet[2673]: W1027 23:40:03.374533 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.374559 kubelet[2673]: E1027 23:40:03.374544 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 23:40:03.374960 kubelet[2673]: E1027 23:40:03.374917 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.374960 kubelet[2673]: W1027 23:40:03.374932 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.374960 kubelet[2673]: E1027 23:40:03.374943 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 23:40:03.375270 containerd[1544]: time="2025-10-27T23:40:03.375236680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 27 23:40:03.399880 containerd[1544]: time="2025-10-27T23:40:03.399836791Z" level=info msg="connecting to shim a3c885f013d47754722f77e9493530479e054db25c4739e6e654397bb38900f6" address="unix:///run/containerd/s/443fba344cb2dc4f0bdfd1c20022a54b9028f12a6918c787736d00b70b213c02" namespace=k8s.io protocol=ttrpc version=3 Oct 27 23:40:03.420969 systemd[1]: Started cri-containerd-a3c885f013d47754722f77e9493530479e054db25c4739e6e654397bb38900f6.scope - libcontainer container a3c885f013d47754722f77e9493530479e054db25c4739e6e654397bb38900f6. Oct 27 23:40:03.443279 containerd[1544]: time="2025-10-27T23:40:03.443234246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qmt97,Uid:934c9f31-b1b5-4c9c-b863-a4e9363863a8,Namespace:calico-system,Attempt:0,} returns sandbox id \"a3c885f013d47754722f77e9493530479e054db25c4739e6e654397bb38900f6\"" Oct 27 23:40:03.443949 kubelet[2673]: E1027 23:40:03.443928 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:03.471423 kubelet[2673]: E1027 23:40:03.471395 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.471423 kubelet[2673]: W1027 23:40:03.471417 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.471578 kubelet[2673]: E1027 23:40:03.471437 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 23:40:03.471692 kubelet[2673]: E1027 23:40:03.471680 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.471723 kubelet[2673]: W1027 23:40:03.471692 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.471723 kubelet[2673]: E1027 23:40:03.471709 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 23:40:03.471984 kubelet[2673]: E1027 23:40:03.471970 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.471984 kubelet[2673]: W1027 23:40:03.471983 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.472049 kubelet[2673]: E1027 23:40:03.471998 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 23:40:03.472177 kubelet[2673]: E1027 23:40:03.472167 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.472177 kubelet[2673]: W1027 23:40:03.472177 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.472220 kubelet[2673]: E1027 23:40:03.472190 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 23:40:03.472381 kubelet[2673]: E1027 23:40:03.472369 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.472381 kubelet[2673]: W1027 23:40:03.472380 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.472450 kubelet[2673]: E1027 23:40:03.472392 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 23:40:03.472576 kubelet[2673]: E1027 23:40:03.472563 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.472610 kubelet[2673]: W1027 23:40:03.472577 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.472610 kubelet[2673]: E1027 23:40:03.472591 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 23:40:03.472733 kubelet[2673]: E1027 23:40:03.472721 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.472733 kubelet[2673]: W1027 23:40:03.472731 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.472868 kubelet[2673]: E1027 23:40:03.472764 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 23:40:03.472890 kubelet[2673]: E1027 23:40:03.472868 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.472890 kubelet[2673]: W1027 23:40:03.472875 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.472991 kubelet[2673]: E1027 23:40:03.472940 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 23:40:03.473070 kubelet[2673]: E1027 23:40:03.473057 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.473070 kubelet[2673]: W1027 23:40:03.473068 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.473122 kubelet[2673]: E1027 23:40:03.473098 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 23:40:03.473223 kubelet[2673]: E1027 23:40:03.473209 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.473223 kubelet[2673]: W1027 23:40:03.473221 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.473396 kubelet[2673]: E1027 23:40:03.473235 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 23:40:03.473501 kubelet[2673]: E1027 23:40:03.473485 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.473558 kubelet[2673]: W1027 23:40:03.473546 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.473626 kubelet[2673]: E1027 23:40:03.473614 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 23:40:03.473846 kubelet[2673]: E1027 23:40:03.473828 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.473846 kubelet[2673]: W1027 23:40:03.473843 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.474000 kubelet[2673]: E1027 23:40:03.473880 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 23:40:03.474088 kubelet[2673]: E1027 23:40:03.474075 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.474240 kubelet[2673]: W1027 23:40:03.474126 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.474240 kubelet[2673]: E1027 23:40:03.474148 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 23:40:03.474373 kubelet[2673]: E1027 23:40:03.474360 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.474435 kubelet[2673]: W1027 23:40:03.474423 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.474517 kubelet[2673]: E1027 23:40:03.474496 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 23:40:03.474855 kubelet[2673]: E1027 23:40:03.474841 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.474983 kubelet[2673]: W1027 23:40:03.474900 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.474983 kubelet[2673]: E1027 23:40:03.474930 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 23:40:03.475114 kubelet[2673]: E1027 23:40:03.475101 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.475170 kubelet[2673]: W1027 23:40:03.475160 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.475234 kubelet[2673]: E1027 23:40:03.475217 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 23:40:03.475524 kubelet[2673]: E1027 23:40:03.475397 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 23:40:03.475524 kubelet[2673]: W1027 23:40:03.475415 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 23:40:03.475524 kubelet[2673]: E1027 23:40:03.475441 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Oct 27 23:40:03.475665 kubelet[2673]: E1027 23:40:03.475653 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 27 23:40:03.475719 kubelet[2673]: W1027 23:40:03.475708 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 27 23:40:03.475793 kubelet[2673]: E1027 23:40:03.475765 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the three kubelet messages above repeat verbatim, apart from timestamps, through Oct 27 23:40:03.490022]
Oct 27 23:40:04.747920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1993994140.mount: Deactivated successfully.
Oct 27 23:40:05.050174 kubelet[2673]: E1027 23:40:05.050131 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-shccv" podUID="c6363763-fd3b-49e5-96bd-c0e1b8f05225"
Oct 27 23:40:05.274631 containerd[1544]: time="2025-10-27T23:40:05.274050479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 23:40:05.274631 containerd[1544]: time="2025-10-27T23:40:05.274572039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Oct 27 23:40:05.275429 containerd[1544]: time="2025-10-27T23:40:05.275401640Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 23:40:05.277446 containerd[1544]: time="2025-10-27T23:40:05.277411962Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 23:40:05.277989 containerd[1544]: time="2025-10-27T23:40:05.277908443Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.902632523s"
Oct 27 23:40:05.277989 containerd[1544]: time="2025-10-27T23:40:05.277936643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Oct 27 23:40:05.278869 containerd[1544]: time="2025-10-27T23:40:05.278840804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Oct 27 23:40:05.299038 containerd[1544]: time="2025-10-27T23:40:05.298970786Z" level=info msg="CreateContainer within sandbox \"c1fcc421f254b59baaafa12af148385492d994e4d51db8e8f8dae236604cccd2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 27 23:40:05.307139 containerd[1544]: time="2025-10-27T23:40:05.307042835Z" level=info msg="Container 5d5d320654ad56c38273f8ab133ea12dab89029015d6519d0a3fddc57326b449: CDI devices from CRI Config.CDIDevices: []"
Oct 27 23:40:05.315611 containerd[1544]: time="2025-10-27T23:40:05.315562605Z" level=info msg="CreateContainer within sandbox \"c1fcc421f254b59baaafa12af148385492d994e4d51db8e8f8dae236604cccd2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5d5d320654ad56c38273f8ab133ea12dab89029015d6519d0a3fddc57326b449\""
Oct 27 23:40:05.316233 containerd[1544]: time="2025-10-27T23:40:05.316190965Z" level=info msg="StartContainer for \"5d5d320654ad56c38273f8ab133ea12dab89029015d6519d0a3fddc57326b449\""
Oct 27 23:40:05.319330 containerd[1544]: time="2025-10-27T23:40:05.319237329Z" level=info msg="connecting to shim 5d5d320654ad56c38273f8ab133ea12dab89029015d6519d0a3fddc57326b449" address="unix:///run/containerd/s/7e1b7e0944a456995254b0ea3c9a3e122f0f32d572bd21ccdcfba72436dc4eb2" protocol=ttrpc version=3
Oct 27 23:40:05.344951 systemd[1]: Started cri-containerd-5d5d320654ad56c38273f8ab133ea12dab89029015d6519d0a3fddc57326b449.scope - libcontainer container 5d5d320654ad56c38273f8ab133ea12dab89029015d6519d0a3fddc57326b449.
Oct 27 23:40:05.382490 containerd[1544]: time="2025-10-27T23:40:05.382452359Z" level=info msg="StartContainer for \"5d5d320654ad56c38273f8ab133ea12dab89029015d6519d0a3fddc57326b449\" returns successfully"
Oct 27 23:40:06.116844 kubelet[2673]: E1027 23:40:06.116815 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 23:40:06.143423 kubelet[2673]: I1027 23:40:06.142035 2673 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-bf6cc84fd-h4j4l" podStartSLOduration=2.235559905 podStartE2EDuration="4.142018553s" podCreationTimestamp="2025-10-27 23:40:02 +0000 UTC" firstStartedPulling="2025-10-27 23:40:03.372247356 +0000 UTC m=+21.422870379" lastFinishedPulling="2025-10-27 23:40:05.278706004 +0000 UTC m=+23.329329027" observedRunningTime="2025-10-27 23:40:06.141241112 +0000 UTC m=+24.191864175" watchObservedRunningTime="2025-10-27 23:40:06.142018553 +0000 UTC m=+24.192641576"
Oct 27 23:40:06.187867 kubelet[2673]: E1027 23:40:06.187812 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 27 23:40:06.187867 kubelet[2673]: W1027 23:40:06.187838 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 27 23:40:06.187867 kubelet[2673]: E1027 23:40:06.187859 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the three kubelet messages above repeat verbatim, apart from timestamps, through Oct 27 23:40:06.190116]
Oct 27 23:40:06.190247 kubelet[2673]: E1027 23:40:06.190236 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 27 23:40:06.190247 kubelet[2673]: W1027 23:40:06.190245 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 27 23:40:06.190300 kubelet[2673]: E1027 23:40:06.190253 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Oct 27 23:40:06.195643 kubelet[2673]: E1027 23:40:06.195606 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 27 23:40:06.195643 kubelet[2673]: W1027 23:40:06.195626 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 27 23:40:06.195643 kubelet[2673]: E1027 23:40:06.195641 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the three kubelet messages above repeat verbatim, apart from timestamps, through Oct 27 23:40:06.201079]
Oct 27 23:40:06.202128 kubelet[2673]: E1027 23:40:06.201633 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 27 23:40:06.202128 kubelet[2673]: W1027 23:40:06.201670 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 27 23:40:06.202128 kubelet[2673]: E1027 23:40:06.201685 2673 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Oct 27 23:40:06.316170 containerd[1544]: time="2025-10-27T23:40:06.316113054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 23:40:06.316737 containerd[1544]: time="2025-10-27T23:40:06.316698535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741"
Oct 27 23:40:06.317617 containerd[1544]: time="2025-10-27T23:40:06.317565616Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 23:40:06.319561 containerd[1544]: time="2025-10-27T23:40:06.319524898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 23:40:06.320164 containerd[1544]: time="2025-10-27T23:40:06.320133218Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.041263174s"
Oct 27 23:40:06.320192 containerd[1544]: time="2025-10-27T23:40:06.320171138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\""
Oct 27 23:40:06.321913 containerd[1544]: time="2025-10-27T23:40:06.321883580Z" level=info msg="CreateContainer within sandbox \"a3c885f013d47754722f77e9493530479e054db25c4739e6e654397bb38900f6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Oct 27 23:40:06.339490 containerd[1544]: time="2025-10-27T23:40:06.339065478Z" level=info msg="Container 06aacb552cfd779089b7801b4ad068c9d9606a550d18bbbfc8df64554845b17e: CDI devices from CRI Config.CDIDevices: []"
Oct 27 23:40:06.348200 containerd[1544]: time="2025-10-27T23:40:06.348137968Z" level=info msg="CreateContainer within sandbox \"a3c885f013d47754722f77e9493530479e054db25c4739e6e654397bb38900f6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"06aacb552cfd779089b7801b4ad068c9d9606a550d18bbbfc8df64554845b17e\""
Oct 27 23:40:06.348961 containerd[1544]: time="2025-10-27T23:40:06.348916768Z" level=info msg="StartContainer for \"06aacb552cfd779089b7801b4ad068c9d9606a550d18bbbfc8df64554845b17e\""
Oct 27 23:40:06.351736 containerd[1544]: time="2025-10-27T23:40:06.350963370Z" level=info msg="connecting to shim 06aacb552cfd779089b7801b4ad068c9d9606a550d18bbbfc8df64554845b17e" address="unix:///run/containerd/s/443fba344cb2dc4f0bdfd1c20022a54b9028f12a6918c787736d00b70b213c02" protocol=ttrpc version=3
Oct 27 23:40:06.374993 systemd[1]: Started cri-containerd-06aacb552cfd779089b7801b4ad068c9d9606a550d18bbbfc8df64554845b17e.scope - libcontainer container 06aacb552cfd779089b7801b4ad068c9d9606a550d18bbbfc8df64554845b17e.
Oct 27 23:40:06.444943 containerd[1544]: time="2025-10-27T23:40:06.444359348Z" level=info msg="StartContainer for \"06aacb552cfd779089b7801b4ad068c9d9606a550d18bbbfc8df64554845b17e\" returns successfully"
Oct 27 23:40:06.463414 systemd[1]: cri-containerd-06aacb552cfd779089b7801b4ad068c9d9606a550d18bbbfc8df64554845b17e.scope: Deactivated successfully.
Oct 27 23:40:06.463678 systemd[1]: cri-containerd-06aacb552cfd779089b7801b4ad068c9d9606a550d18bbbfc8df64554845b17e.scope: Consumed 31ms CPU time, 6.3M memory peak, 1M written to disk.
Oct 27 23:40:06.485631 containerd[1544]: time="2025-10-27T23:40:06.485588471Z" level=info msg="received exit event container_id:\"06aacb552cfd779089b7801b4ad068c9d9606a550d18bbbfc8df64554845b17e\" id:\"06aacb552cfd779089b7801b4ad068c9d9606a550d18bbbfc8df64554845b17e\" pid:3345 exited_at:{seconds:1761608406 nanos:480499585}" Oct 27 23:40:06.485836 containerd[1544]: time="2025-10-27T23:40:06.485658391Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06aacb552cfd779089b7801b4ad068c9d9606a550d18bbbfc8df64554845b17e\" id:\"06aacb552cfd779089b7801b4ad068c9d9606a550d18bbbfc8df64554845b17e\" pid:3345 exited_at:{seconds:1761608406 nanos:480499585}" Oct 27 23:40:06.591883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06aacb552cfd779089b7801b4ad068c9d9606a550d18bbbfc8df64554845b17e-rootfs.mount: Deactivated successfully. Oct 27 23:40:07.050159 kubelet[2673]: E1027 23:40:07.050085 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-shccv" podUID="c6363763-fd3b-49e5-96bd-c0e1b8f05225" Oct 27 23:40:07.120817 kubelet[2673]: I1027 23:40:07.120677 2673 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 27 23:40:07.121143 kubelet[2673]: E1027 23:40:07.121009 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:07.121928 kubelet[2673]: E1027 23:40:07.121848 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:07.123495 containerd[1544]: time="2025-10-27T23:40:07.123452167Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 27 23:40:09.050937 kubelet[2673]: E1027 23:40:09.050836 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-shccv" podUID="c6363763-fd3b-49e5-96bd-c0e1b8f05225" Oct 27 23:40:10.844196 kubelet[2673]: I1027 23:40:10.844118 2673 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 27 23:40:10.844888 kubelet[2673]: E1027 23:40:10.844867 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:11.050362 kubelet[2673]: E1027 23:40:11.050278 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-shccv" podUID="c6363763-fd3b-49e5-96bd-c0e1b8f05225" Oct 27 23:40:11.084430 containerd[1544]: time="2025-10-27T23:40:11.084364264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:40:11.085123 containerd[1544]: time="2025-10-27T23:40:11.085096785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Oct 27 23:40:11.086211 containerd[1544]: time="2025-10-27T23:40:11.086169345Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:40:11.090625 containerd[1544]: time="2025-10-27T23:40:11.090584269Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:40:11.091535 containerd[1544]: time="2025-10-27T23:40:11.091476789Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.967891182s" Oct 27 23:40:11.091535 containerd[1544]: time="2025-10-27T23:40:11.091527869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Oct 27 23:40:11.095414 containerd[1544]: time="2025-10-27T23:40:11.095314352Z" level=info msg="CreateContainer within sandbox \"a3c885f013d47754722f77e9493530479e054db25c4739e6e654397bb38900f6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 27 23:40:11.107119 containerd[1544]: time="2025-10-27T23:40:11.105407200Z" level=info msg="Container 26d16f4866352ab4f9ea402f9edd751760a985a260b148f92eedce3061051df8: CDI devices from CRI Config.CDIDevices: []" Oct 27 23:40:11.115469 containerd[1544]: time="2025-10-27T23:40:11.115411807Z" level=info msg="CreateContainer within sandbox \"a3c885f013d47754722f77e9493530479e054db25c4739e6e654397bb38900f6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"26d16f4866352ab4f9ea402f9edd751760a985a260b148f92eedce3061051df8\"" Oct 27 23:40:11.115921 containerd[1544]: time="2025-10-27T23:40:11.115891768Z" level=info msg="StartContainer for \"26d16f4866352ab4f9ea402f9edd751760a985a260b148f92eedce3061051df8\"" Oct 27 23:40:11.118746 containerd[1544]: time="2025-10-27T23:40:11.117981129Z" level=info msg="connecting to shim 
26d16f4866352ab4f9ea402f9edd751760a985a260b148f92eedce3061051df8" address="unix:///run/containerd/s/443fba344cb2dc4f0bdfd1c20022a54b9028f12a6918c787736d00b70b213c02" protocol=ttrpc version=3 Oct 27 23:40:11.135019 kubelet[2673]: E1027 23:40:11.134988 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:11.145031 systemd[1]: Started cri-containerd-26d16f4866352ab4f9ea402f9edd751760a985a260b148f92eedce3061051df8.scope - libcontainer container 26d16f4866352ab4f9ea402f9edd751760a985a260b148f92eedce3061051df8. Oct 27 23:40:11.190681 containerd[1544]: time="2025-10-27T23:40:11.190567184Z" level=info msg="StartContainer for \"26d16f4866352ab4f9ea402f9edd751760a985a260b148f92eedce3061051df8\" returns successfully" Oct 27 23:40:11.817922 systemd[1]: cri-containerd-26d16f4866352ab4f9ea402f9edd751760a985a260b148f92eedce3061051df8.scope: Deactivated successfully. Oct 27 23:40:11.818197 systemd[1]: cri-containerd-26d16f4866352ab4f9ea402f9edd751760a985a260b148f92eedce3061051df8.scope: Consumed 463ms CPU time, 179.6M memory peak, 2.6M read from disk, 165.9M written to disk. 
Oct 27 23:40:11.819706 containerd[1544]: time="2025-10-27T23:40:11.819608218Z" level=info msg="received exit event container_id:\"26d16f4866352ab4f9ea402f9edd751760a985a260b148f92eedce3061051df8\" id:\"26d16f4866352ab4f9ea402f9edd751760a985a260b148f92eedce3061051df8\" pid:3405 exited_at:{seconds:1761608411 nanos:819365338}" Oct 27 23:40:11.819706 containerd[1544]: time="2025-10-27T23:40:11.819676219Z" level=info msg="TaskExit event in podsandbox handler container_id:\"26d16f4866352ab4f9ea402f9edd751760a985a260b148f92eedce3061051df8\" id:\"26d16f4866352ab4f9ea402f9edd751760a985a260b148f92eedce3061051df8\" pid:3405 exited_at:{seconds:1761608411 nanos:819365338}" Oct 27 23:40:11.829794 kubelet[2673]: I1027 23:40:11.829355 2673 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 27 23:40:11.844519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26d16f4866352ab4f9ea402f9edd751760a985a260b148f92eedce3061051df8-rootfs.mount: Deactivated successfully. Oct 27 23:40:11.884674 systemd[1]: Created slice kubepods-burstable-pod8cfb3dc7_ab65_4eb3_9eff_4a740cb4ca27.slice - libcontainer container kubepods-burstable-pod8cfb3dc7_ab65_4eb3_9eff_4a740cb4ca27.slice. Oct 27 23:40:11.890595 systemd[1]: Created slice kubepods-burstable-pod95482e7a_9a23_4b52_8975_1de0f3e95885.slice - libcontainer container kubepods-burstable-pod95482e7a_9a23_4b52_8975_1de0f3e95885.slice. Oct 27 23:40:11.896184 systemd[1]: Created slice kubepods-besteffort-pod7eea9394_537e_498e_8ee0_ada3b969c833.slice - libcontainer container kubepods-besteffort-pod7eea9394_537e_498e_8ee0_ada3b969c833.slice. Oct 27 23:40:11.904176 systemd[1]: Created slice kubepods-besteffort-pod6f5865ca_3aea_44fa_9144_072fef5dde02.slice - libcontainer container kubepods-besteffort-pod6f5865ca_3aea_44fa_9144_072fef5dde02.slice. 
Oct 27 23:40:11.914911 systemd[1]: Created slice kubepods-besteffort-podf28f980f_552d_4708_ba17_813aa6dc44ab.slice - libcontainer container kubepods-besteffort-podf28f980f_552d_4708_ba17_813aa6dc44ab.slice. Oct 27 23:40:11.923237 systemd[1]: Created slice kubepods-besteffort-podeca582f5_cc8d_425a_955f_92ba936703d3.slice - libcontainer container kubepods-besteffort-podeca582f5_cc8d_425a_955f_92ba936703d3.slice. Oct 27 23:40:11.928612 systemd[1]: Created slice kubepods-besteffort-pod8e5a6c05_e53d_438e_b01a_ad6295f7d8ed.slice - libcontainer container kubepods-besteffort-pod8e5a6c05_e53d_438e_b01a_ad6295f7d8ed.slice. Oct 27 23:40:11.937669 kubelet[2673]: I1027 23:40:11.937627 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cfb3dc7-ab65-4eb3-9eff-4a740cb4ca27-config-volume\") pod \"coredns-668d6bf9bc-lv66f\" (UID: \"8cfb3dc7-ab65-4eb3-9eff-4a740cb4ca27\") " pod="kube-system/coredns-668d6bf9bc-lv66f" Oct 27 23:40:11.938055 kubelet[2673]: I1027 23:40:11.937672 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7eea9394-537e-498e-8ee0-ada3b969c833-calico-apiserver-certs\") pod \"calico-apiserver-6f59658cf9-fcbh6\" (UID: \"7eea9394-537e-498e-8ee0-ada3b969c833\") " pod="calico-apiserver/calico-apiserver-6f59658cf9-fcbh6" Oct 27 23:40:11.938055 kubelet[2673]: I1027 23:40:11.937705 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bdrp\" (UniqueName: \"kubernetes.io/projected/7eea9394-537e-498e-8ee0-ada3b969c833-kube-api-access-6bdrp\") pod \"calico-apiserver-6f59658cf9-fcbh6\" (UID: \"7eea9394-537e-498e-8ee0-ada3b969c833\") " pod="calico-apiserver/calico-apiserver-6f59658cf9-fcbh6" Oct 27 23:40:11.938055 kubelet[2673]: I1027 23:40:11.937810 2673 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvrrw\" (UniqueName: \"kubernetes.io/projected/95482e7a-9a23-4b52-8975-1de0f3e95885-kube-api-access-mvrrw\") pod \"coredns-668d6bf9bc-xxn6f\" (UID: \"95482e7a-9a23-4b52-8975-1de0f3e95885\") " pod="kube-system/coredns-668d6bf9bc-xxn6f" Oct 27 23:40:11.938055 kubelet[2673]: I1027 23:40:11.937880 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f28f980f-552d-4708-ba17-813aa6dc44ab-whisker-ca-bundle\") pod \"whisker-b6759447d-6lmw6\" (UID: \"f28f980f-552d-4708-ba17-813aa6dc44ab\") " pod="calico-system/whisker-b6759447d-6lmw6" Oct 27 23:40:11.938055 kubelet[2673]: I1027 23:40:11.937919 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/eca582f5-cc8d-425a-955f-92ba936703d3-goldmane-key-pair\") pod \"goldmane-666569f655-9qxlt\" (UID: \"eca582f5-cc8d-425a-955f-92ba936703d3\") " pod="calico-system/goldmane-666569f655-9qxlt" Oct 27 23:40:11.938163 kubelet[2673]: I1027 23:40:11.937969 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eca582f5-cc8d-425a-955f-92ba936703d3-goldmane-ca-bundle\") pod \"goldmane-666569f655-9qxlt\" (UID: \"eca582f5-cc8d-425a-955f-92ba936703d3\") " pod="calico-system/goldmane-666569f655-9qxlt" Oct 27 23:40:11.938163 kubelet[2673]: I1027 23:40:11.937996 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95482e7a-9a23-4b52-8975-1de0f3e95885-config-volume\") pod \"coredns-668d6bf9bc-xxn6f\" (UID: \"95482e7a-9a23-4b52-8975-1de0f3e95885\") " pod="kube-system/coredns-668d6bf9bc-xxn6f" Oct 27 23:40:11.938163 kubelet[2673]: I1027 
23:40:11.938013 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgbnk\" (UniqueName: \"kubernetes.io/projected/eca582f5-cc8d-425a-955f-92ba936703d3-kube-api-access-rgbnk\") pod \"goldmane-666569f655-9qxlt\" (UID: \"eca582f5-cc8d-425a-955f-92ba936703d3\") " pod="calico-system/goldmane-666569f655-9qxlt" Oct 27 23:40:11.938163 kubelet[2673]: I1027 23:40:11.938030 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f5865ca-3aea-44fa-9144-072fef5dde02-tigera-ca-bundle\") pod \"calico-kube-controllers-c698d47cd-h2pk7\" (UID: \"6f5865ca-3aea-44fa-9144-072fef5dde02\") " pod="calico-system/calico-kube-controllers-c698d47cd-h2pk7" Oct 27 23:40:11.938163 kubelet[2673]: I1027 23:40:11.938056 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w2bb\" (UniqueName: \"kubernetes.io/projected/6f5865ca-3aea-44fa-9144-072fef5dde02-kube-api-access-5w2bb\") pod \"calico-kube-controllers-c698d47cd-h2pk7\" (UID: \"6f5865ca-3aea-44fa-9144-072fef5dde02\") " pod="calico-system/calico-kube-controllers-c698d47cd-h2pk7" Oct 27 23:40:11.938263 kubelet[2673]: I1027 23:40:11.938111 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8e5a6c05-e53d-438e-b01a-ad6295f7d8ed-calico-apiserver-certs\") pod \"calico-apiserver-6f59658cf9-w94zx\" (UID: \"8e5a6c05-e53d-438e-b01a-ad6295f7d8ed\") " pod="calico-apiserver/calico-apiserver-6f59658cf9-w94zx" Oct 27 23:40:11.938263 kubelet[2673]: I1027 23:40:11.938130 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t9mz\" (UniqueName: \"kubernetes.io/projected/8e5a6c05-e53d-438e-b01a-ad6295f7d8ed-kube-api-access-9t9mz\") pod 
\"calico-apiserver-6f59658cf9-w94zx\" (UID: \"8e5a6c05-e53d-438e-b01a-ad6295f7d8ed\") " pod="calico-apiserver/calico-apiserver-6f59658cf9-w94zx" Oct 27 23:40:11.938263 kubelet[2673]: I1027 23:40:11.938175 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjd4k\" (UniqueName: \"kubernetes.io/projected/f28f980f-552d-4708-ba17-813aa6dc44ab-kube-api-access-xjd4k\") pod \"whisker-b6759447d-6lmw6\" (UID: \"f28f980f-552d-4708-ba17-813aa6dc44ab\") " pod="calico-system/whisker-b6759447d-6lmw6" Oct 27 23:40:11.938263 kubelet[2673]: I1027 23:40:11.938196 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eca582f5-cc8d-425a-955f-92ba936703d3-config\") pod \"goldmane-666569f655-9qxlt\" (UID: \"eca582f5-cc8d-425a-955f-92ba936703d3\") " pod="calico-system/goldmane-666569f655-9qxlt" Oct 27 23:40:11.938263 kubelet[2673]: I1027 23:40:11.938212 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f28f980f-552d-4708-ba17-813aa6dc44ab-whisker-backend-key-pair\") pod \"whisker-b6759447d-6lmw6\" (UID: \"f28f980f-552d-4708-ba17-813aa6dc44ab\") " pod="calico-system/whisker-b6759447d-6lmw6" Oct 27 23:40:11.938373 kubelet[2673]: I1027 23:40:11.938230 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqxpk\" (UniqueName: \"kubernetes.io/projected/8cfb3dc7-ab65-4eb3-9eff-4a740cb4ca27-kube-api-access-dqxpk\") pod \"coredns-668d6bf9bc-lv66f\" (UID: \"8cfb3dc7-ab65-4eb3-9eff-4a740cb4ca27\") " pod="kube-system/coredns-668d6bf9bc-lv66f" Oct 27 23:40:12.139849 kubelet[2673]: E1027 23:40:12.139700 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Oct 27 23:40:12.141711 containerd[1544]: time="2025-10-27T23:40:12.141673855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 27 23:40:12.188037 kubelet[2673]: E1027 23:40:12.188006 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:12.189647 containerd[1544]: time="2025-10-27T23:40:12.188531208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lv66f,Uid:8cfb3dc7-ab65-4eb3-9eff-4a740cb4ca27,Namespace:kube-system,Attempt:0,}" Oct 27 23:40:12.193876 kubelet[2673]: E1027 23:40:12.193806 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:12.197106 containerd[1544]: time="2025-10-27T23:40:12.194657092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xxn6f,Uid:95482e7a-9a23-4b52-8975-1de0f3e95885,Namespace:kube-system,Attempt:0,}" Oct 27 23:40:12.200644 containerd[1544]: time="2025-10-27T23:40:12.199644416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f59658cf9-fcbh6,Uid:7eea9394-537e-498e-8ee0-ada3b969c833,Namespace:calico-apiserver,Attempt:0,}" Oct 27 23:40:12.214637 containerd[1544]: time="2025-10-27T23:40:12.214600186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c698d47cd-h2pk7,Uid:6f5865ca-3aea-44fa-9144-072fef5dde02,Namespace:calico-system,Attempt:0,}" Oct 27 23:40:12.221941 containerd[1544]: time="2025-10-27T23:40:12.221880951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b6759447d-6lmw6,Uid:f28f980f-552d-4708-ba17-813aa6dc44ab,Namespace:calico-system,Attempt:0,}" Oct 27 23:40:12.228346 containerd[1544]: time="2025-10-27T23:40:12.228302556Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-9qxlt,Uid:eca582f5-cc8d-425a-955f-92ba936703d3,Namespace:calico-system,Attempt:0,}" Oct 27 23:40:12.232853 containerd[1544]: time="2025-10-27T23:40:12.232809199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f59658cf9-w94zx,Uid:8e5a6c05-e53d-438e-b01a-ad6295f7d8ed,Namespace:calico-apiserver,Attempt:0,}" Oct 27 23:40:12.335536 containerd[1544]: time="2025-10-27T23:40:12.335481472Z" level=error msg="Failed to destroy network for sandbox \"8a53c9715dbf0798f4610e5e74eeb867696c157f7fa46006a93d93456fe7a418\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 23:40:12.340794 containerd[1544]: time="2025-10-27T23:40:12.340721996Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lv66f,Uid:8cfb3dc7-ab65-4eb3-9eff-4a740cb4ca27,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a53c9715dbf0798f4610e5e74eeb867696c157f7fa46006a93d93456fe7a418\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 23:40:12.347814 kubelet[2673]: E1027 23:40:12.347737 2673 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a53c9715dbf0798f4610e5e74eeb867696c157f7fa46006a93d93456fe7a418\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 23:40:12.348159 kubelet[2673]: E1027 23:40:12.348133 2673 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8a53c9715dbf0798f4610e5e74eeb867696c157f7fa46006a93d93456fe7a418\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lv66f" Oct 27 23:40:12.348243 kubelet[2673]: E1027 23:40:12.348229 2673 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a53c9715dbf0798f4610e5e74eeb867696c157f7fa46006a93d93456fe7a418\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lv66f" Oct 27 23:40:12.348721 kubelet[2673]: E1027 23:40:12.348351 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lv66f_kube-system(8cfb3dc7-ab65-4eb3-9eff-4a740cb4ca27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lv66f_kube-system(8cfb3dc7-ab65-4eb3-9eff-4a740cb4ca27)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a53c9715dbf0798f4610e5e74eeb867696c157f7fa46006a93d93456fe7a418\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lv66f" podUID="8cfb3dc7-ab65-4eb3-9eff-4a740cb4ca27" Oct 27 23:40:12.349270 containerd[1544]: time="2025-10-27T23:40:12.349190881Z" level=error msg="Failed to destroy network for sandbox \"9f0d85ba91132d0cd683facddfec084ea54781d109b23e06fb5c06040e526836\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 23:40:12.352279 
containerd[1544]: time="2025-10-27T23:40:12.352227044Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9qxlt,Uid:eca582f5-cc8d-425a-955f-92ba936703d3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f0d85ba91132d0cd683facddfec084ea54781d109b23e06fb5c06040e526836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 23:40:12.352836 kubelet[2673]: E1027 23:40:12.352715 2673 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f0d85ba91132d0cd683facddfec084ea54781d109b23e06fb5c06040e526836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 23:40:12.352836 kubelet[2673]: E1027 23:40:12.352781 2673 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f0d85ba91132d0cd683facddfec084ea54781d109b23e06fb5c06040e526836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9qxlt" Oct 27 23:40:12.352836 kubelet[2673]: E1027 23:40:12.352803 2673 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f0d85ba91132d0cd683facddfec084ea54781d109b23e06fb5c06040e526836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9qxlt" Oct 27 
23:40:12.353058 kubelet[2673]: E1027 23:40:12.353004 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-9qxlt_calico-system(eca582f5-cc8d-425a-955f-92ba936703d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-9qxlt_calico-system(eca582f5-cc8d-425a-955f-92ba936703d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f0d85ba91132d0cd683facddfec084ea54781d109b23e06fb5c06040e526836\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-9qxlt" podUID="eca582f5-cc8d-425a-955f-92ba936703d3" Oct 27 23:40:12.362121 containerd[1544]: time="2025-10-27T23:40:12.362001411Z" level=error msg="Failed to destroy network for sandbox \"0a5028d9819c83ab192ff4ffbfc830b33c4efda2e9a9ff4908ac97175c1c3181\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 23:40:12.363959 containerd[1544]: time="2025-10-27T23:40:12.363912292Z" level=error msg="Failed to destroy network for sandbox \"dd78d418a3622bfb5eff7984ffe304539f6154a68390741da0a894847ee1875c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 23:40:12.365145 containerd[1544]: time="2025-10-27T23:40:12.365079093Z" level=error msg="Failed to destroy network for sandbox \"5ac7434c7ca83d1d6315232baf58e8408eac20724fc5627c430d2e0b163b2c05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 23:40:12.365243 
containerd[1544]: time="2025-10-27T23:40:12.365095093Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f59658cf9-w94zx,Uid:8e5a6c05-e53d-438e-b01a-ad6295f7d8ed,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a5028d9819c83ab192ff4ffbfc830b33c4efda2e9a9ff4908ac97175c1c3181\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 23:40:12.365596 kubelet[2673]: E1027 23:40:12.365549 2673 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a5028d9819c83ab192ff4ffbfc830b33c4efda2e9a9ff4908ac97175c1c3181\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 23:40:12.365668 kubelet[2673]: E1027 23:40:12.365622 2673 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a5028d9819c83ab192ff4ffbfc830b33c4efda2e9a9ff4908ac97175c1c3181\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f59658cf9-w94zx" Oct 27 23:40:12.365668 kubelet[2673]: E1027 23:40:12.365644 2673 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a5028d9819c83ab192ff4ffbfc830b33c4efda2e9a9ff4908ac97175c1c3181\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6f59658cf9-w94zx" Oct 27 23:40:12.365725 kubelet[2673]: E1027 23:40:12.365688 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f59658cf9-w94zx_calico-apiserver(8e5a6c05-e53d-438e-b01a-ad6295f7d8ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f59658cf9-w94zx_calico-apiserver(8e5a6c05-e53d-438e-b01a-ad6295f7d8ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a5028d9819c83ab192ff4ffbfc830b33c4efda2e9a9ff4908ac97175c1c3181\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f59658cf9-w94zx" podUID="8e5a6c05-e53d-438e-b01a-ad6295f7d8ed" Oct 27 23:40:12.366411 containerd[1544]: time="2025-10-27T23:40:12.366264494Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f59658cf9-fcbh6,Uid:7eea9394-537e-498e-8ee0-ada3b969c833,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd78d418a3622bfb5eff7984ffe304539f6154a68390741da0a894847ee1875c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 23:40:12.366586 kubelet[2673]: E1027 23:40:12.366557 2673 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd78d418a3622bfb5eff7984ffe304539f6154a68390741da0a894847ee1875c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 23:40:12.366656 kubelet[2673]: E1027 23:40:12.366601 2673 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd78d418a3622bfb5eff7984ffe304539f6154a68390741da0a894847ee1875c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f59658cf9-fcbh6" Oct 27 23:40:12.366656 kubelet[2673]: E1027 23:40:12.366619 2673 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd78d418a3622bfb5eff7984ffe304539f6154a68390741da0a894847ee1875c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f59658cf9-fcbh6" Oct 27 23:40:12.366711 kubelet[2673]: E1027 23:40:12.366655 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f59658cf9-fcbh6_calico-apiserver(7eea9394-537e-498e-8ee0-ada3b969c833)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f59658cf9-fcbh6_calico-apiserver(7eea9394-537e-498e-8ee0-ada3b969c833)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd78d418a3622bfb5eff7984ffe304539f6154a68390741da0a894847ee1875c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f59658cf9-fcbh6" podUID="7eea9394-537e-498e-8ee0-ada3b969c833" Oct 27 23:40:12.366987 containerd[1544]: time="2025-10-27T23:40:12.366949854Z" level=error msg="Failed to destroy network for sandbox \"5a61753a7e9d0c3dca87e0866e2aa4cb28d0a3cbfabd51993067215000289664\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 23:40:12.369830 containerd[1544]: time="2025-10-27T23:40:12.369616136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c698d47cd-h2pk7,Uid:6f5865ca-3aea-44fa-9144-072fef5dde02,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ac7434c7ca83d1d6315232baf58e8408eac20724fc5627c430d2e0b163b2c05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 23:40:12.370023 kubelet[2673]: E1027 23:40:12.369981 2673 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ac7434c7ca83d1d6315232baf58e8408eac20724fc5627c430d2e0b163b2c05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 23:40:12.370070 kubelet[2673]: E1027 23:40:12.370038 2673 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ac7434c7ca83d1d6315232baf58e8408eac20724fc5627c430d2e0b163b2c05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c698d47cd-h2pk7" Oct 27 23:40:12.370070 kubelet[2673]: E1027 23:40:12.370058 2673 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5ac7434c7ca83d1d6315232baf58e8408eac20724fc5627c430d2e0b163b2c05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c698d47cd-h2pk7" Oct 27 23:40:12.371032 kubelet[2673]: E1027 23:40:12.370625 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c698d47cd-h2pk7_calico-system(6f5865ca-3aea-44fa-9144-072fef5dde02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c698d47cd-h2pk7_calico-system(6f5865ca-3aea-44fa-9144-072fef5dde02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ac7434c7ca83d1d6315232baf58e8408eac20724fc5627c430d2e0b163b2c05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c698d47cd-h2pk7" podUID="6f5865ca-3aea-44fa-9144-072fef5dde02" Oct 27 23:40:12.373871 containerd[1544]: time="2025-10-27T23:40:12.373749339Z" level=error msg="Failed to destroy network for sandbox \"aa092658acf0f3072cd3a6f33c7a8f1b6a51d5ee0c22c2c1ac9dd470506b26c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 23:40:12.375033 containerd[1544]: time="2025-10-27T23:40:12.374993380Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xxn6f,Uid:95482e7a-9a23-4b52-8975-1de0f3e95885,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a61753a7e9d0c3dca87e0866e2aa4cb28d0a3cbfabd51993067215000289664\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 23:40:12.375330 kubelet[2673]: E1027 23:40:12.375279 2673 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a61753a7e9d0c3dca87e0866e2aa4cb28d0a3cbfabd51993067215000289664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 23:40:12.375441 kubelet[2673]: E1027 23:40:12.375362 2673 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a61753a7e9d0c3dca87e0866e2aa4cb28d0a3cbfabd51993067215000289664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xxn6f" Oct 27 23:40:12.375441 kubelet[2673]: E1027 23:40:12.375388 2673 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a61753a7e9d0c3dca87e0866e2aa4cb28d0a3cbfabd51993067215000289664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xxn6f" Oct 27 23:40:12.375491 kubelet[2673]: E1027 23:40:12.375432 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xxn6f_kube-system(95482e7a-9a23-4b52-8975-1de0f3e95885)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xxn6f_kube-system(95482e7a-9a23-4b52-8975-1de0f3e95885)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"5a61753a7e9d0c3dca87e0866e2aa4cb28d0a3cbfabd51993067215000289664\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xxn6f" podUID="95482e7a-9a23-4b52-8975-1de0f3e95885" Oct 27 23:40:12.376092 containerd[1544]: time="2025-10-27T23:40:12.376049540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b6759447d-6lmw6,Uid:f28f980f-552d-4708-ba17-813aa6dc44ab,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa092658acf0f3072cd3a6f33c7a8f1b6a51d5ee0c22c2c1ac9dd470506b26c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 23:40:12.376259 kubelet[2673]: E1027 23:40:12.376234 2673 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa092658acf0f3072cd3a6f33c7a8f1b6a51d5ee0c22c2c1ac9dd470506b26c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 23:40:12.376305 kubelet[2673]: E1027 23:40:12.376275 2673 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa092658acf0f3072cd3a6f33c7a8f1b6a51d5ee0c22c2c1ac9dd470506b26c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b6759447d-6lmw6" Oct 27 23:40:12.376336 kubelet[2673]: E1027 23:40:12.376303 2673 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa092658acf0f3072cd3a6f33c7a8f1b6a51d5ee0c22c2c1ac9dd470506b26c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b6759447d-6lmw6" Oct 27 23:40:12.376364 kubelet[2673]: E1027 23:40:12.376341 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b6759447d-6lmw6_calico-system(f28f980f-552d-4708-ba17-813aa6dc44ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b6759447d-6lmw6_calico-system(f28f980f-552d-4708-ba17-813aa6dc44ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa092658acf0f3072cd3a6f33c7a8f1b6a51d5ee0c22c2c1ac9dd470506b26c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b6759447d-6lmw6" podUID="f28f980f-552d-4708-ba17-813aa6dc44ab" Oct 27 23:40:13.060511 systemd[1]: Created slice kubepods-besteffort-podc6363763_fd3b_49e5_96bd_c0e1b8f05225.slice - libcontainer container kubepods-besteffort-podc6363763_fd3b_49e5_96bd_c0e1b8f05225.slice. Oct 27 23:40:13.064786 containerd[1544]: time="2025-10-27T23:40:13.064537625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-shccv,Uid:c6363763-fd3b-49e5-96bd-c0e1b8f05225,Namespace:calico-system,Attempt:0,}" Oct 27 23:40:13.108208 systemd[1]: run-netns-cni\x2d11df0ccb\x2d2cf6\x2d8bef\x2d7c97\x2d1a7e6b2a821d.mount: Deactivated successfully. Oct 27 23:40:13.108313 systemd[1]: run-netns-cni\x2dc7c5389c\x2df766\x2d7e65\x2d60e2\x2dbee31c7a12a0.mount: Deactivated successfully. 
Oct 27 23:40:13.108367 systemd[1]: run-netns-cni\x2d199fe5cf\x2d72ec\x2dc848\x2d2f4e\x2dabb5fd0e96ed.mount: Deactivated successfully. Oct 27 23:40:13.108410 systemd[1]: run-netns-cni\x2d9112a708\x2d4eb8\x2d4415\x2d21d6\x2db5f7a1ab2907.mount: Deactivated successfully. Oct 27 23:40:13.113524 containerd[1544]: time="2025-10-27T23:40:13.113458057Z" level=error msg="Failed to destroy network for sandbox \"4575fa3641c4b7204acc162158e0b28303df886d121b056f2f98176defca6ab7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 23:40:13.115260 systemd[1]: run-netns-cni\x2d1b35cd41\x2dbe4f\x2d0412\x2d9a29\x2d60348f19ad90.mount: Deactivated successfully. Oct 27 23:40:13.118617 containerd[1544]: time="2025-10-27T23:40:13.118537380Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-shccv,Uid:c6363763-fd3b-49e5-96bd-c0e1b8f05225,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4575fa3641c4b7204acc162158e0b28303df886d121b056f2f98176defca6ab7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 23:40:13.118885 kubelet[2673]: E1027 23:40:13.118754 2673 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4575fa3641c4b7204acc162158e0b28303df886d121b056f2f98176defca6ab7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 23:40:13.118885 kubelet[2673]: E1027 23:40:13.118851 2673 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"4575fa3641c4b7204acc162158e0b28303df886d121b056f2f98176defca6ab7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-shccv" Oct 27 23:40:13.118885 kubelet[2673]: E1027 23:40:13.118873 2673 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4575fa3641c4b7204acc162158e0b28303df886d121b056f2f98176defca6ab7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-shccv" Oct 27 23:40:13.120669 kubelet[2673]: E1027 23:40:13.118918 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-shccv_calico-system(c6363763-fd3b-49e5-96bd-c0e1b8f05225)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-shccv_calico-system(c6363763-fd3b-49e5-96bd-c0e1b8f05225)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4575fa3641c4b7204acc162158e0b28303df886d121b056f2f98176defca6ab7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-shccv" podUID="c6363763-fd3b-49e5-96bd-c0e1b8f05225" Oct 27 23:40:16.094984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount699309086.mount: Deactivated successfully. 
Oct 27 23:40:16.355541 containerd[1544]: time="2025-10-27T23:40:16.355092283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:40:16.355541 containerd[1544]: time="2025-10-27T23:40:16.355427563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Oct 27 23:40:16.366805 containerd[1544]: time="2025-10-27T23:40:16.366038008Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:40:16.368690 containerd[1544]: time="2025-10-27T23:40:16.368643770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:40:16.369660 containerd[1544]: time="2025-10-27T23:40:16.369620570Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.227902555s" Oct 27 23:40:16.369805 containerd[1544]: time="2025-10-27T23:40:16.369785051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Oct 27 23:40:16.381430 containerd[1544]: time="2025-10-27T23:40:16.381388617Z" level=info msg="CreateContainer within sandbox \"a3c885f013d47754722f77e9493530479e054db25c4739e6e654397bb38900f6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 27 23:40:16.395232 containerd[1544]: time="2025-10-27T23:40:16.395184704Z" level=info msg="Container 
bcdda18b822fb1d6ab63e6ea8e874c2a6ec62b84adbdac4b44b438188d68a2c4: CDI devices from CRI Config.CDIDevices: []" Oct 27 23:40:16.406569 containerd[1544]: time="2025-10-27T23:40:16.406508911Z" level=info msg="CreateContainer within sandbox \"a3c885f013d47754722f77e9493530479e054db25c4739e6e654397bb38900f6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bcdda18b822fb1d6ab63e6ea8e874c2a6ec62b84adbdac4b44b438188d68a2c4\"" Oct 27 23:40:16.407079 containerd[1544]: time="2025-10-27T23:40:16.407054751Z" level=info msg="StartContainer for \"bcdda18b822fb1d6ab63e6ea8e874c2a6ec62b84adbdac4b44b438188d68a2c4\"" Oct 27 23:40:16.408679 containerd[1544]: time="2025-10-27T23:40:16.408652112Z" level=info msg="connecting to shim bcdda18b822fb1d6ab63e6ea8e874c2a6ec62b84adbdac4b44b438188d68a2c4" address="unix:///run/containerd/s/443fba344cb2dc4f0bdfd1c20022a54b9028f12a6918c787736d00b70b213c02" protocol=ttrpc version=3 Oct 27 23:40:16.427018 systemd[1]: Started cri-containerd-bcdda18b822fb1d6ab63e6ea8e874c2a6ec62b84adbdac4b44b438188d68a2c4.scope - libcontainer container bcdda18b822fb1d6ab63e6ea8e874c2a6ec62b84adbdac4b44b438188d68a2c4. Oct 27 23:40:16.469527 containerd[1544]: time="2025-10-27T23:40:16.469418585Z" level=info msg="StartContainer for \"bcdda18b822fb1d6ab63e6ea8e874c2a6ec62b84adbdac4b44b438188d68a2c4\" returns successfully" Oct 27 23:40:16.596032 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 27 23:40:16.596164 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 27 23:40:16.773509 kubelet[2673]: I1027 23:40:16.773043 2673 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f28f980f-552d-4708-ba17-813aa6dc44ab-whisker-ca-bundle\") pod \"f28f980f-552d-4708-ba17-813aa6dc44ab\" (UID: \"f28f980f-552d-4708-ba17-813aa6dc44ab\") " Oct 27 23:40:16.773509 kubelet[2673]: I1027 23:40:16.773109 2673 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjd4k\" (UniqueName: \"kubernetes.io/projected/f28f980f-552d-4708-ba17-813aa6dc44ab-kube-api-access-xjd4k\") pod \"f28f980f-552d-4708-ba17-813aa6dc44ab\" (UID: \"f28f980f-552d-4708-ba17-813aa6dc44ab\") " Oct 27 23:40:16.773509 kubelet[2673]: I1027 23:40:16.773135 2673 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f28f980f-552d-4708-ba17-813aa6dc44ab-whisker-backend-key-pair\") pod \"f28f980f-552d-4708-ba17-813aa6dc44ab\" (UID: \"f28f980f-552d-4708-ba17-813aa6dc44ab\") " Oct 27 23:40:16.775723 kubelet[2673]: I1027 23:40:16.775569 2673 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f28f980f-552d-4708-ba17-813aa6dc44ab-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f28f980f-552d-4708-ba17-813aa6dc44ab" (UID: "f28f980f-552d-4708-ba17-813aa6dc44ab"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 27 23:40:16.779455 kubelet[2673]: I1027 23:40:16.779364 2673 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f28f980f-552d-4708-ba17-813aa6dc44ab-kube-api-access-xjd4k" (OuterVolumeSpecName: "kube-api-access-xjd4k") pod "f28f980f-552d-4708-ba17-813aa6dc44ab" (UID: "f28f980f-552d-4708-ba17-813aa6dc44ab"). InnerVolumeSpecName "kube-api-access-xjd4k". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 27 23:40:16.779678 kubelet[2673]: I1027 23:40:16.779654 2673 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f28f980f-552d-4708-ba17-813aa6dc44ab-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f28f980f-552d-4708-ba17-813aa6dc44ab" (UID: "f28f980f-552d-4708-ba17-813aa6dc44ab"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 27 23:40:16.874120 kubelet[2673]: I1027 23:40:16.874034 2673 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f28f980f-552d-4708-ba17-813aa6dc44ab-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 27 23:40:16.874120 kubelet[2673]: I1027 23:40:16.874072 2673 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xjd4k\" (UniqueName: \"kubernetes.io/projected/f28f980f-552d-4708-ba17-813aa6dc44ab-kube-api-access-xjd4k\") on node \"localhost\" DevicePath \"\"" Oct 27 23:40:16.874120 kubelet[2673]: I1027 23:40:16.874082 2673 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f28f980f-552d-4708-ba17-813aa6dc44ab-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 27 23:40:17.094462 systemd[1]: var-lib-kubelet-pods-f28f980f\x2d552d\x2d4708\x2dba17\x2d813aa6dc44ab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxjd4k.mount: Deactivated successfully. Oct 27 23:40:17.094563 systemd[1]: var-lib-kubelet-pods-f28f980f\x2d552d\x2d4708\x2dba17\x2d813aa6dc44ab-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Oct 27 23:40:17.159244 kubelet[2673]: E1027 23:40:17.159214 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:17.165910 systemd[1]: Removed slice kubepods-besteffort-podf28f980f_552d_4708_ba17_813aa6dc44ab.slice - libcontainer container kubepods-besteffort-podf28f980f_552d_4708_ba17_813aa6dc44ab.slice. Oct 27 23:40:17.178027 kubelet[2673]: I1027 23:40:17.177961 2673 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qmt97" podStartSLOduration=1.2518857620000001 podStartE2EDuration="14.177942326s" podCreationTimestamp="2025-10-27 23:40:03 +0000 UTC" firstStartedPulling="2025-10-27 23:40:03.444479007 +0000 UTC m=+21.495102030" lastFinishedPulling="2025-10-27 23:40:16.370535571 +0000 UTC m=+34.421158594" observedRunningTime="2025-10-27 23:40:17.177627126 +0000 UTC m=+35.228250149" watchObservedRunningTime="2025-10-27 23:40:17.177942326 +0000 UTC m=+35.228565349" Oct 27 23:40:17.240969 systemd[1]: Created slice kubepods-besteffort-pod5a064a63_743b_4222_a219_9a1bbd1a466a.slice - libcontainer container kubepods-besteffort-pod5a064a63_743b_4222_a219_9a1bbd1a466a.slice. 
Oct 27 23:40:17.277355 kubelet[2673]: I1027 23:40:17.277283 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5a064a63-743b-4222-a219-9a1bbd1a466a-whisker-backend-key-pair\") pod \"whisker-dd8bcbbfb-wngl2\" (UID: \"5a064a63-743b-4222-a219-9a1bbd1a466a\") " pod="calico-system/whisker-dd8bcbbfb-wngl2" Oct 27 23:40:17.277355 kubelet[2673]: I1027 23:40:17.277336 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a064a63-743b-4222-a219-9a1bbd1a466a-whisker-ca-bundle\") pod \"whisker-dd8bcbbfb-wngl2\" (UID: \"5a064a63-743b-4222-a219-9a1bbd1a466a\") " pod="calico-system/whisker-dd8bcbbfb-wngl2" Oct 27 23:40:17.277355 kubelet[2673]: I1027 23:40:17.277354 2673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9d5s\" (UniqueName: \"kubernetes.io/projected/5a064a63-743b-4222-a219-9a1bbd1a466a-kube-api-access-c9d5s\") pod \"whisker-dd8bcbbfb-wngl2\" (UID: \"5a064a63-743b-4222-a219-9a1bbd1a466a\") " pod="calico-system/whisker-dd8bcbbfb-wngl2" Oct 27 23:40:17.544335 containerd[1544]: time="2025-10-27T23:40:17.544276353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dd8bcbbfb-wngl2,Uid:5a064a63-743b-4222-a219-9a1bbd1a466a,Namespace:calico-system,Attempt:0,}" Oct 27 23:40:17.735201 systemd-networkd[1458]: cali67219270358: Link UP Oct 27 23:40:17.735837 systemd-networkd[1458]: cali67219270358: Gained carrier Oct 27 23:40:17.750420 containerd[1544]: 2025-10-27 23:40:17.570 [INFO][3784] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 23:40:17.750420 containerd[1544]: 2025-10-27 23:40:17.602 [INFO][3784] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--dd8bcbbfb--wngl2-eth0 
whisker-dd8bcbbfb- calico-system 5a064a63-743b-4222-a219-9a1bbd1a466a 904 0 2025-10-27 23:40:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:dd8bcbbfb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-dd8bcbbfb-wngl2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali67219270358 [] [] }} ContainerID="292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8" Namespace="calico-system" Pod="whisker-dd8bcbbfb-wngl2" WorkloadEndpoint="localhost-k8s-whisker--dd8bcbbfb--wngl2-" Oct 27 23:40:17.750420 containerd[1544]: 2025-10-27 23:40:17.603 [INFO][3784] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8" Namespace="calico-system" Pod="whisker-dd8bcbbfb-wngl2" WorkloadEndpoint="localhost-k8s-whisker--dd8bcbbfb--wngl2-eth0" Oct 27 23:40:17.750420 containerd[1544]: 2025-10-27 23:40:17.665 [INFO][3799] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8" HandleID="k8s-pod-network.292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8" Workload="localhost-k8s-whisker--dd8bcbbfb--wngl2-eth0" Oct 27 23:40:17.750984 containerd[1544]: 2025-10-27 23:40:17.665 [INFO][3799] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8" HandleID="k8s-pod-network.292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8" Workload="localhost-k8s-whisker--dd8bcbbfb--wngl2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039be80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-dd8bcbbfb-wngl2", "timestamp":"2025-10-27 23:40:17.665595896 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 23:40:17.750984 containerd[1544]: 2025-10-27 23:40:17.665 [INFO][3799] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 23:40:17.750984 containerd[1544]: 2025-10-27 23:40:17.665 [INFO][3799] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 23:40:17.750984 containerd[1544]: 2025-10-27 23:40:17.666 [INFO][3799] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 23:40:17.750984 containerd[1544]: 2025-10-27 23:40:17.676 [INFO][3799] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8" host="localhost" Oct 27 23:40:17.750984 containerd[1544]: 2025-10-27 23:40:17.683 [INFO][3799] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 23:40:17.750984 containerd[1544]: 2025-10-27 23:40:17.688 [INFO][3799] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 23:40:17.750984 containerd[1544]: 2025-10-27 23:40:17.690 [INFO][3799] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 23:40:17.750984 containerd[1544]: 2025-10-27 23:40:17.693 [INFO][3799] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 23:40:17.750984 containerd[1544]: 2025-10-27 23:40:17.693 [INFO][3799] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8" host="localhost" Oct 27 23:40:17.751179 containerd[1544]: 2025-10-27 23:40:17.695 [INFO][3799] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8 Oct 27 23:40:17.751179 
containerd[1544]: 2025-10-27 23:40:17.704 [INFO][3799] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8" host="localhost" Oct 27 23:40:17.751179 containerd[1544]: 2025-10-27 23:40:17.723 [INFO][3799] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8" host="localhost" Oct 27 23:40:17.751179 containerd[1544]: 2025-10-27 23:40:17.723 [INFO][3799] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8" host="localhost" Oct 27 23:40:17.751179 containerd[1544]: 2025-10-27 23:40:17.723 [INFO][3799] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 23:40:17.751179 containerd[1544]: 2025-10-27 23:40:17.723 [INFO][3799] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8" HandleID="k8s-pod-network.292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8" Workload="localhost-k8s-whisker--dd8bcbbfb--wngl2-eth0" Oct 27 23:40:17.751300 containerd[1544]: 2025-10-27 23:40:17.726 [INFO][3784] cni-plugin/k8s.go 418: Populated endpoint ContainerID="292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8" Namespace="calico-system" Pod="whisker-dd8bcbbfb-wngl2" WorkloadEndpoint="localhost-k8s-whisker--dd8bcbbfb--wngl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--dd8bcbbfb--wngl2-eth0", GenerateName:"whisker-dd8bcbbfb-", Namespace:"calico-system", SelfLink:"", UID:"5a064a63-743b-4222-a219-9a1bbd1a466a", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, 
time.October, 27, 23, 40, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"dd8bcbbfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-dd8bcbbfb-wngl2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali67219270358", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 23:40:17.751300 containerd[1544]: 2025-10-27 23:40:17.727 [INFO][3784] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8" Namespace="calico-system" Pod="whisker-dd8bcbbfb-wngl2" WorkloadEndpoint="localhost-k8s-whisker--dd8bcbbfb--wngl2-eth0" Oct 27 23:40:17.751370 containerd[1544]: 2025-10-27 23:40:17.727 [INFO][3784] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67219270358 ContainerID="292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8" Namespace="calico-system" Pod="whisker-dd8bcbbfb-wngl2" WorkloadEndpoint="localhost-k8s-whisker--dd8bcbbfb--wngl2-eth0" Oct 27 23:40:17.751370 containerd[1544]: 2025-10-27 23:40:17.736 [INFO][3784] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8" Namespace="calico-system" Pod="whisker-dd8bcbbfb-wngl2" 
WorkloadEndpoint="localhost-k8s-whisker--dd8bcbbfb--wngl2-eth0" Oct 27 23:40:17.751407 containerd[1544]: 2025-10-27 23:40:17.736 [INFO][3784] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8" Namespace="calico-system" Pod="whisker-dd8bcbbfb-wngl2" WorkloadEndpoint="localhost-k8s-whisker--dd8bcbbfb--wngl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--dd8bcbbfb--wngl2-eth0", GenerateName:"whisker-dd8bcbbfb-", Namespace:"calico-system", SelfLink:"", UID:"5a064a63-743b-4222-a219-9a1bbd1a466a", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 23, 40, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"dd8bcbbfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8", Pod:"whisker-dd8bcbbfb-wngl2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali67219270358", MAC:"e6:c2:a8:4e:5e:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 23:40:17.751452 containerd[1544]: 2025-10-27 23:40:17.747 [INFO][3784] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8" Namespace="calico-system" Pod="whisker-dd8bcbbfb-wngl2" WorkloadEndpoint="localhost-k8s-whisker--dd8bcbbfb--wngl2-eth0" Oct 27 23:40:17.793850 containerd[1544]: time="2025-10-27T23:40:17.793304361Z" level=info msg="connecting to shim 292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8" address="unix:///run/containerd/s/98771dab4709653aab5603266bba914fe3895706c02887fe2020b3d4568fb15e" namespace=k8s.io protocol=ttrpc version=3 Oct 27 23:40:17.819972 systemd[1]: Started cri-containerd-292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8.scope - libcontainer container 292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8. Oct 27 23:40:17.833062 systemd-resolved[1361]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 23:40:17.854320 containerd[1544]: time="2025-10-27T23:40:17.854250112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dd8bcbbfb-wngl2,Uid:5a064a63-743b-4222-a219-9a1bbd1a466a,Namespace:calico-system,Attempt:0,} returns sandbox id \"292fd07ec9500358a1a5b00d579fd5babca72661175dac9067ecbba7eab297b8\"" Oct 27 23:40:17.858872 containerd[1544]: time="2025-10-27T23:40:17.858828635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 23:40:18.054541 kubelet[2673]: I1027 23:40:18.054340 2673 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f28f980f-552d-4708-ba17-813aa6dc44ab" path="/var/lib/kubelet/pods/f28f980f-552d-4708-ba17-813aa6dc44ab/volumes" Oct 27 23:40:18.056236 containerd[1544]: time="2025-10-27T23:40:18.056059774Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 23:40:18.132872 containerd[1544]: time="2025-10-27T23:40:18.132572250Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 23:40:18.133284 containerd[1544]: time="2025-10-27T23:40:18.132601890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 23:40:18.133633 kubelet[2673]: E1027 23:40:18.133541 2673 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 23:40:18.133633 kubelet[2673]: E1027 23:40:18.133611 2673 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 23:40:18.136458 kubelet[2673]: E1027 23:40:18.136392 2673 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6e82d8f682ab4362a5352766130d2560,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c9d5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dd8bcbbfb-wngl2_calico-system(5a064a63-743b-4222-a219-9a1bbd1a466a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 23:40:18.139236 containerd[1544]: time="2025-10-27T23:40:18.139194454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 
23:40:18.163328 kubelet[2673]: E1027 23:40:18.163209 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:18.322156 containerd[1544]: time="2025-10-27T23:40:18.322115981Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcdda18b822fb1d6ab63e6ea8e874c2a6ec62b84adbdac4b44b438188d68a2c4\" id:\"99b944909cfdbcb866a191db607de5ccf6aca9ba951cfcbdc1805a8d833ffbad\" pid:3999 exit_status:1 exited_at:{seconds:1761608418 nanos:321458661}" Oct 27 23:40:18.354416 containerd[1544]: time="2025-10-27T23:40:18.354376717Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 23:40:18.355411 containerd[1544]: time="2025-10-27T23:40:18.355320797Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 23:40:18.355411 containerd[1544]: time="2025-10-27T23:40:18.355380317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 23:40:18.355749 kubelet[2673]: E1027 23:40:18.355705 2673 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 23:40:18.355983 kubelet[2673]: E1027 23:40:18.355868 2673 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 23:40:18.356597 kubelet[2673]: E1027 23:40:18.356411 2673 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9d5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dd8bcbbfb-wngl2_calico-system(5a064a63-743b-4222-a219-9a1bbd1a466a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 23:40:18.357753 kubelet[2673]: E1027 23:40:18.357697 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dd8bcbbfb-wngl2" podUID="5a064a63-743b-4222-a219-9a1bbd1a466a" Oct 27 23:40:18.374006 systemd-networkd[1458]: vxlan.calico: Link UP Oct 27 23:40:18.374012 systemd-networkd[1458]: vxlan.calico: Gained carrier Oct 27 23:40:19.099033 systemd-networkd[1458]: cali67219270358: Gained IPv6LL Oct 27 23:40:19.164949 kubelet[2673]: E1027 23:40:19.164808 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:19.166818 kubelet[2673]: E1027 23:40:19.166412 2673 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dd8bcbbfb-wngl2" podUID="5a064a63-743b-4222-a219-9a1bbd1a466a" Oct 27 23:40:19.246144 containerd[1544]: time="2025-10-27T23:40:19.246071738Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcdda18b822fb1d6ab63e6ea8e874c2a6ec62b84adbdac4b44b438188d68a2c4\" id:\"8e9130a155e1fdaced3af3ced164e03fddd9dee45f6d53dd3e5b793dfb676cf8\" pid:4099 exit_status:1 exited_at:{seconds:1761608419 nanos:245746857}" Oct 27 23:40:20.250956 systemd-networkd[1458]: vxlan.calico: Gained IPv6LL Oct 27 23:40:23.017308 systemd[1]: Started sshd@7-10.0.0.92:22-10.0.0.1:51592.service - OpenSSH per-connection server daemon (10.0.0.1:51592). 
Oct 27 23:40:23.051287 containerd[1544]: time="2025-10-27T23:40:23.051238323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9qxlt,Uid:eca582f5-cc8d-425a-955f-92ba936703d3,Namespace:calico-system,Attempt:0,}" Oct 27 23:40:23.089486 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 51592 ssh2: RSA SHA256:rJd+TU7sFfM9uyplsLTyQyJ9SbIIl66cWvxItQSjr84 Oct 27 23:40:23.091996 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:40:23.104440 systemd-logind[1526]: New session 8 of user core. Oct 27 23:40:23.110975 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 27 23:40:23.192812 systemd-networkd[1458]: cali40b8bcc6f7d: Link UP Oct 27 23:40:23.193949 systemd-networkd[1458]: cali40b8bcc6f7d: Gained carrier Oct 27 23:40:23.219243 containerd[1544]: 2025-10-27 23:40:23.095 [INFO][4123] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--9qxlt-eth0 goldmane-666569f655- calico-system eca582f5-cc8d-425a-955f-92ba936703d3 837 0 2025-10-27 23:40:01 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-9qxlt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali40b8bcc6f7d [] [] }} ContainerID="dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce" Namespace="calico-system" Pod="goldmane-666569f655-9qxlt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9qxlt-" Oct 27 23:40:23.219243 containerd[1544]: 2025-10-27 23:40:23.095 [INFO][4123] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce" Namespace="calico-system" Pod="goldmane-666569f655-9qxlt" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9qxlt-eth0" Oct 27 23:40:23.219243 containerd[1544]: 2025-10-27 23:40:23.137 [INFO][4138] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce" HandleID="k8s-pod-network.dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce" Workload="localhost-k8s-goldmane--666569f655--9qxlt-eth0" Oct 27 23:40:23.219557 containerd[1544]: 2025-10-27 23:40:23.137 [INFO][4138] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce" HandleID="k8s-pod-network.dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce" Workload="localhost-k8s-goldmane--666569f655--9qxlt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c35e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-9qxlt", "timestamp":"2025-10-27 23:40:23.137428273 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 23:40:23.219557 containerd[1544]: 2025-10-27 23:40:23.137 [INFO][4138] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 23:40:23.219557 containerd[1544]: 2025-10-27 23:40:23.137 [INFO][4138] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 23:40:23.219557 containerd[1544]: 2025-10-27 23:40:23.137 [INFO][4138] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 23:40:23.219557 containerd[1544]: 2025-10-27 23:40:23.147 [INFO][4138] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce" host="localhost" Oct 27 23:40:23.219557 containerd[1544]: 2025-10-27 23:40:23.155 [INFO][4138] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 23:40:23.219557 containerd[1544]: 2025-10-27 23:40:23.162 [INFO][4138] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 23:40:23.219557 containerd[1544]: 2025-10-27 23:40:23.165 [INFO][4138] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 23:40:23.219557 containerd[1544]: 2025-10-27 23:40:23.167 [INFO][4138] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 23:40:23.219557 containerd[1544]: 2025-10-27 23:40:23.167 [INFO][4138] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce" host="localhost" Oct 27 23:40:23.220199 containerd[1544]: 2025-10-27 23:40:23.169 [INFO][4138] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce Oct 27 23:40:23.220199 containerd[1544]: 2025-10-27 23:40:23.174 [INFO][4138] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce" host="localhost" Oct 27 23:40:23.220199 containerd[1544]: 2025-10-27 23:40:23.180 [INFO][4138] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce" host="localhost" Oct 27 23:40:23.220199 containerd[1544]: 2025-10-27 23:40:23.180 [INFO][4138] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce" host="localhost" Oct 27 23:40:23.220199 containerd[1544]: 2025-10-27 23:40:23.180 [INFO][4138] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 23:40:23.220199 containerd[1544]: 2025-10-27 23:40:23.180 [INFO][4138] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce" HandleID="k8s-pod-network.dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce" Workload="localhost-k8s-goldmane--666569f655--9qxlt-eth0" Oct 27 23:40:23.220645 containerd[1544]: 2025-10-27 23:40:23.190 [INFO][4123] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce" Namespace="calico-system" Pod="goldmane-666569f655-9qxlt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9qxlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--9qxlt-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"eca582f5-cc8d-425a-955f-92ba936703d3", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 23, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-9qxlt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali40b8bcc6f7d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 23:40:23.220645 containerd[1544]: 2025-10-27 23:40:23.190 [INFO][4123] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce" Namespace="calico-system" Pod="goldmane-666569f655-9qxlt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9qxlt-eth0" Oct 27 23:40:23.220906 containerd[1544]: 2025-10-27 23:40:23.190 [INFO][4123] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40b8bcc6f7d ContainerID="dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce" Namespace="calico-system" Pod="goldmane-666569f655-9qxlt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9qxlt-eth0" Oct 27 23:40:23.220906 containerd[1544]: 2025-10-27 23:40:23.194 [INFO][4123] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce" Namespace="calico-system" Pod="goldmane-666569f655-9qxlt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9qxlt-eth0" Oct 27 23:40:23.221052 containerd[1544]: 2025-10-27 23:40:23.194 [INFO][4123] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce" Namespace="calico-system" Pod="goldmane-666569f655-9qxlt" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9qxlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--9qxlt-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"eca582f5-cc8d-425a-955f-92ba936703d3", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 23, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce", Pod:"goldmane-666569f655-9qxlt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali40b8bcc6f7d", MAC:"86:2a:cc:18:44:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 23:40:23.221213 containerd[1544]: 2025-10-27 23:40:23.212 [INFO][4123] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce" Namespace="calico-system" Pod="goldmane-666569f655-9qxlt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9qxlt-eth0" Oct 27 23:40:23.257306 containerd[1544]: time="2025-10-27T23:40:23.257257835Z" level=info msg="connecting to shim 
dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce" address="unix:///run/containerd/s/892daaca220833bf3bdb295f7c6d70714f9e4518733b44d2b07541a11060df43" namespace=k8s.io protocol=ttrpc version=3 Oct 27 23:40:23.284020 systemd[1]: Started cri-containerd-dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce.scope - libcontainer container dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce. Oct 27 23:40:23.294443 sshd[4143]: Connection closed by 10.0.0.1 port 51592 Oct 27 23:40:23.294305 sshd-session[4120]: pam_unix(sshd:session): session closed for user core Oct 27 23:40:23.299571 systemd[1]: sshd@7-10.0.0.92:22-10.0.0.1:51592.service: Deactivated successfully. Oct 27 23:40:23.301620 systemd[1]: session-8.scope: Deactivated successfully. Oct 27 23:40:23.302530 systemd-logind[1526]: Session 8 logged out. Waiting for processes to exit. Oct 27 23:40:23.305658 systemd-logind[1526]: Removed session 8. Oct 27 23:40:23.307357 systemd-resolved[1361]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 23:40:23.350026 containerd[1544]: time="2025-10-27T23:40:23.349970387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9qxlt,Uid:eca582f5-cc8d-425a-955f-92ba936703d3,Namespace:calico-system,Attempt:0,} returns sandbox id \"dc8cd71fc3d87eeafe6761fba320f6eb980391e1ae5b7c06bd83b08b12f786ce\"" Oct 27 23:40:23.355958 containerd[1544]: time="2025-10-27T23:40:23.355922349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 23:40:23.560101 containerd[1544]: time="2025-10-27T23:40:23.559994300Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 23:40:23.560906 containerd[1544]: time="2025-10-27T23:40:23.560868620Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 23:40:23.560962 containerd[1544]: time="2025-10-27T23:40:23.560911340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 23:40:23.561202 kubelet[2673]: E1027 23:40:23.561159 2673 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 23:40:23.561903 kubelet[2673]: E1027 23:40:23.561217 2673 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 23:40:23.561903 kubelet[2673]: E1027 23:40:23.561370 2673 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rgbnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9qxlt_calico-system(eca582f5-cc8d-425a-955f-92ba936703d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 23:40:23.562820 kubelet[2673]: E1027 23:40:23.562756 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9qxlt" podUID="eca582f5-cc8d-425a-955f-92ba936703d3" Oct 27 23:40:24.050783 kubelet[2673]: E1027 23:40:24.050747 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:24.051191 
containerd[1544]: time="2025-10-27T23:40:24.051145070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lv66f,Uid:8cfb3dc7-ab65-4eb3-9eff-4a740cb4ca27,Namespace:kube-system,Attempt:0,}" Oct 27 23:40:24.169805 systemd-networkd[1458]: calief398693435: Link UP Oct 27 23:40:24.170971 systemd-networkd[1458]: calief398693435: Gained carrier Oct 27 23:40:24.187402 kubelet[2673]: E1027 23:40:24.187217 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9qxlt" podUID="eca582f5-cc8d-425a-955f-92ba936703d3" Oct 27 23:40:24.199264 containerd[1544]: 2025-10-27 23:40:24.098 [INFO][4228] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--lv66f-eth0 coredns-668d6bf9bc- kube-system 8cfb3dc7-ab65-4eb3-9eff-4a740cb4ca27 835 0 2025-10-27 23:39:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-lv66f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calief398693435 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab" Namespace="kube-system" Pod="coredns-668d6bf9bc-lv66f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lv66f-" Oct 27 23:40:24.199264 containerd[1544]: 2025-10-27 23:40:24.098 [INFO][4228] cni-plugin/k8s.go 74: Extracted identifiers 
for CmdAddK8s ContainerID="beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab" Namespace="kube-system" Pod="coredns-668d6bf9bc-lv66f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lv66f-eth0" Oct 27 23:40:24.199264 containerd[1544]: 2025-10-27 23:40:24.126 [INFO][4245] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab" HandleID="k8s-pod-network.beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab" Workload="localhost-k8s-coredns--668d6bf9bc--lv66f-eth0" Oct 27 23:40:24.200516 containerd[1544]: 2025-10-27 23:40:24.127 [INFO][4245] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab" HandleID="k8s-pod-network.beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab" Workload="localhost-k8s-coredns--668d6bf9bc--lv66f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cac0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-lv66f", "timestamp":"2025-10-27 23:40:24.126979494 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 23:40:24.200516 containerd[1544]: 2025-10-27 23:40:24.127 [INFO][4245] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 23:40:24.200516 containerd[1544]: 2025-10-27 23:40:24.127 [INFO][4245] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 23:40:24.200516 containerd[1544]: 2025-10-27 23:40:24.127 [INFO][4245] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 23:40:24.200516 containerd[1544]: 2025-10-27 23:40:24.137 [INFO][4245] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab" host="localhost" Oct 27 23:40:24.200516 containerd[1544]: 2025-10-27 23:40:24.142 [INFO][4245] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 23:40:24.200516 containerd[1544]: 2025-10-27 23:40:24.147 [INFO][4245] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 23:40:24.200516 containerd[1544]: 2025-10-27 23:40:24.149 [INFO][4245] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 23:40:24.200516 containerd[1544]: 2025-10-27 23:40:24.151 [INFO][4245] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 23:40:24.200516 containerd[1544]: 2025-10-27 23:40:24.151 [INFO][4245] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab" host="localhost" Oct 27 23:40:24.200907 containerd[1544]: 2025-10-27 23:40:24.153 [INFO][4245] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab Oct 27 23:40:24.200907 containerd[1544]: 2025-10-27 23:40:24.158 [INFO][4245] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab" host="localhost" Oct 27 23:40:24.200907 containerd[1544]: 2025-10-27 23:40:24.165 [INFO][4245] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab" host="localhost" Oct 27 23:40:24.200907 containerd[1544]: 2025-10-27 23:40:24.165 [INFO][4245] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab" host="localhost" Oct 27 23:40:24.200907 containerd[1544]: 2025-10-27 23:40:24.165 [INFO][4245] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 23:40:24.200907 containerd[1544]: 2025-10-27 23:40:24.165 [INFO][4245] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab" HandleID="k8s-pod-network.beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab" Workload="localhost-k8s-coredns--668d6bf9bc--lv66f-eth0" Oct 27 23:40:24.201545 containerd[1544]: 2025-10-27 23:40:24.167 [INFO][4228] cni-plugin/k8s.go 418: Populated endpoint ContainerID="beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab" Namespace="kube-system" Pod="coredns-668d6bf9bc-lv66f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lv66f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--lv66f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8cfb3dc7-ab65-4eb3-9eff-4a740cb4ca27", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 23, 39, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-lv66f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief398693435", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 23:40:24.202362 containerd[1544]: 2025-10-27 23:40:24.167 [INFO][4228] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab" Namespace="kube-system" Pod="coredns-668d6bf9bc-lv66f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lv66f-eth0" Oct 27 23:40:24.202362 containerd[1544]: 2025-10-27 23:40:24.167 [INFO][4228] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calief398693435 ContainerID="beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab" Namespace="kube-system" Pod="coredns-668d6bf9bc-lv66f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lv66f-eth0" Oct 27 23:40:24.202362 containerd[1544]: 2025-10-27 23:40:24.171 [INFO][4228] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab" Namespace="kube-system" Pod="coredns-668d6bf9bc-lv66f" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lv66f-eth0" Oct 27 23:40:24.202443 containerd[1544]: 2025-10-27 23:40:24.172 [INFO][4228] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab" Namespace="kube-system" Pod="coredns-668d6bf9bc-lv66f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lv66f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--lv66f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8cfb3dc7-ab65-4eb3-9eff-4a740cb4ca27", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 23, 39, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab", Pod:"coredns-668d6bf9bc-lv66f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief398693435", MAC:"f2:70:47:5a:59:6a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 23:40:24.202443 containerd[1544]: 2025-10-27 23:40:24.187 [INFO][4228] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab" Namespace="kube-system" Pod="coredns-668d6bf9bc-lv66f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lv66f-eth0" Oct 27 23:40:24.242460 containerd[1544]: time="2025-10-27T23:40:24.242409652Z" level=info msg="connecting to shim beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab" address="unix:///run/containerd/s/20810a78810b30ae9a452442282addc6a24b350cdd2d4f1c0a5dde69159fe9e4" namespace=k8s.io protocol=ttrpc version=3 Oct 27 23:40:24.282993 systemd[1]: Started cri-containerd-beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab.scope - libcontainer container beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab. 
Oct 27 23:40:24.295841 systemd-resolved[1361]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 23:40:24.322237 containerd[1544]: time="2025-10-27T23:40:24.322122158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lv66f,Uid:8cfb3dc7-ab65-4eb3-9eff-4a740cb4ca27,Namespace:kube-system,Attempt:0,} returns sandbox id \"beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab\"" Oct 27 23:40:24.324073 kubelet[2673]: E1027 23:40:24.324047 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:24.339625 containerd[1544]: time="2025-10-27T23:40:24.339581564Z" level=info msg="CreateContainer within sandbox \"beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 27 23:40:24.352899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount433966962.mount: Deactivated successfully. 
Oct 27 23:40:24.354748 containerd[1544]: time="2025-10-27T23:40:24.354073968Z" level=info msg="Container 8fdcd9e352a2045bcb99d71d29090916e6106ae973c1de4759fd83b400f7cfc4: CDI devices from CRI Config.CDIDevices: []" Oct 27 23:40:24.363559 containerd[1544]: time="2025-10-27T23:40:24.363516291Z" level=info msg="CreateContainer within sandbox \"beb5ac58b5c1ff494ed463b13928af772cd6150f4bd6d54eb5582e4cc6567aab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8fdcd9e352a2045bcb99d71d29090916e6106ae973c1de4759fd83b400f7cfc4\"" Oct 27 23:40:24.364532 containerd[1544]: time="2025-10-27T23:40:24.364503372Z" level=info msg="StartContainer for \"8fdcd9e352a2045bcb99d71d29090916e6106ae973c1de4759fd83b400f7cfc4\"" Oct 27 23:40:24.365475 containerd[1544]: time="2025-10-27T23:40:24.365447892Z" level=info msg="connecting to shim 8fdcd9e352a2045bcb99d71d29090916e6106ae973c1de4759fd83b400f7cfc4" address="unix:///run/containerd/s/20810a78810b30ae9a452442282addc6a24b350cdd2d4f1c0a5dde69159fe9e4" protocol=ttrpc version=3 Oct 27 23:40:24.404981 systemd[1]: Started cri-containerd-8fdcd9e352a2045bcb99d71d29090916e6106ae973c1de4759fd83b400f7cfc4.scope - libcontainer container 8fdcd9e352a2045bcb99d71d29090916e6106ae973c1de4759fd83b400f7cfc4. 
Oct 27 23:40:24.434846 containerd[1544]: time="2025-10-27T23:40:24.434650155Z" level=info msg="StartContainer for \"8fdcd9e352a2045bcb99d71d29090916e6106ae973c1de4759fd83b400f7cfc4\" returns successfully" Oct 27 23:40:24.538925 systemd-networkd[1458]: cali40b8bcc6f7d: Gained IPv6LL Oct 27 23:40:25.050907 containerd[1544]: time="2025-10-27T23:40:25.050853514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c698d47cd-h2pk7,Uid:6f5865ca-3aea-44fa-9144-072fef5dde02,Namespace:calico-system,Attempt:0,}" Oct 27 23:40:25.176460 systemd-networkd[1458]: cali937e7d64b09: Link UP Oct 27 23:40:25.177534 systemd-networkd[1458]: cali937e7d64b09: Gained carrier Oct 27 23:40:25.198381 containerd[1544]: 2025-10-27 23:40:25.094 [INFO][4349] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--c698d47cd--h2pk7-eth0 calico-kube-controllers-c698d47cd- calico-system 6f5865ca-3aea-44fa-9144-072fef5dde02 841 0 2025-10-27 23:40:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c698d47cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-c698d47cd-h2pk7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali937e7d64b09 [] [] }} ContainerID="b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350" Namespace="calico-system" Pod="calico-kube-controllers-c698d47cd-h2pk7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c698d47cd--h2pk7-" Oct 27 23:40:25.198381 containerd[1544]: 2025-10-27 23:40:25.094 [INFO][4349] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350" Namespace="calico-system" 
Pod="calico-kube-controllers-c698d47cd-h2pk7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c698d47cd--h2pk7-eth0" Oct 27 23:40:25.198381 containerd[1544]: 2025-10-27 23:40:25.127 [INFO][4363] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350" HandleID="k8s-pod-network.b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350" Workload="localhost-k8s-calico--kube--controllers--c698d47cd--h2pk7-eth0" Oct 27 23:40:25.198381 containerd[1544]: 2025-10-27 23:40:25.127 [INFO][4363] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350" HandleID="k8s-pod-network.b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350" Workload="localhost-k8s-calico--kube--controllers--c698d47cd--h2pk7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e5590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-c698d47cd-h2pk7", "timestamp":"2025-10-27 23:40:25.127423698 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 23:40:25.198381 containerd[1544]: 2025-10-27 23:40:25.127 [INFO][4363] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 23:40:25.198381 containerd[1544]: 2025-10-27 23:40:25.127 [INFO][4363] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 23:40:25.198381 containerd[1544]: 2025-10-27 23:40:25.127 [INFO][4363] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 23:40:25.198381 containerd[1544]: 2025-10-27 23:40:25.137 [INFO][4363] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350" host="localhost" Oct 27 23:40:25.198381 containerd[1544]: 2025-10-27 23:40:25.144 [INFO][4363] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 23:40:25.198381 containerd[1544]: 2025-10-27 23:40:25.151 [INFO][4363] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 23:40:25.198381 containerd[1544]: 2025-10-27 23:40:25.153 [INFO][4363] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 23:40:25.198381 containerd[1544]: 2025-10-27 23:40:25.155 [INFO][4363] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 23:40:25.198381 containerd[1544]: 2025-10-27 23:40:25.155 [INFO][4363] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350" host="localhost" Oct 27 23:40:25.198381 containerd[1544]: 2025-10-27 23:40:25.157 [INFO][4363] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350 Oct 27 23:40:25.198381 containerd[1544]: 2025-10-27 23:40:25.164 [INFO][4363] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350" host="localhost" Oct 27 23:40:25.198381 containerd[1544]: 2025-10-27 23:40:25.170 [INFO][4363] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350" host="localhost" Oct 27 23:40:25.198381 containerd[1544]: 2025-10-27 23:40:25.170 [INFO][4363] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350" host="localhost" Oct 27 23:40:25.198381 containerd[1544]: 2025-10-27 23:40:25.170 [INFO][4363] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 23:40:25.198381 containerd[1544]: 2025-10-27 23:40:25.170 [INFO][4363] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350" HandleID="k8s-pod-network.b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350" Workload="localhost-k8s-calico--kube--controllers--c698d47cd--h2pk7-eth0" Oct 27 23:40:25.199037 containerd[1544]: 2025-10-27 23:40:25.172 [INFO][4349] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350" Namespace="calico-system" Pod="calico-kube-controllers-c698d47cd-h2pk7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c698d47cd--h2pk7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c698d47cd--h2pk7-eth0", GenerateName:"calico-kube-controllers-c698d47cd-", Namespace:"calico-system", SelfLink:"", UID:"6f5865ca-3aea-44fa-9144-072fef5dde02", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 23, 40, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c698d47cd", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-c698d47cd-h2pk7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali937e7d64b09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 23:40:25.199037 containerd[1544]: 2025-10-27 23:40:25.173 [INFO][4349] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350" Namespace="calico-system" Pod="calico-kube-controllers-c698d47cd-h2pk7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c698d47cd--h2pk7-eth0" Oct 27 23:40:25.199037 containerd[1544]: 2025-10-27 23:40:25.173 [INFO][4349] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali937e7d64b09 ContainerID="b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350" Namespace="calico-system" Pod="calico-kube-controllers-c698d47cd-h2pk7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c698d47cd--h2pk7-eth0" Oct 27 23:40:25.199037 containerd[1544]: 2025-10-27 23:40:25.178 [INFO][4349] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350" Namespace="calico-system" Pod="calico-kube-controllers-c698d47cd-h2pk7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c698d47cd--h2pk7-eth0" Oct 27 23:40:25.199037 containerd[1544]: 2025-10-27 
23:40:25.179 [INFO][4349] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350" Namespace="calico-system" Pod="calico-kube-controllers-c698d47cd-h2pk7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c698d47cd--h2pk7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c698d47cd--h2pk7-eth0", GenerateName:"calico-kube-controllers-c698d47cd-", Namespace:"calico-system", SelfLink:"", UID:"6f5865ca-3aea-44fa-9144-072fef5dde02", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 23, 40, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c698d47cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350", Pod:"calico-kube-controllers-c698d47cd-h2pk7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali937e7d64b09", MAC:"9a:45:d4:95:e1:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 23:40:25.199037 containerd[1544]: 2025-10-27 
23:40:25.191 [INFO][4349] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350" Namespace="calico-system" Pod="calico-kube-controllers-c698d47cd-h2pk7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c698d47cd--h2pk7-eth0" Oct 27 23:40:25.208511 kubelet[2673]: E1027 23:40:25.207683 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:25.209984 kubelet[2673]: E1027 23:40:25.209898 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9qxlt" podUID="eca582f5-cc8d-425a-955f-92ba936703d3" Oct 27 23:40:25.237761 kubelet[2673]: I1027 23:40:25.237558 2673 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lv66f" podStartSLOduration=38.237540331 podStartE2EDuration="38.237540331s" podCreationTimestamp="2025-10-27 23:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:40:25.237016531 +0000 UTC m=+43.287639554" watchObservedRunningTime="2025-10-27 23:40:25.237540331 +0000 UTC m=+43.288163354" Oct 27 23:40:25.242155 containerd[1544]: time="2025-10-27T23:40:25.242025613Z" level=info msg="connecting to shim b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350" 
address="unix:///run/containerd/s/66d563dd2b99b4e198adfbe03ac2cfe523e0e62965a4ad7e75527b22f44c01cd" namespace=k8s.io protocol=ttrpc version=3 Oct 27 23:40:25.266983 systemd[1]: Started cri-containerd-b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350.scope - libcontainer container b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350. Oct 27 23:40:25.283849 systemd-resolved[1361]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 23:40:25.306358 containerd[1544]: time="2025-10-27T23:40:25.306060752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c698d47cd-h2pk7,Uid:6f5865ca-3aea-44fa-9144-072fef5dde02,Namespace:calico-system,Attempt:0,} returns sandbox id \"b4cfd5fdac0c9a409508507bd407e70e0fafa54f65aea00b119a6301bb76f350\"" Oct 27 23:40:25.308237 containerd[1544]: time="2025-10-27T23:40:25.307889033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 23:40:25.506547 containerd[1544]: time="2025-10-27T23:40:25.506444414Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 23:40:25.507327 containerd[1544]: time="2025-10-27T23:40:25.507286654Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 23:40:25.507607 kubelet[2673]: E1027 23:40:25.507552 2673 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 23:40:25.507676 kubelet[2673]: E1027 23:40:25.507623 2673 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 23:40:25.507842 kubelet[2673]: E1027 23:40:25.507795 2673 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5w2bb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c698d47cd-h2pk7_calico-system(6f5865ca-3aea-44fa-9144-072fef5dde02): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 23:40:25.509042 kubelet[2673]: E1027 23:40:25.508982 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-c698d47cd-h2pk7" podUID="6f5865ca-3aea-44fa-9144-072fef5dde02" Oct 27 23:40:25.514976 containerd[1544]: time="2025-10-27T23:40:25.507311494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 23:40:25.691003 systemd-networkd[1458]: calief398693435: Gained IPv6LL Oct 27 23:40:26.210573 kubelet[2673]: E1027 23:40:26.210501 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:26.213262 kubelet[2673]: E1027 23:40:26.211322 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c698d47cd-h2pk7" podUID="6f5865ca-3aea-44fa-9144-072fef5dde02" Oct 27 23:40:27.034916 systemd-networkd[1458]: cali937e7d64b09: Gained IPv6LL Oct 27 23:40:27.050799 kubelet[2673]: E1027 23:40:27.050675 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:27.051739 containerd[1544]: time="2025-10-27T23:40:27.051184225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xxn6f,Uid:95482e7a-9a23-4b52-8975-1de0f3e95885,Namespace:kube-system,Attempt:0,}" Oct 27 23:40:27.051739 containerd[1544]: time="2025-10-27T23:40:27.051343865Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6f59658cf9-w94zx,Uid:8e5a6c05-e53d-438e-b01a-ad6295f7d8ed,Namespace:calico-apiserver,Attempt:0,}" Oct 27 23:40:27.052152 containerd[1544]: time="2025-10-27T23:40:27.051957585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f59658cf9-fcbh6,Uid:7eea9394-537e-498e-8ee0-ada3b969c833,Namespace:calico-apiserver,Attempt:0,}" Oct 27 23:40:27.216203 kubelet[2673]: E1027 23:40:27.216115 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:27.217570 kubelet[2673]: E1027 23:40:27.217474 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c698d47cd-h2pk7" podUID="6f5865ca-3aea-44fa-9144-072fef5dde02" Oct 27 23:40:27.301635 systemd-networkd[1458]: cali38ddad7e075: Link UP Oct 27 23:40:27.302277 systemd-networkd[1458]: cali38ddad7e075: Gained carrier Oct 27 23:40:27.325075 containerd[1544]: 2025-10-27 23:40:27.127 [INFO][4438] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f59658cf9--w94zx-eth0 calico-apiserver-6f59658cf9- calico-apiserver 8e5a6c05-e53d-438e-b01a-ad6295f7d8ed 842 0 2025-10-27 23:39:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f59658cf9 projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f59658cf9-w94zx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali38ddad7e075 [] [] }} ContainerID="dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2" Namespace="calico-apiserver" Pod="calico-apiserver-6f59658cf9-w94zx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f59658cf9--w94zx-" Oct 27 23:40:27.325075 containerd[1544]: 2025-10-27 23:40:27.128 [INFO][4438] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2" Namespace="calico-apiserver" Pod="calico-apiserver-6f59658cf9-w94zx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f59658cf9--w94zx-eth0" Oct 27 23:40:27.325075 containerd[1544]: 2025-10-27 23:40:27.166 [INFO][4473] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2" HandleID="k8s-pod-network.dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2" Workload="localhost-k8s-calico--apiserver--6f59658cf9--w94zx-eth0" Oct 27 23:40:27.325075 containerd[1544]: 2025-10-27 23:40:27.166 [INFO][4473] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2" HandleID="k8s-pod-network.dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2" Workload="localhost-k8s-calico--apiserver--6f59658cf9--w94zx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400012f5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6f59658cf9-w94zx", "timestamp":"2025-10-27 23:40:27.166192096 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 23:40:27.325075 containerd[1544]: 2025-10-27 23:40:27.166 [INFO][4473] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 23:40:27.325075 containerd[1544]: 2025-10-27 23:40:27.166 [INFO][4473] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 23:40:27.325075 containerd[1544]: 2025-10-27 23:40:27.166 [INFO][4473] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 23:40:27.325075 containerd[1544]: 2025-10-27 23:40:27.184 [INFO][4473] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2" host="localhost" Oct 27 23:40:27.325075 containerd[1544]: 2025-10-27 23:40:27.196 [INFO][4473] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 23:40:27.325075 containerd[1544]: 2025-10-27 23:40:27.203 [INFO][4473] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 23:40:27.325075 containerd[1544]: 2025-10-27 23:40:27.206 [INFO][4473] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 23:40:27.325075 containerd[1544]: 2025-10-27 23:40:27.208 [INFO][4473] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 23:40:27.325075 containerd[1544]: 2025-10-27 23:40:27.208 [INFO][4473] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2" host="localhost" Oct 27 23:40:27.325075 containerd[1544]: 2025-10-27 23:40:27.210 [INFO][4473] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2 Oct 27 23:40:27.325075 containerd[1544]: 2025-10-27 23:40:27.273 [INFO][4473] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2" host="localhost" Oct 27 23:40:27.325075 containerd[1544]: 2025-10-27 23:40:27.293 [INFO][4473] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2" host="localhost" Oct 27 23:40:27.325075 containerd[1544]: 2025-10-27 23:40:27.293 [INFO][4473] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2" host="localhost" Oct 27 23:40:27.325075 containerd[1544]: 2025-10-27 23:40:27.293 [INFO][4473] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 23:40:27.325075 containerd[1544]: 2025-10-27 23:40:27.293 [INFO][4473] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2" HandleID="k8s-pod-network.dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2" Workload="localhost-k8s-calico--apiserver--6f59658cf9--w94zx-eth0" Oct 27 23:40:27.325653 containerd[1544]: 2025-10-27 23:40:27.299 [INFO][4438] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2" Namespace="calico-apiserver" Pod="calico-apiserver-6f59658cf9-w94zx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f59658cf9--w94zx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f59658cf9--w94zx-eth0", GenerateName:"calico-apiserver-6f59658cf9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e5a6c05-e53d-438e-b01a-ad6295f7d8ed", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 23, 39, 57, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f59658cf9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f59658cf9-w94zx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali38ddad7e075", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 23:40:27.325653 containerd[1544]: 2025-10-27 23:40:27.299 [INFO][4438] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2" Namespace="calico-apiserver" Pod="calico-apiserver-6f59658cf9-w94zx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f59658cf9--w94zx-eth0" Oct 27 23:40:27.325653 containerd[1544]: 2025-10-27 23:40:27.299 [INFO][4438] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38ddad7e075 ContainerID="dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2" Namespace="calico-apiserver" Pod="calico-apiserver-6f59658cf9-w94zx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f59658cf9--w94zx-eth0" Oct 27 23:40:27.325653 containerd[1544]: 2025-10-27 23:40:27.302 [INFO][4438] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2" Namespace="calico-apiserver" Pod="calico-apiserver-6f59658cf9-w94zx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f59658cf9--w94zx-eth0" Oct 27 23:40:27.325653 containerd[1544]: 2025-10-27 23:40:27.303 [INFO][4438] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2" Namespace="calico-apiserver" Pod="calico-apiserver-6f59658cf9-w94zx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f59658cf9--w94zx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f59658cf9--w94zx-eth0", GenerateName:"calico-apiserver-6f59658cf9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e5a6c05-e53d-438e-b01a-ad6295f7d8ed", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 23, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f59658cf9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2", Pod:"calico-apiserver-6f59658cf9-w94zx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali38ddad7e075", MAC:"3a:8f:2a:8c:ff:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 23:40:27.325653 containerd[1544]: 2025-10-27 23:40:27.320 [INFO][4438] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2" Namespace="calico-apiserver" Pod="calico-apiserver-6f59658cf9-w94zx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f59658cf9--w94zx-eth0" Oct 27 23:40:27.352258 containerd[1544]: time="2025-10-27T23:40:27.352172265Z" level=info msg="connecting to shim dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2" address="unix:///run/containerd/s/6c963baa3c882275c0299419dd0614a90b519a006ab69d8a614aac60a07f1f3d" namespace=k8s.io protocol=ttrpc version=3 Oct 27 23:40:27.358208 systemd-networkd[1458]: calicbf4b633503: Link UP Oct 27 23:40:27.359751 systemd-networkd[1458]: calicbf4b633503: Gained carrier Oct 27 23:40:27.382826 containerd[1544]: 2025-10-27 23:40:27.128 [INFO][4431] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--xxn6f-eth0 coredns-668d6bf9bc- kube-system 95482e7a-9a23-4b52-8975-1de0f3e95885 838 0 2025-10-27 23:39:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-xxn6f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicbf4b633503 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39" Namespace="kube-system" Pod="coredns-668d6bf9bc-xxn6f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xxn6f-" Oct 27 23:40:27.382826 containerd[1544]: 2025-10-27 
23:40:27.128 [INFO][4431] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39" Namespace="kube-system" Pod="coredns-668d6bf9bc-xxn6f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xxn6f-eth0" Oct 27 23:40:27.382826 containerd[1544]: 2025-10-27 23:40:27.166 [INFO][4475] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39" HandleID="k8s-pod-network.058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39" Workload="localhost-k8s-coredns--668d6bf9bc--xxn6f-eth0" Oct 27 23:40:27.382826 containerd[1544]: 2025-10-27 23:40:27.166 [INFO][4475] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39" HandleID="k8s-pod-network.058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39" Workload="localhost-k8s-coredns--668d6bf9bc--xxn6f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a1700), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-xxn6f", "timestamp":"2025-10-27 23:40:27.166655896 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 23:40:27.382826 containerd[1544]: 2025-10-27 23:40:27.166 [INFO][4475] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 23:40:27.382826 containerd[1544]: 2025-10-27 23:40:27.293 [INFO][4475] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 23:40:27.382826 containerd[1544]: 2025-10-27 23:40:27.294 [INFO][4475] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 23:40:27.382826 containerd[1544]: 2025-10-27 23:40:27.310 [INFO][4475] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39" host="localhost" Oct 27 23:40:27.382826 containerd[1544]: 2025-10-27 23:40:27.315 [INFO][4475] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 23:40:27.382826 containerd[1544]: 2025-10-27 23:40:27.324 [INFO][4475] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 23:40:27.382826 containerd[1544]: 2025-10-27 23:40:27.327 [INFO][4475] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 23:40:27.382826 containerd[1544]: 2025-10-27 23:40:27.330 [INFO][4475] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 23:40:27.382826 containerd[1544]: 2025-10-27 23:40:27.330 [INFO][4475] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39" host="localhost" Oct 27 23:40:27.382826 containerd[1544]: 2025-10-27 23:40:27.332 [INFO][4475] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39 Oct 27 23:40:27.382826 containerd[1544]: 2025-10-27 23:40:27.337 [INFO][4475] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39" host="localhost" Oct 27 23:40:27.382826 containerd[1544]: 2025-10-27 23:40:27.350 [INFO][4475] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39" host="localhost" Oct 27 23:40:27.382826 containerd[1544]: 2025-10-27 23:40:27.350 [INFO][4475] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39" host="localhost" Oct 27 23:40:27.382826 containerd[1544]: 2025-10-27 23:40:27.350 [INFO][4475] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 23:40:27.382826 containerd[1544]: 2025-10-27 23:40:27.350 [INFO][4475] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39" HandleID="k8s-pod-network.058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39" Workload="localhost-k8s-coredns--668d6bf9bc--xxn6f-eth0" Oct 27 23:40:27.383332 containerd[1544]: 2025-10-27 23:40:27.355 [INFO][4431] cni-plugin/k8s.go 418: Populated endpoint ContainerID="058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39" Namespace="kube-system" Pod="coredns-668d6bf9bc-xxn6f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xxn6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xxn6f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"95482e7a-9a23-4b52-8975-1de0f3e95885", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 23, 39, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-xxn6f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicbf4b633503", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 23:40:27.383332 containerd[1544]: 2025-10-27 23:40:27.355 [INFO][4431] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39" Namespace="kube-system" Pod="coredns-668d6bf9bc-xxn6f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xxn6f-eth0" Oct 27 23:40:27.383332 containerd[1544]: 2025-10-27 23:40:27.355 [INFO][4431] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicbf4b633503 ContainerID="058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39" Namespace="kube-system" Pod="coredns-668d6bf9bc-xxn6f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xxn6f-eth0" Oct 27 23:40:27.383332 containerd[1544]: 2025-10-27 23:40:27.359 [INFO][4431] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39" Namespace="kube-system" Pod="coredns-668d6bf9bc-xxn6f" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xxn6f-eth0" Oct 27 23:40:27.383332 containerd[1544]: 2025-10-27 23:40:27.360 [INFO][4431] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39" Namespace="kube-system" Pod="coredns-668d6bf9bc-xxn6f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xxn6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xxn6f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"95482e7a-9a23-4b52-8975-1de0f3e95885", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 23, 39, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39", Pod:"coredns-668d6bf9bc-xxn6f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicbf4b633503", MAC:"aa:89:92:7f:f8:f0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 23:40:27.383332 containerd[1544]: 2025-10-27 23:40:27.377 [INFO][4431] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39" Namespace="kube-system" Pod="coredns-668d6bf9bc-xxn6f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xxn6f-eth0" Oct 27 23:40:27.388013 systemd[1]: Started cri-containerd-dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2.scope - libcontainer container dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2. Oct 27 23:40:27.403344 systemd-resolved[1361]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 23:40:27.410573 containerd[1544]: time="2025-10-27T23:40:27.410514401Z" level=info msg="connecting to shim 058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39" address="unix:///run/containerd/s/bcfcd3e941e475501afec7f665087a3dfc4fb19def02cc925ee576a219e8b7dd" namespace=k8s.io protocol=ttrpc version=3 Oct 27 23:40:27.438492 containerd[1544]: time="2025-10-27T23:40:27.438447769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f59658cf9-w94zx,Uid:8e5a6c05-e53d-438e-b01a-ad6295f7d8ed,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"dbc360b64c27a2c52a627f9664f228f4e477a52cb31bc4e81139dbb20d44a9b2\"" Oct 27 23:40:27.441124 containerd[1544]: time="2025-10-27T23:40:27.440929649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 23:40:27.449037 systemd[1]: Started cri-containerd-058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39.scope - libcontainer container 
058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39. Oct 27 23:40:27.468790 systemd-networkd[1458]: cali5a0a2940228: Link UP Oct 27 23:40:27.469282 systemd-networkd[1458]: cali5a0a2940228: Gained carrier Oct 27 23:40:27.477804 systemd-resolved[1361]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 23:40:27.489906 containerd[1544]: 2025-10-27 23:40:27.157 [INFO][4460] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f59658cf9--fcbh6-eth0 calico-apiserver-6f59658cf9- calico-apiserver 7eea9394-537e-498e-8ee0-ada3b969c833 840 0 2025-10-27 23:39:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f59658cf9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f59658cf9-fcbh6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5a0a2940228 [] [] }} ContainerID="d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7" Namespace="calico-apiserver" Pod="calico-apiserver-6f59658cf9-fcbh6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f59658cf9--fcbh6-" Oct 27 23:40:27.489906 containerd[1544]: 2025-10-27 23:40:27.158 [INFO][4460] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7" Namespace="calico-apiserver" Pod="calico-apiserver-6f59658cf9-fcbh6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f59658cf9--fcbh6-eth0" Oct 27 23:40:27.489906 containerd[1544]: 2025-10-27 23:40:27.197 [INFO][4492] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7" 
HandleID="k8s-pod-network.d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7" Workload="localhost-k8s-calico--apiserver--6f59658cf9--fcbh6-eth0" Oct 27 23:40:27.489906 containerd[1544]: 2025-10-27 23:40:27.197 [INFO][4492] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7" HandleID="k8s-pod-network.d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7" Workload="localhost-k8s-calico--apiserver--6f59658cf9--fcbh6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d5c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6f59658cf9-fcbh6", "timestamp":"2025-10-27 23:40:27.197321184 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 23:40:27.489906 containerd[1544]: 2025-10-27 23:40:27.197 [INFO][4492] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 23:40:27.489906 containerd[1544]: 2025-10-27 23:40:27.350 [INFO][4492] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 23:40:27.489906 containerd[1544]: 2025-10-27 23:40:27.350 [INFO][4492] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 23:40:27.489906 containerd[1544]: 2025-10-27 23:40:27.411 [INFO][4492] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7" host="localhost" Oct 27 23:40:27.489906 containerd[1544]: 2025-10-27 23:40:27.418 [INFO][4492] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 23:40:27.489906 containerd[1544]: 2025-10-27 23:40:27.428 [INFO][4492] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 23:40:27.489906 containerd[1544]: 2025-10-27 23:40:27.432 [INFO][4492] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 23:40:27.489906 containerd[1544]: 2025-10-27 23:40:27.436 [INFO][4492] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 23:40:27.489906 containerd[1544]: 2025-10-27 23:40:27.436 [INFO][4492] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7" host="localhost" Oct 27 23:40:27.489906 containerd[1544]: 2025-10-27 23:40:27.441 [INFO][4492] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7 Oct 27 23:40:27.489906 containerd[1544]: 2025-10-27 23:40:27.451 [INFO][4492] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7" host="localhost" Oct 27 23:40:27.489906 containerd[1544]: 2025-10-27 23:40:27.459 [INFO][4492] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7" host="localhost" Oct 27 23:40:27.489906 containerd[1544]: 2025-10-27 23:40:27.459 [INFO][4492] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7" host="localhost" Oct 27 23:40:27.489906 containerd[1544]: 2025-10-27 23:40:27.460 [INFO][4492] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 23:40:27.489906 containerd[1544]: 2025-10-27 23:40:27.460 [INFO][4492] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7" HandleID="k8s-pod-network.d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7" Workload="localhost-k8s-calico--apiserver--6f59658cf9--fcbh6-eth0" Oct 27 23:40:27.490421 containerd[1544]: 2025-10-27 23:40:27.464 [INFO][4460] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7" Namespace="calico-apiserver" Pod="calico-apiserver-6f59658cf9-fcbh6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f59658cf9--fcbh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f59658cf9--fcbh6-eth0", GenerateName:"calico-apiserver-6f59658cf9-", Namespace:"calico-apiserver", SelfLink:"", UID:"7eea9394-537e-498e-8ee0-ada3b969c833", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 23, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f59658cf9", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f59658cf9-fcbh6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a0a2940228", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 23:40:27.490421 containerd[1544]: 2025-10-27 23:40:27.464 [INFO][4460] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7" Namespace="calico-apiserver" Pod="calico-apiserver-6f59658cf9-fcbh6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f59658cf9--fcbh6-eth0" Oct 27 23:40:27.490421 containerd[1544]: 2025-10-27 23:40:27.465 [INFO][4460] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a0a2940228 ContainerID="d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7" Namespace="calico-apiserver" Pod="calico-apiserver-6f59658cf9-fcbh6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f59658cf9--fcbh6-eth0" Oct 27 23:40:27.490421 containerd[1544]: 2025-10-27 23:40:27.469 [INFO][4460] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7" Namespace="calico-apiserver" Pod="calico-apiserver-6f59658cf9-fcbh6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f59658cf9--fcbh6-eth0" Oct 27 23:40:27.490421 containerd[1544]: 2025-10-27 23:40:27.469 [INFO][4460] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7" Namespace="calico-apiserver" Pod="calico-apiserver-6f59658cf9-fcbh6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f59658cf9--fcbh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f59658cf9--fcbh6-eth0", GenerateName:"calico-apiserver-6f59658cf9-", Namespace:"calico-apiserver", SelfLink:"", UID:"7eea9394-537e-498e-8ee0-ada3b969c833", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 23, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f59658cf9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7", Pod:"calico-apiserver-6f59658cf9-fcbh6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a0a2940228", MAC:"c2:0e:68:d1:2a:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 23:40:27.490421 containerd[1544]: 2025-10-27 23:40:27.486 [INFO][4460] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7" Namespace="calico-apiserver" Pod="calico-apiserver-6f59658cf9-fcbh6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f59658cf9--fcbh6-eth0" Oct 27 23:40:27.511830 containerd[1544]: time="2025-10-27T23:40:27.511348948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xxn6f,Uid:95482e7a-9a23-4b52-8975-1de0f3e95885,Namespace:kube-system,Attempt:0,} returns sandbox id \"058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39\"" Oct 27 23:40:27.512396 kubelet[2673]: E1027 23:40:27.512363 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:27.515741 containerd[1544]: time="2025-10-27T23:40:27.515679629Z" level=info msg="CreateContainer within sandbox \"058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 27 23:40:27.521743 containerd[1544]: time="2025-10-27T23:40:27.521698231Z" level=info msg="connecting to shim d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7" address="unix:///run/containerd/s/853c2c4977b419b68862b61f129d7606fc13feb275958ab2e83b1b3732158df2" namespace=k8s.io protocol=ttrpc version=3 Oct 27 23:40:27.524342 containerd[1544]: time="2025-10-27T23:40:27.524292552Z" level=info msg="Container b2779b1a1c281ac9020d967464ff8dd2c2a0f5eec18f238680d9cfe3ec51f604: CDI devices from CRI Config.CDIDevices: []" Oct 27 23:40:27.534760 containerd[1544]: time="2025-10-27T23:40:27.534714274Z" level=info msg="CreateContainer within sandbox \"058159f5a5ee02b758e1601e544886d26d12fbe3216c8ad4d23d8914168b0f39\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b2779b1a1c281ac9020d967464ff8dd2c2a0f5eec18f238680d9cfe3ec51f604\"" Oct 27 23:40:27.536561 containerd[1544]: time="2025-10-27T23:40:27.536523595Z" 
level=info msg="StartContainer for \"b2779b1a1c281ac9020d967464ff8dd2c2a0f5eec18f238680d9cfe3ec51f604\"" Oct 27 23:40:27.538528 containerd[1544]: time="2025-10-27T23:40:27.538489115Z" level=info msg="connecting to shim b2779b1a1c281ac9020d967464ff8dd2c2a0f5eec18f238680d9cfe3ec51f604" address="unix:///run/containerd/s/bcfcd3e941e475501afec7f665087a3dfc4fb19def02cc925ee576a219e8b7dd" protocol=ttrpc version=3 Oct 27 23:40:27.549994 systemd[1]: Started cri-containerd-d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7.scope - libcontainer container d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7. Oct 27 23:40:27.558928 systemd[1]: Started cri-containerd-b2779b1a1c281ac9020d967464ff8dd2c2a0f5eec18f238680d9cfe3ec51f604.scope - libcontainer container b2779b1a1c281ac9020d967464ff8dd2c2a0f5eec18f238680d9cfe3ec51f604. Oct 27 23:40:27.568446 systemd-resolved[1361]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 23:40:27.597431 containerd[1544]: time="2025-10-27T23:40:27.597381011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f59658cf9-fcbh6,Uid:7eea9394-537e-498e-8ee0-ada3b969c833,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d06b2debca384a88cc2ef981798cb727a5ccd2d1b47607d8456ed1ba73e783c7\"" Oct 27 23:40:27.599583 containerd[1544]: time="2025-10-27T23:40:27.599284332Z" level=info msg="StartContainer for \"b2779b1a1c281ac9020d967464ff8dd2c2a0f5eec18f238680d9cfe3ec51f604\" returns successfully" Oct 27 23:40:27.638129 containerd[1544]: time="2025-10-27T23:40:27.638081462Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 23:40:27.638937 containerd[1544]: time="2025-10-27T23:40:27.638896862Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 23:40:27.638993 containerd[1544]: time="2025-10-27T23:40:27.638955262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 23:40:27.639151 kubelet[2673]: E1027 23:40:27.639103 2673 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 23:40:27.639197 kubelet[2673]: E1027 23:40:27.639153 2673 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 23:40:27.639722 kubelet[2673]: E1027 23:40:27.639371 2673 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9t9mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f59658cf9-w94zx_calico-apiserver(8e5a6c05-e53d-438e-b01a-ad6295f7d8ed): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 23:40:27.639930 containerd[1544]: time="2025-10-27T23:40:27.639481663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 23:40:27.640673 kubelet[2673]: E1027 23:40:27.640629 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f59658cf9-w94zx" podUID="8e5a6c05-e53d-438e-b01a-ad6295f7d8ed" Oct 27 23:40:27.836683 containerd[1544]: time="2025-10-27T23:40:27.836563876Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 23:40:27.837831 containerd[1544]: time="2025-10-27T23:40:27.837788316Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 23:40:27.837989 containerd[1544]: time="2025-10-27T23:40:27.837855876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 23:40:27.838046 kubelet[2673]: E1027 23:40:27.838004 2673 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 23:40:27.838087 kubelet[2673]: E1027 23:40:27.838059 2673 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 23:40:27.838246 kubelet[2673]: E1027 23:40:27.838198 2673 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bdrp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f59658cf9-fcbh6_calico-apiserver(7eea9394-537e-498e-8ee0-ada3b969c833): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 23:40:27.839593 kubelet[2673]: E1027 23:40:27.839538 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f59658cf9-fcbh6" podUID="7eea9394-537e-498e-8ee0-ada3b969c833" Oct 27 23:40:28.051678 containerd[1544]: time="2025-10-27T23:40:28.051362452Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-shccv,Uid:c6363763-fd3b-49e5-96bd-c0e1b8f05225,Namespace:calico-system,Attempt:0,}" Oct 27 23:40:28.163106 systemd-networkd[1458]: calib6a81838a97: Link UP Oct 27 23:40:28.163958 systemd-networkd[1458]: calib6a81838a97: Gained carrier Oct 27 23:40:28.179936 containerd[1544]: 2025-10-27 23:40:28.089 [INFO][4697] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--shccv-eth0 csi-node-driver- calico-system c6363763-fd3b-49e5-96bd-c0e1b8f05225 722 0 2025-10-27 23:40:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-shccv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib6a81838a97 [] [] }} ContainerID="688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6" Namespace="calico-system" Pod="csi-node-driver-shccv" WorkloadEndpoint="localhost-k8s-csi--node--driver--shccv-" Oct 27 23:40:28.179936 containerd[1544]: 2025-10-27 23:40:28.089 [INFO][4697] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6" Namespace="calico-system" Pod="csi-node-driver-shccv" WorkloadEndpoint="localhost-k8s-csi--node--driver--shccv-eth0" Oct 27 23:40:28.179936 containerd[1544]: 2025-10-27 23:40:28.116 [INFO][4711] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6" HandleID="k8s-pod-network.688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6" Workload="localhost-k8s-csi--node--driver--shccv-eth0" Oct 27 23:40:28.179936 containerd[1544]: 2025-10-27 
23:40:28.116 [INFO][4711] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6" HandleID="k8s-pod-network.688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6" Workload="localhost-k8s-csi--node--driver--shccv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001aedd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-shccv", "timestamp":"2025-10-27 23:40:28.116299949 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 23:40:28.179936 containerd[1544]: 2025-10-27 23:40:28.116 [INFO][4711] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 23:40:28.179936 containerd[1544]: 2025-10-27 23:40:28.116 [INFO][4711] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 23:40:28.179936 containerd[1544]: 2025-10-27 23:40:28.116 [INFO][4711] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 23:40:28.179936 containerd[1544]: 2025-10-27 23:40:28.127 [INFO][4711] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6" host="localhost" Oct 27 23:40:28.179936 containerd[1544]: 2025-10-27 23:40:28.133 [INFO][4711] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 23:40:28.179936 containerd[1544]: 2025-10-27 23:40:28.138 [INFO][4711] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 23:40:28.179936 containerd[1544]: 2025-10-27 23:40:28.140 [INFO][4711] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 23:40:28.179936 containerd[1544]: 2025-10-27 23:40:28.143 [INFO][4711] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 23:40:28.179936 containerd[1544]: 2025-10-27 23:40:28.143 [INFO][4711] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6" host="localhost" Oct 27 23:40:28.179936 containerd[1544]: 2025-10-27 23:40:28.145 [INFO][4711] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6 Oct 27 23:40:28.179936 containerd[1544]: 2025-10-27 23:40:28.149 [INFO][4711] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6" host="localhost" Oct 27 23:40:28.179936 containerd[1544]: 2025-10-27 23:40:28.156 [INFO][4711] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6" host="localhost" Oct 27 23:40:28.179936 containerd[1544]: 2025-10-27 23:40:28.156 [INFO][4711] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6" host="localhost" Oct 27 23:40:28.179936 containerd[1544]: 2025-10-27 23:40:28.157 [INFO][4711] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 23:40:28.179936 containerd[1544]: 2025-10-27 23:40:28.157 [INFO][4711] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6" HandleID="k8s-pod-network.688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6" Workload="localhost-k8s-csi--node--driver--shccv-eth0" Oct 27 23:40:28.180639 containerd[1544]: 2025-10-27 23:40:28.159 [INFO][4697] cni-plugin/k8s.go 418: Populated endpoint ContainerID="688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6" Namespace="calico-system" Pod="csi-node-driver-shccv" WorkloadEndpoint="localhost-k8s-csi--node--driver--shccv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--shccv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c6363763-fd3b-49e5-96bd-c0e1b8f05225", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 23, 40, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-shccv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib6a81838a97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 23:40:28.180639 containerd[1544]: 2025-10-27 23:40:28.159 [INFO][4697] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6" Namespace="calico-system" Pod="csi-node-driver-shccv" WorkloadEndpoint="localhost-k8s-csi--node--driver--shccv-eth0" Oct 27 23:40:28.180639 containerd[1544]: 2025-10-27 23:40:28.159 [INFO][4697] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6a81838a97 ContainerID="688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6" Namespace="calico-system" Pod="csi-node-driver-shccv" WorkloadEndpoint="localhost-k8s-csi--node--driver--shccv-eth0" Oct 27 23:40:28.180639 containerd[1544]: 2025-10-27 23:40:28.164 [INFO][4697] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6" Namespace="calico-system" Pod="csi-node-driver-shccv" WorkloadEndpoint="localhost-k8s-csi--node--driver--shccv-eth0" Oct 27 23:40:28.180639 containerd[1544]: 2025-10-27 23:40:28.164 [INFO][4697] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6" 
Namespace="calico-system" Pod="csi-node-driver-shccv" WorkloadEndpoint="localhost-k8s-csi--node--driver--shccv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--shccv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c6363763-fd3b-49e5-96bd-c0e1b8f05225", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 23, 40, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6", Pod:"csi-node-driver-shccv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib6a81838a97", MAC:"6e:75:fa:af:1b:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 23:40:28.180639 containerd[1544]: 2025-10-27 23:40:28.176 [INFO][4697] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6" Namespace="calico-system" Pod="csi-node-driver-shccv" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--shccv-eth0" Oct 27 23:40:28.202296 containerd[1544]: time="2025-10-27T23:40:28.202241890Z" level=info msg="connecting to shim 688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6" address="unix:///run/containerd/s/2aa6903361a111ab510b1290290205a75da136acee9b7805628aca7b5744f073" namespace=k8s.io protocol=ttrpc version=3 Oct 27 23:40:28.221004 kubelet[2673]: E1027 23:40:28.220617 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:28.226762 kubelet[2673]: E1027 23:40:28.226719 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f59658cf9-w94zx" podUID="8e5a6c05-e53d-438e-b01a-ad6295f7d8ed" Oct 27 23:40:28.230066 kubelet[2673]: E1027 23:40:28.230036 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:28.231160 kubelet[2673]: E1027 23:40:28.231128 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f59658cf9-fcbh6" podUID="7eea9394-537e-498e-8ee0-ada3b969c833" Oct 27 23:40:28.238458 kubelet[2673]: I1027 23:40:28.238407 2673 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xxn6f" podStartSLOduration=41.238392699 podStartE2EDuration="41.238392699s" podCreationTimestamp="2025-10-27 23:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:40:28.238183339 +0000 UTC m=+46.288806362" watchObservedRunningTime="2025-10-27 23:40:28.238392699 +0000 UTC m=+46.289015722" Oct 27 23:40:28.248806 systemd[1]: Started cri-containerd-688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6.scope - libcontainer container 688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6. Oct 27 23:40:28.274108 systemd-resolved[1361]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 23:40:28.311541 containerd[1544]: time="2025-10-27T23:40:28.311394118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-shccv,Uid:c6363763-fd3b-49e5-96bd-c0e1b8f05225,Namespace:calico-system,Attempt:0,} returns sandbox id \"688d4913d134c65767692a71e22060ebe543e8d830e575d0d0ca9c8afd28b0c6\"" Oct 27 23:40:28.311923 systemd[1]: Started sshd@8-10.0.0.92:22-10.0.0.1:51604.service - OpenSSH per-connection server daemon (10.0.0.1:51604). 
Oct 27 23:40:28.316640 containerd[1544]: time="2025-10-27T23:40:28.316576719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 23:40:28.378920 systemd-networkd[1458]: cali38ddad7e075: Gained IPv6LL Oct 27 23:40:28.384981 sshd[4778]: Accepted publickey for core from 10.0.0.1 port 51604 ssh2: RSA SHA256:rJd+TU7sFfM9uyplsLTyQyJ9SbIIl66cWvxItQSjr84 Oct 27 23:40:28.387052 sshd-session[4778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:40:28.394700 systemd-logind[1526]: New session 9 of user core. Oct 27 23:40:28.403027 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 27 23:40:28.442924 systemd-networkd[1458]: calicbf4b633503: Gained IPv6LL Oct 27 23:40:28.519838 containerd[1544]: time="2025-10-27T23:40:28.519766650Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 23:40:28.520765 containerd[1544]: time="2025-10-27T23:40:28.520722251Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 23:40:28.521051 containerd[1544]: time="2025-10-27T23:40:28.520809211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 23:40:28.521830 kubelet[2673]: E1027 23:40:28.521116 2673 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 23:40:28.521830 kubelet[2673]: E1027 23:40:28.521163 2673 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 23:40:28.521830 kubelet[2673]: E1027 23:40:28.521281 2673 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kjcbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFr
omSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-shccv_calico-system(c6363763-fd3b-49e5-96bd-c0e1b8f05225): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 23:40:28.523695 containerd[1544]: time="2025-10-27T23:40:28.523499691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 23:40:28.556341 sshd[4782]: Connection closed by 10.0.0.1 port 51604 Oct 27 23:40:28.557020 sshd-session[4778]: pam_unix(sshd:session): session closed for user core Oct 27 23:40:28.561254 systemd[1]: sshd@8-10.0.0.92:22-10.0.0.1:51604.service: Deactivated successfully. Oct 27 23:40:28.563058 systemd[1]: session-9.scope: Deactivated successfully. Oct 27 23:40:28.563861 systemd-logind[1526]: Session 9 logged out. Waiting for processes to exit. Oct 27 23:40:28.565158 systemd-logind[1526]: Removed session 9. 
Oct 27 23:40:28.699040 systemd-networkd[1458]: cali5a0a2940228: Gained IPv6LL Oct 27 23:40:28.725361 containerd[1544]: time="2025-10-27T23:40:28.725297262Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 23:40:28.726601 containerd[1544]: time="2025-10-27T23:40:28.726534382Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 23:40:28.726662 containerd[1544]: time="2025-10-27T23:40:28.726619302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 27 23:40:28.726871 kubelet[2673]: E1027 23:40:28.726829 2673 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 23:40:28.726925 kubelet[2673]: E1027 23:40:28.726884 2673 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 23:40:28.727041 kubelet[2673]: E1027 23:40:28.727001 2673 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kjcbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-shccv_calico-system(c6363763-fd3b-49e5-96bd-c0e1b8f05225): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 23:40:28.728232 kubelet[2673]: E1027 23:40:28.728174 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-shccv" podUID="c6363763-fd3b-49e5-96bd-c0e1b8f05225" Oct 27 23:40:29.233824 kubelet[2673]: E1027 23:40:29.233790 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:29.234583 kubelet[2673]: E1027 23:40:29.234553 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f59658cf9-fcbh6" 
podUID="7eea9394-537e-498e-8ee0-ada3b969c833" Oct 27 23:40:29.235437 kubelet[2673]: E1027 23:40:29.235403 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f59658cf9-w94zx" podUID="8e5a6c05-e53d-438e-b01a-ad6295f7d8ed" Oct 27 23:40:29.236727 kubelet[2673]: E1027 23:40:29.236617 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-shccv" podUID="c6363763-fd3b-49e5-96bd-c0e1b8f05225" Oct 27 23:40:29.466911 systemd-networkd[1458]: calib6a81838a97: Gained IPv6LL Oct 27 23:40:30.236721 kubelet[2673]: E1027 23:40:30.235475 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:30.237465 kubelet[2673]: E1027 23:40:30.237426 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-shccv" podUID="c6363763-fd3b-49e5-96bd-c0e1b8f05225" Oct 27 23:40:31.051393 containerd[1544]: time="2025-10-27T23:40:31.051282759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 23:40:31.286490 containerd[1544]: time="2025-10-27T23:40:31.286379248Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 23:40:31.287549 containerd[1544]: time="2025-10-27T23:40:31.287503648Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 23:40:31.287628 containerd[1544]: time="2025-10-27T23:40:31.287608048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active 
requests=0, bytes read=73" Oct 27 23:40:31.287817 kubelet[2673]: E1027 23:40:31.287762 2673 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 23:40:31.288200 kubelet[2673]: E1027 23:40:31.287831 2673 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 23:40:31.288200 kubelet[2673]: E1027 23:40:31.287955 2673 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6e82d8f682ab4362a5352766130d2560,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c9d5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,Re
adOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dd8bcbbfb-wngl2_calico-system(5a064a63-743b-4222-a219-9a1bbd1a466a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 23:40:31.290057 containerd[1544]: time="2025-10-27T23:40:31.290032249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 23:40:31.529467 containerd[1544]: time="2025-10-27T23:40:31.529335258Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 23:40:31.530597 containerd[1544]: time="2025-10-27T23:40:31.530532899Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 23:40:31.530597 containerd[1544]: time="2025-10-27T23:40:31.530572219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 23:40:31.530816 kubelet[2673]: E1027 23:40:31.530735 2673 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 23:40:31.530816 kubelet[2673]: E1027 23:40:31.530812 2673 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 23:40:31.530963 kubelet[2673]: E1027 23:40:31.530925 2673 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9d5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Ca
pabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dd8bcbbfb-wngl2_calico-system(5a064a63-743b-4222-a219-9a1bbd1a466a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 23:40:31.532366 kubelet[2673]: E1027 23:40:31.532312 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dd8bcbbfb-wngl2" podUID="5a064a63-743b-4222-a219-9a1bbd1a466a" Oct 27 23:40:33.574140 systemd[1]: Started sshd@9-10.0.0.92:22-10.0.0.1:53308.service - OpenSSH per-connection server daemon (10.0.0.1:53308). 
Oct 27 23:40:33.634908 sshd[4808]: Accepted publickey for core from 10.0.0.1 port 53308 ssh2: RSA SHA256:rJd+TU7sFfM9uyplsLTyQyJ9SbIIl66cWvxItQSjr84 Oct 27 23:40:33.636342 sshd-session[4808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:40:33.644113 systemd-logind[1526]: New session 10 of user core. Oct 27 23:40:33.653037 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 27 23:40:33.824110 sshd[4811]: Connection closed by 10.0.0.1 port 53308 Oct 27 23:40:33.824469 sshd-session[4808]: pam_unix(sshd:session): session closed for user core Oct 27 23:40:33.836134 systemd[1]: sshd@9-10.0.0.92:22-10.0.0.1:53308.service: Deactivated successfully. Oct 27 23:40:33.837969 systemd[1]: session-10.scope: Deactivated successfully. Oct 27 23:40:33.838675 systemd-logind[1526]: Session 10 logged out. Waiting for processes to exit. Oct 27 23:40:33.841381 systemd[1]: Started sshd@10-10.0.0.92:22-10.0.0.1:53316.service - OpenSSH per-connection server daemon (10.0.0.1:53316). Oct 27 23:40:33.842113 systemd-logind[1526]: Removed session 10. Oct 27 23:40:33.902298 sshd[4827]: Accepted publickey for core from 10.0.0.1 port 53316 ssh2: RSA SHA256:rJd+TU7sFfM9uyplsLTyQyJ9SbIIl66cWvxItQSjr84 Oct 27 23:40:33.903647 sshd-session[4827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:40:33.908521 systemd-logind[1526]: New session 11 of user core. Oct 27 23:40:33.916997 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 27 23:40:34.107788 sshd[4830]: Connection closed by 10.0.0.1 port 53316 Oct 27 23:40:34.108796 sshd-session[4827]: pam_unix(sshd:session): session closed for user core Oct 27 23:40:34.120995 systemd[1]: sshd@10-10.0.0.92:22-10.0.0.1:53316.service: Deactivated successfully. Oct 27 23:40:34.125110 systemd[1]: session-11.scope: Deactivated successfully. Oct 27 23:40:34.126321 systemd-logind[1526]: Session 11 logged out. Waiting for processes to exit. 
Oct 27 23:40:34.131035 systemd[1]: Started sshd@11-10.0.0.92:22-10.0.0.1:53328.service - OpenSSH per-connection server daemon (10.0.0.1:53328). Oct 27 23:40:34.132723 systemd-logind[1526]: Removed session 11. Oct 27 23:40:34.193553 sshd[4842]: Accepted publickey for core from 10.0.0.1 port 53328 ssh2: RSA SHA256:rJd+TU7sFfM9uyplsLTyQyJ9SbIIl66cWvxItQSjr84 Oct 27 23:40:34.194953 sshd-session[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:40:34.199197 systemd-logind[1526]: New session 12 of user core. Oct 27 23:40:34.213991 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 27 23:40:34.341928 sshd[4845]: Connection closed by 10.0.0.1 port 53328 Oct 27 23:40:34.342435 sshd-session[4842]: pam_unix(sshd:session): session closed for user core Oct 27 23:40:34.346194 systemd[1]: sshd@11-10.0.0.92:22-10.0.0.1:53328.service: Deactivated successfully. Oct 27 23:40:34.349360 systemd[1]: session-12.scope: Deactivated successfully. Oct 27 23:40:34.350125 systemd-logind[1526]: Session 12 logged out. Waiting for processes to exit. Oct 27 23:40:34.351397 systemd-logind[1526]: Removed session 12. 
Oct 27 23:40:37.059670 containerd[1544]: time="2025-10-27T23:40:37.059621303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 23:40:37.274851 containerd[1544]: time="2025-10-27T23:40:37.274796813Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 23:40:37.275738 containerd[1544]: time="2025-10-27T23:40:37.275704573Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 23:40:37.275805 containerd[1544]: time="2025-10-27T23:40:37.275758533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 23:40:37.275980 kubelet[2673]: E1027 23:40:37.275943 2673 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 23:40:37.276278 kubelet[2673]: E1027 23:40:37.275995 2673 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 23:40:37.276278 kubelet[2673]: E1027 23:40:37.276117 2673 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rgbnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9qxlt_calico-system(eca582f5-cc8d-425a-955f-92ba936703d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 23:40:37.277620 kubelet[2673]: E1027 23:40:37.277587 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9qxlt" podUID="eca582f5-cc8d-425a-955f-92ba936703d3" Oct 27 23:40:39.365600 systemd[1]: Started sshd@12-10.0.0.92:22-10.0.0.1:56784.service - OpenSSH per-connection server daemon (10.0.0.1:56784). 
Oct 27 23:40:39.424167 sshd[4870]: Accepted publickey for core from 10.0.0.1 port 56784 ssh2: RSA SHA256:rJd+TU7sFfM9uyplsLTyQyJ9SbIIl66cWvxItQSjr84 Oct 27 23:40:39.425500 sshd-session[4870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:40:39.429807 systemd-logind[1526]: New session 13 of user core. Oct 27 23:40:39.441028 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 27 23:40:39.576567 sshd[4873]: Connection closed by 10.0.0.1 port 56784 Oct 27 23:40:39.577377 sshd-session[4870]: pam_unix(sshd:session): session closed for user core Oct 27 23:40:39.581238 systemd[1]: sshd@12-10.0.0.92:22-10.0.0.1:56784.service: Deactivated successfully. Oct 27 23:40:39.584325 systemd[1]: session-13.scope: Deactivated successfully. Oct 27 23:40:39.585007 systemd-logind[1526]: Session 13 logged out. Waiting for processes to exit. Oct 27 23:40:39.586938 systemd-logind[1526]: Removed session 13. Oct 27 23:40:40.053157 containerd[1544]: time="2025-10-27T23:40:40.053078977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 23:40:40.267955 containerd[1544]: time="2025-10-27T23:40:40.267905602Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 23:40:40.268897 containerd[1544]: time="2025-10-27T23:40:40.268860282Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 23:40:40.269014 containerd[1544]: time="2025-10-27T23:40:40.268927762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 23:40:40.269193 kubelet[2673]: E1027 23:40:40.269141 2673 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 23:40:40.269459 kubelet[2673]: E1027 23:40:40.269216 2673 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 23:40:40.269459 kubelet[2673]: E1027 23:40:40.269373 2673 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bdrp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f59658cf9-fcbh6_calico-apiserver(7eea9394-537e-498e-8ee0-ada3b969c833): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 23:40:40.270763 kubelet[2673]: E1027 23:40:40.270730 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f59658cf9-fcbh6" podUID="7eea9394-537e-498e-8ee0-ada3b969c833" Oct 27 23:40:41.051999 containerd[1544]: time="2025-10-27T23:40:41.051945252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 23:40:41.275636 containerd[1544]: 
time="2025-10-27T23:40:41.275582837Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 23:40:41.277116 containerd[1544]: time="2025-10-27T23:40:41.277077717Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 23:40:41.277183 containerd[1544]: time="2025-10-27T23:40:41.277163117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 23:40:41.277327 kubelet[2673]: E1027 23:40:41.277288 2673 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 23:40:41.277571 kubelet[2673]: E1027 23:40:41.277339 2673 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 23:40:41.277571 kubelet[2673]: E1027 23:40:41.277492 2673 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9t9mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f59658cf9-w94zx_calico-apiserver(8e5a6c05-e53d-438e-b01a-ad6295f7d8ed): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 23:40:41.278637 kubelet[2673]: E1027 23:40:41.278608 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f59658cf9-w94zx" podUID="8e5a6c05-e53d-438e-b01a-ad6295f7d8ed" Oct 27 23:40:43.051645 containerd[1544]: time="2025-10-27T23:40:43.051595503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 23:40:43.261587 containerd[1544]: time="2025-10-27T23:40:43.261543323Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 23:40:43.262624 containerd[1544]: time="2025-10-27T23:40:43.262582763Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 23:40:43.262701 containerd[1544]: time="2025-10-27T23:40:43.262660443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 23:40:43.262845 kubelet[2673]: E1027 23:40:43.262804 2673 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 23:40:43.263100 kubelet[2673]: E1027 23:40:43.262858 2673 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 23:40:43.263181 kubelet[2673]: E1027 23:40:43.263109 2673 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kjcbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivileg
eEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-shccv_calico-system(c6363763-fd3b-49e5-96bd-c0e1b8f05225): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 23:40:43.263285 containerd[1544]: time="2025-10-27T23:40:43.263143763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 23:40:43.499453 containerd[1544]: time="2025-10-27T23:40:43.499237305Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 23:40:43.500864 containerd[1544]: time="2025-10-27T23:40:43.500750746Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 23:40:43.500864 containerd[1544]: time="2025-10-27T23:40:43.500808146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 23:40:43.501100 kubelet[2673]: E1027 23:40:43.501065 2673 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 23:40:43.501202 kubelet[2673]: E1027 23:40:43.501184 2673 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 23:40:43.501553 kubelet[2673]: E1027 23:40:43.501475 2673 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5w2bb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec
:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c698d47cd-h2pk7_calico-system(6f5865ca-3aea-44fa-9144-072fef5dde02): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 23:40:43.501665 containerd[1544]: time="2025-10-27T23:40:43.501538906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 23:40:43.502980 kubelet[2673]: E1027 23:40:43.502944 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c698d47cd-h2pk7" podUID="6f5865ca-3aea-44fa-9144-072fef5dde02" Oct 27 23:40:43.692563 containerd[1544]: time="2025-10-27T23:40:43.692501484Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 23:40:43.693402 containerd[1544]: time="2025-10-27T23:40:43.693368564Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 23:40:43.693662 kubelet[2673]: E1027 23:40:43.693586 2673 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 23:40:43.693662 kubelet[2673]: E1027 23:40:43.693655 2673 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 23:40:43.693734 containerd[1544]: time="2025-10-27T23:40:43.693438204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active 
requests=0, bytes read=93" Oct 27 23:40:43.694154 kubelet[2673]: E1027 23:40:43.694113 2673 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kjcbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:n
il,} start failed in pod csi-node-driver-shccv_calico-system(c6363763-fd3b-49e5-96bd-c0e1b8f05225): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 23:40:43.695486 kubelet[2673]: E1027 23:40:43.695390 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-shccv" podUID="c6363763-fd3b-49e5-96bd-c0e1b8f05225" Oct 27 23:40:44.594272 systemd[1]: Started sshd@13-10.0.0.92:22-10.0.0.1:56796.service - OpenSSH per-connection server daemon (10.0.0.1:56796). Oct 27 23:40:44.656802 sshd[4888]: Accepted publickey for core from 10.0.0.1 port 56796 ssh2: RSA SHA256:rJd+TU7sFfM9uyplsLTyQyJ9SbIIl66cWvxItQSjr84 Oct 27 23:40:44.658306 sshd-session[4888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:40:44.662589 systemd-logind[1526]: New session 14 of user core. Oct 27 23:40:44.672085 systemd[1]: Started session-14.scope - Session 14 of User core. 
Oct 27 23:40:44.821778 sshd[4891]: Connection closed by 10.0.0.1 port 56796 Oct 27 23:40:44.822311 sshd-session[4888]: pam_unix(sshd:session): session closed for user core Oct 27 23:40:44.826365 systemd[1]: sshd@13-10.0.0.92:22-10.0.0.1:56796.service: Deactivated successfully. Oct 27 23:40:44.828918 systemd[1]: session-14.scope: Deactivated successfully. Oct 27 23:40:44.829826 systemd-logind[1526]: Session 14 logged out. Waiting for processes to exit. Oct 27 23:40:44.831857 systemd-logind[1526]: Removed session 14. Oct 27 23:40:47.051996 kubelet[2673]: E1027 23:40:47.051891 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dd8bcbbfb-wngl2" podUID="5a064a63-743b-4222-a219-9a1bbd1a466a" Oct 27 23:40:48.053071 kubelet[2673]: E1027 23:40:48.051870 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9qxlt" podUID="eca582f5-cc8d-425a-955f-92ba936703d3" Oct 27 23:40:49.255726 containerd[1544]: time="2025-10-27T23:40:49.255620195Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcdda18b822fb1d6ab63e6ea8e874c2a6ec62b84adbdac4b44b438188d68a2c4\" id:\"f7c770e562f21fe4c6d767b106d98ff888dd88d869b9a3e1750f4fa548f3e029\" pid:4925 exited_at:{seconds:1761608449 nanos:254880914}" Oct 27 23:40:49.266201 kubelet[2673]: E1027 23:40:49.266166 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:40:49.839211 systemd[1]: Started sshd@14-10.0.0.92:22-10.0.0.1:39040.service - OpenSSH per-connection server daemon (10.0.0.1:39040). Oct 27 23:40:49.943833 sshd[4939]: Accepted publickey for core from 10.0.0.1 port 39040 ssh2: RSA SHA256:rJd+TU7sFfM9uyplsLTyQyJ9SbIIl66cWvxItQSjr84 Oct 27 23:40:49.945526 sshd-session[4939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:40:49.950932 systemd-logind[1526]: New session 15 of user core. Oct 27 23:40:49.957994 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 27 23:40:50.132074 sshd[4942]: Connection closed by 10.0.0.1 port 39040 Oct 27 23:40:50.133102 sshd-session[4939]: pam_unix(sshd:session): session closed for user core Oct 27 23:40:50.146614 systemd[1]: sshd@14-10.0.0.92:22-10.0.0.1:39040.service: Deactivated successfully. Oct 27 23:40:50.150445 systemd[1]: session-15.scope: Deactivated successfully. Oct 27 23:40:50.151547 systemd-logind[1526]: Session 15 logged out. Waiting for processes to exit. Oct 27 23:40:50.158021 systemd[1]: Started sshd@15-10.0.0.92:22-10.0.0.1:39052.service - OpenSSH per-connection server daemon (10.0.0.1:39052). 
Oct 27 23:40:50.159749 systemd-logind[1526]: Removed session 15. Oct 27 23:40:50.210217 sshd[4956]: Accepted publickey for core from 10.0.0.1 port 39052 ssh2: RSA SHA256:rJd+TU7sFfM9uyplsLTyQyJ9SbIIl66cWvxItQSjr84 Oct 27 23:40:50.212068 sshd-session[4956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:40:50.216874 systemd-logind[1526]: New session 16 of user core. Oct 27 23:40:50.226046 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 27 23:40:50.459104 sshd[4959]: Connection closed by 10.0.0.1 port 39052 Oct 27 23:40:50.460294 sshd-session[4956]: pam_unix(sshd:session): session closed for user core Oct 27 23:40:50.479250 systemd[1]: sshd@15-10.0.0.92:22-10.0.0.1:39052.service: Deactivated successfully. Oct 27 23:40:50.481663 systemd[1]: session-16.scope: Deactivated successfully. Oct 27 23:40:50.483285 systemd-logind[1526]: Session 16 logged out. Waiting for processes to exit. Oct 27 23:40:50.485295 systemd[1]: Started sshd@16-10.0.0.92:22-10.0.0.1:39068.service - OpenSSH per-connection server daemon (10.0.0.1:39068). Oct 27 23:40:50.490066 systemd-logind[1526]: Removed session 16. Oct 27 23:40:50.549635 sshd[4971]: Accepted publickey for core from 10.0.0.1 port 39068 ssh2: RSA SHA256:rJd+TU7sFfM9uyplsLTyQyJ9SbIIl66cWvxItQSjr84 Oct 27 23:40:50.551275 sshd-session[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:40:50.556791 systemd-logind[1526]: New session 17 of user core. Oct 27 23:40:50.562962 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 27 23:40:51.167984 sshd[4974]: Connection closed by 10.0.0.1 port 39068 Oct 27 23:40:51.168001 sshd-session[4971]: pam_unix(sshd:session): session closed for user core Oct 27 23:40:51.179712 systemd[1]: sshd@16-10.0.0.92:22-10.0.0.1:39068.service: Deactivated successfully. Oct 27 23:40:51.182610 systemd[1]: session-17.scope: Deactivated successfully. 
Oct 27 23:40:51.183673 systemd-logind[1526]: Session 17 logged out. Waiting for processes to exit. Oct 27 23:40:51.189134 systemd[1]: Started sshd@17-10.0.0.92:22-10.0.0.1:39070.service - OpenSSH per-connection server daemon (10.0.0.1:39070). Oct 27 23:40:51.190986 systemd-logind[1526]: Removed session 17. Oct 27 23:40:51.247452 sshd[4994]: Accepted publickey for core from 10.0.0.1 port 39070 ssh2: RSA SHA256:rJd+TU7sFfM9uyplsLTyQyJ9SbIIl66cWvxItQSjr84 Oct 27 23:40:51.248946 sshd-session[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:40:51.253525 systemd-logind[1526]: New session 18 of user core. Oct 27 23:40:51.261976 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 27 23:40:51.559794 sshd[4997]: Connection closed by 10.0.0.1 port 39070 Oct 27 23:40:51.561077 sshd-session[4994]: pam_unix(sshd:session): session closed for user core Oct 27 23:40:51.569037 systemd[1]: sshd@17-10.0.0.92:22-10.0.0.1:39070.service: Deactivated successfully. Oct 27 23:40:51.571299 systemd[1]: session-18.scope: Deactivated successfully. Oct 27 23:40:51.573100 systemd-logind[1526]: Session 18 logged out. Waiting for processes to exit. Oct 27 23:40:51.576271 systemd[1]: Started sshd@18-10.0.0.92:22-10.0.0.1:39080.service - OpenSSH per-connection server daemon (10.0.0.1:39080). Oct 27 23:40:51.577739 systemd-logind[1526]: Removed session 18. Oct 27 23:40:51.640333 sshd[5009]: Accepted publickey for core from 10.0.0.1 port 39080 ssh2: RSA SHA256:rJd+TU7sFfM9uyplsLTyQyJ9SbIIl66cWvxItQSjr84 Oct 27 23:40:51.640001 sshd-session[5009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:40:51.652479 systemd-logind[1526]: New session 19 of user core. Oct 27 23:40:51.657989 systemd[1]: Started session-19.scope - Session 19 of User core. 
Oct 27 23:40:51.789812 sshd[5012]: Connection closed by 10.0.0.1 port 39080 Oct 27 23:40:51.790977 sshd-session[5009]: pam_unix(sshd:session): session closed for user core Oct 27 23:40:51.794527 systemd[1]: sshd@18-10.0.0.92:22-10.0.0.1:39080.service: Deactivated successfully. Oct 27 23:40:51.796353 systemd[1]: session-19.scope: Deactivated successfully. Oct 27 23:40:51.797138 systemd-logind[1526]: Session 19 logged out. Waiting for processes to exit. Oct 27 23:40:51.798527 systemd-logind[1526]: Removed session 19. Oct 27 23:40:53.052359 kubelet[2673]: E1027 23:40:53.052301 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f59658cf9-fcbh6" podUID="7eea9394-537e-498e-8ee0-ada3b969c833" Oct 27 23:40:53.057382 kubelet[2673]: E1027 23:40:53.055679 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f59658cf9-w94zx" podUID="8e5a6c05-e53d-438e-b01a-ad6295f7d8ed" Oct 27 23:40:56.054391 kubelet[2673]: E1027 23:40:56.054343 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c698d47cd-h2pk7" podUID="6f5865ca-3aea-44fa-9144-072fef5dde02" Oct 27 23:40:56.804242 systemd[1]: Started sshd@19-10.0.0.92:22-10.0.0.1:39086.service - OpenSSH per-connection server daemon (10.0.0.1:39086). Oct 27 23:40:56.868645 sshd[5031]: Accepted publickey for core from 10.0.0.1 port 39086 ssh2: RSA SHA256:rJd+TU7sFfM9uyplsLTyQyJ9SbIIl66cWvxItQSjr84 Oct 27 23:40:56.869982 sshd-session[5031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:40:56.875321 systemd-logind[1526]: New session 20 of user core. Oct 27 23:40:56.884991 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 27 23:40:57.009038 sshd[5034]: Connection closed by 10.0.0.1 port 39086 Oct 27 23:40:57.010016 sshd-session[5031]: pam_unix(sshd:session): session closed for user core Oct 27 23:40:57.013443 systemd-logind[1526]: Session 20 logged out. Waiting for processes to exit. Oct 27 23:40:57.013662 systemd[1]: sshd@19-10.0.0.92:22-10.0.0.1:39086.service: Deactivated successfully. Oct 27 23:40:57.015301 systemd[1]: session-20.scope: Deactivated successfully. Oct 27 23:40:57.017453 systemd-logind[1526]: Removed session 20. 
Oct 27 23:40:58.053845 kubelet[2673]: E1027 23:40:58.053733 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-shccv" podUID="c6363763-fd3b-49e5-96bd-c0e1b8f05225" Oct 27 23:40:59.052647 containerd[1544]: time="2025-10-27T23:40:59.052390113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 23:40:59.276750 containerd[1544]: time="2025-10-27T23:40:59.276693792Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 23:40:59.277824 containerd[1544]: time="2025-10-27T23:40:59.277736162Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 23:40:59.277824 containerd[1544]: time="2025-10-27T23:40:59.277790682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 23:40:59.278046 kubelet[2673]: 
E1027 23:40:59.277970 2673 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 23:40:59.278046 kubelet[2673]: E1027 23:40:59.278024 2673 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 23:40:59.278511 kubelet[2673]: E1027 23:40:59.278178 2673 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOn
ly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rgbnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9qxlt_calico-system(eca582f5-cc8d-425a-955f-92ba936703d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 23:40:59.279380 kubelet[2673]: E1027 23:40:59.279316 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9qxlt" podUID="eca582f5-cc8d-425a-955f-92ba936703d3" Oct 27 23:41:01.051666 containerd[1544]: time="2025-10-27T23:41:01.051597470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 23:41:01.257952 containerd[1544]: time="2025-10-27T23:41:01.257909692Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 23:41:01.259597 containerd[1544]: time="2025-10-27T23:41:01.259529746Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 23:41:01.259811 containerd[1544]: time="2025-10-27T23:41:01.259571626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 23:41:01.259938 kubelet[2673]: E1027 23:41:01.259898 2673 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 23:41:01.260247 kubelet[2673]: E1027 23:41:01.259949 2673 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 23:41:01.260247 kubelet[2673]: E1027 23:41:01.260058 2673 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6e82d8f682ab4362a5352766130d2560,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c9d5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dd8bcbbfb-wngl2_calico-system(5a064a63-743b-4222-a219-9a1bbd1a466a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 23:41:01.262343 containerd[1544]: time="2025-10-27T23:41:01.262263889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 23:41:01.489848 containerd[1544]: time="2025-10-27T23:41:01.489720129Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 23:41:01.490834 containerd[1544]: time="2025-10-27T23:41:01.490793019Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 23:41:01.490970 containerd[1544]: time="2025-10-27T23:41:01.490895219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 23:41:01.491152 kubelet[2673]: E1027 23:41:01.491104 2673 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 23:41:01.491199 kubelet[2673]: E1027 23:41:01.491166 2673 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 23:41:01.491366 kubelet[2673]: E1027 23:41:01.491318 2673 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9d5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dd8bcbbfb-wngl2_calico-system(5a064a63-743b-4222-a219-9a1bbd1a466a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 23:41:01.492833 kubelet[2673]: E1027 23:41:01.492794 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dd8bcbbfb-wngl2" podUID="5a064a63-743b-4222-a219-9a1bbd1a466a" Oct 27 23:41:02.021081 systemd[1]: Started sshd@20-10.0.0.92:22-10.0.0.1:58126.service - OpenSSH per-connection server daemon (10.0.0.1:58126). Oct 27 23:41:02.086386 sshd[5054]: Accepted publickey for core from 10.0.0.1 port 58126 ssh2: RSA SHA256:rJd+TU7sFfM9uyplsLTyQyJ9SbIIl66cWvxItQSjr84 Oct 27 23:41:02.087714 sshd-session[5054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:41:02.092416 systemd-logind[1526]: New session 21 of user core. Oct 27 23:41:02.102959 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 27 23:41:02.215991 sshd[5057]: Connection closed by 10.0.0.1 port 58126 Oct 27 23:41:02.215844 sshd-session[5054]: pam_unix(sshd:session): session closed for user core Oct 27 23:41:02.219531 systemd[1]: sshd@20-10.0.0.92:22-10.0.0.1:58126.service: Deactivated successfully. 
Oct 27 23:41:02.222621 systemd[1]: session-21.scope: Deactivated successfully. Oct 27 23:41:02.223849 systemd-logind[1526]: Session 21 logged out. Waiting for processes to exit. Oct 27 23:41:02.225618 systemd-logind[1526]: Removed session 21. Oct 27 23:41:05.050458 kubelet[2673]: E1027 23:41:05.050422 2673 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:41:07.236113 systemd[1]: Started sshd@21-10.0.0.92:22-10.0.0.1:58128.service - OpenSSH per-connection server daemon (10.0.0.1:58128). Oct 27 23:41:07.307203 sshd[5070]: Accepted publickey for core from 10.0.0.1 port 58128 ssh2: RSA SHA256:rJd+TU7sFfM9uyplsLTyQyJ9SbIIl66cWvxItQSjr84 Oct 27 23:41:07.308837 sshd-session[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:41:07.313369 systemd-logind[1526]: New session 22 of user core. Oct 27 23:41:07.323970 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 27 23:41:07.461641 sshd[5073]: Connection closed by 10.0.0.1 port 58128 Oct 27 23:41:07.461984 sshd-session[5070]: pam_unix(sshd:session): session closed for user core Oct 27 23:41:07.466403 systemd[1]: sshd@21-10.0.0.92:22-10.0.0.1:58128.service: Deactivated successfully. Oct 27 23:41:07.468825 systemd[1]: session-22.scope: Deactivated successfully. Oct 27 23:41:07.469580 systemd-logind[1526]: Session 22 logged out. Waiting for processes to exit. Oct 27 23:41:07.470645 systemd-logind[1526]: Removed session 22. 
Oct 27 23:41:08.054804 containerd[1544]: time="2025-10-27T23:41:08.054647615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 23:41:08.257504 containerd[1544]: time="2025-10-27T23:41:08.257423757Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 23:41:08.258465 containerd[1544]: time="2025-10-27T23:41:08.258376564Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 23:41:08.258465 containerd[1544]: time="2025-10-27T23:41:08.258428125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 23:41:08.258682 kubelet[2673]: E1027 23:41:08.258621 2673 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 23:41:08.258682 kubelet[2673]: E1027 23:41:08.258679 2673 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 23:41:08.259134 kubelet[2673]: E1027 23:41:08.258890 2673 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9t9mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-6f59658cf9-w94zx_calico-apiserver(8e5a6c05-e53d-438e-b01a-ad6295f7d8ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 23:41:08.259476 containerd[1544]: time="2025-10-27T23:41:08.259437172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 23:41:08.260587 kubelet[2673]: E1027 23:41:08.260538 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f59658cf9-w94zx" podUID="8e5a6c05-e53d-438e-b01a-ad6295f7d8ed" Oct 27 23:41:08.482716 containerd[1544]: time="2025-10-27T23:41:08.482546057Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 23:41:08.483841 containerd[1544]: time="2025-10-27T23:41:08.483804305Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 23:41:08.483841 containerd[1544]: time="2025-10-27T23:41:08.483868426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 23:41:08.484071 kubelet[2673]: E1027 23:41:08.483995 2673 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 23:41:08.484116 kubelet[2673]: E1027 23:41:08.484086 2673 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 23:41:08.484766 kubelet[2673]: E1027 23:41:08.484449 2673 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5w2bb,ReadOnly:true,Mou
ntPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c698d47cd-h2pk7_calico-system(6f5865ca-3aea-44fa-9144-072fef5dde02): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 23:41:08.484996 containerd[1544]: time="2025-10-27T23:41:08.484927673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 23:41:08.485838 kubelet[2673]: E1027 23:41:08.485806 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c698d47cd-h2pk7" podUID="6f5865ca-3aea-44fa-9144-072fef5dde02" Oct 27 23:41:08.684764 containerd[1544]: time="2025-10-27T23:41:08.684671354Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 23:41:08.685679 containerd[1544]: time="2025-10-27T23:41:08.685638041Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 23:41:08.685755 containerd[1544]: time="2025-10-27T23:41:08.685720202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 23:41:08.685892 kubelet[2673]: E1027 23:41:08.685856 2673 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 23:41:08.685957 kubelet[2673]: E1027 23:41:08.685907 2673 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 23:41:08.686068 kubelet[2673]: E1027 23:41:08.686017 2673 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bdrp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f59658cf9-fcbh6_calico-apiserver(7eea9394-537e-498e-8ee0-ada3b969c833): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 23:41:08.687263 kubelet[2673]: E1027 23:41:08.687185 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f59658cf9-fcbh6" podUID="7eea9394-537e-498e-8ee0-ada3b969c833" Oct 27 23:41:12.481429 systemd[1]: Started sshd@22-10.0.0.92:22-10.0.0.1:34528.service - OpenSSH per-connection server daemon (10.0.0.1:34528). 
Oct 27 23:41:12.539307 sshd[5089]: Accepted publickey for core from 10.0.0.1 port 34528 ssh2: RSA SHA256:rJd+TU7sFfM9uyplsLTyQyJ9SbIIl66cWvxItQSjr84 Oct 27 23:41:12.540929 sshd-session[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:41:12.546744 systemd-logind[1526]: New session 23 of user core. Oct 27 23:41:12.552985 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 27 23:41:12.711641 sshd[5092]: Connection closed by 10.0.0.1 port 34528 Oct 27 23:41:12.712239 sshd-session[5089]: pam_unix(sshd:session): session closed for user core Oct 27 23:41:12.718166 systemd[1]: sshd@22-10.0.0.92:22-10.0.0.1:34528.service: Deactivated successfully. Oct 27 23:41:12.721240 systemd[1]: session-23.scope: Deactivated successfully. Oct 27 23:41:12.722600 systemd-logind[1526]: Session 23 logged out. Waiting for processes to exit. Oct 27 23:41:12.724561 systemd-logind[1526]: Removed session 23. Oct 27 23:41:13.051210 containerd[1544]: time="2025-10-27T23:41:13.051164200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 23:41:13.052763 kubelet[2673]: E1027 23:41:13.051814 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dd8bcbbfb-wngl2" podUID="5a064a63-743b-4222-a219-9a1bbd1a466a" Oct 27 23:41:13.265160 containerd[1544]: time="2025-10-27T23:41:13.265113161Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 23:41:13.266044 containerd[1544]: time="2025-10-27T23:41:13.265997766Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 23:41:13.266107 containerd[1544]: time="2025-10-27T23:41:13.266032206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 23:41:13.266606 kubelet[2673]: E1027 23:41:13.266219 2673 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 23:41:13.266606 kubelet[2673]: E1027 23:41:13.266419 2673 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 23:41:13.266606 kubelet[2673]: E1027 23:41:13.266543 2673 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kjcbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-shccv_calico-system(c6363763-fd3b-49e5-96bd-c0e1b8f05225): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 23:41:13.268953 containerd[1544]: time="2025-10-27T23:41:13.268923304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 23:41:13.502795 containerd[1544]: time="2025-10-27T23:41:13.502044343Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 23:41:13.503167 containerd[1544]: time="2025-10-27T23:41:13.503124269Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 23:41:13.503232 containerd[1544]: time="2025-10-27T23:41:13.503214630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 27 23:41:13.504819 kubelet[2673]: E1027 23:41:13.504068 2673 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 23:41:13.505083 kubelet[2673]: E1027 23:41:13.504951 2673 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 23:41:13.505200 kubelet[2673]: E1027 
23:41:13.505162 2673 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kjcbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-shccv_calico-system(c6363763-fd3b-49e5-96bd-c0e1b8f05225): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 23:41:13.506521 kubelet[2673]: E1027 23:41:13.506444 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-shccv" podUID="c6363763-fd3b-49e5-96bd-c0e1b8f05225" Oct 27 23:41:14.054818 kubelet[2673]: E1027 23:41:14.054307 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9qxlt" podUID="eca582f5-cc8d-425a-955f-92ba936703d3"