Oct 30 13:22:38.335087 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Thu Oct 30 11:31:03 -00 2025
Oct 30 13:22:38.335110 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9059fc71bb508d9916e045ba086d15ed58da6c6a917da2fc328a48e57682d73b
Oct 30 13:22:38.335168 kernel: BIOS-provided physical RAM map:
Oct 30 13:22:38.335175 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 30 13:22:38.335182 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 30 13:22:38.335189 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 30 13:22:38.335197 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Oct 30 13:22:38.335204 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Oct 30 13:22:38.335215 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 30 13:22:38.335222 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct 30 13:22:38.335232 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 30 13:22:38.335239 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 30 13:22:38.335245 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 30 13:22:38.335252 kernel: NX (Execute Disable) protection: active
Oct 30 13:22:38.335261 kernel: APIC: Static calls initialized
Oct 30 13:22:38.335271 kernel: SMBIOS 2.8 present.
Oct 30 13:22:38.335281 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Oct 30 13:22:38.335289 kernel: DMI: Memory slots populated: 1/1
Oct 30 13:22:38.335296 kernel: Hypervisor detected: KVM
Oct 30 13:22:38.335304 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Oct 30 13:22:38.335311 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 30 13:22:38.335319 kernel: kvm-clock: using sched offset of 4154693352 cycles
Oct 30 13:22:38.335327 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 30 13:22:38.335335 kernel: tsc: Detected 2794.748 MHz processor
Oct 30 13:22:38.335346 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 30 13:22:38.335354 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 30 13:22:38.335362 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Oct 30 13:22:38.335370 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 30 13:22:38.335378 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 30 13:22:38.335386 kernel: Using GB pages for direct mapping
Oct 30 13:22:38.335394 kernel: ACPI: Early table checksum verification disabled
Oct 30 13:22:38.335402 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Oct 30 13:22:38.335412 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 13:22:38.335420 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 13:22:38.335428 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 13:22:38.335436 kernel: ACPI: FACS 0x000000009CFE0000 000040
Oct 30 13:22:38.335444 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 13:22:38.335452 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 13:22:38.335459 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 13:22:38.335470 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 13:22:38.335481 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Oct 30 13:22:38.335489 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Oct 30 13:22:38.335497 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Oct 30 13:22:38.335505 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Oct 30 13:22:38.335515 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Oct 30 13:22:38.335523 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Oct 30 13:22:38.335531 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Oct 30 13:22:38.335539 kernel: No NUMA configuration found
Oct 30 13:22:38.335547 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Oct 30 13:22:38.335555 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Oct 30 13:22:38.335566 kernel: Zone ranges:
Oct 30 13:22:38.335574 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 30 13:22:38.335582 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Oct 30 13:22:38.335590 kernel: Normal empty
Oct 30 13:22:38.335598 kernel: Device empty
Oct 30 13:22:38.335606 kernel: Movable zone start for each node
Oct 30 13:22:38.335614 kernel: Early memory node ranges
Oct 30 13:22:38.335622 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 30 13:22:38.335632 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Oct 30 13:22:38.335641 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Oct 30 13:22:38.335648 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 30 13:22:38.335657 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 30 13:22:38.335665 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Oct 30 13:22:38.335675 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 30 13:22:38.335684 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 30 13:22:38.335694 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 30 13:22:38.335702 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 30 13:22:38.335713 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 30 13:22:38.335721 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 30 13:22:38.335729 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 30 13:22:38.335737 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 30 13:22:38.335745 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 30 13:22:38.335755 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 30 13:22:38.335763 kernel: TSC deadline timer available
Oct 30 13:22:38.335771 kernel: CPU topo: Max. logical packages: 1
Oct 30 13:22:38.335779 kernel: CPU topo: Max. logical dies: 1
Oct 30 13:22:38.335787 kernel: CPU topo: Max. dies per package: 1
Oct 30 13:22:38.335795 kernel: CPU topo: Max. threads per core: 1
Oct 30 13:22:38.335803 kernel: CPU topo: Num. cores per package: 4
Oct 30 13:22:38.335811 kernel: CPU topo: Num. threads per package: 4
Oct 30 13:22:38.335821 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct 30 13:22:38.335829 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 30 13:22:38.335837 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 30 13:22:38.335853 kernel: kvm-guest: setup PV sched yield
Oct 30 13:22:38.335861 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct 30 13:22:38.335869 kernel: Booting paravirtualized kernel on KVM
Oct 30 13:22:38.335877 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 30 13:22:38.335888 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 30 13:22:38.335896 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Oct 30 13:22:38.335904 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Oct 30 13:22:38.335912 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 30 13:22:38.335920 kernel: kvm-guest: PV spinlocks enabled
Oct 30 13:22:38.335928 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 30 13:22:38.335937 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9059fc71bb508d9916e045ba086d15ed58da6c6a917da2fc328a48e57682d73b
Oct 30 13:22:38.335948 kernel: random: crng init done
Oct 30 13:22:38.335956 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 30 13:22:38.335965 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 30 13:22:38.335973 kernel: Fallback order for Node 0: 0
Oct 30 13:22:38.335981 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Oct 30 13:22:38.335989 kernel: Policy zone: DMA32
Oct 30 13:22:38.335997 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 30 13:22:38.336008 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 30 13:22:38.336016 kernel: ftrace: allocating 40092 entries in 157 pages
Oct 30 13:22:38.336024 kernel: ftrace: allocated 157 pages with 5 groups
Oct 30 13:22:38.336032 kernel: Dynamic Preempt: voluntary
Oct 30 13:22:38.336040 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 30 13:22:38.336048 kernel: rcu: RCU event tracing is enabled.
Oct 30 13:22:38.336057 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 30 13:22:38.336067 kernel: Trampoline variant of Tasks RCU enabled.
Oct 30 13:22:38.336078 kernel: Rude variant of Tasks RCU enabled.
Oct 30 13:22:38.336086 kernel: Tracing variant of Tasks RCU enabled.
Oct 30 13:22:38.336094 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 30 13:22:38.336102 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 30 13:22:38.336110 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 30 13:22:38.336135 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 30 13:22:38.336144 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 30 13:22:38.336155 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 30 13:22:38.336163 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 30 13:22:38.336179 kernel: Console: colour VGA+ 80x25
Oct 30 13:22:38.336189 kernel: printk: legacy console [ttyS0] enabled
Oct 30 13:22:38.336198 kernel: ACPI: Core revision 20240827
Oct 30 13:22:38.336206 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 30 13:22:38.336215 kernel: APIC: Switch to symmetric I/O mode setup
Oct 30 13:22:38.336223 kernel: x2apic enabled
Oct 30 13:22:38.336231 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 30 13:22:38.336245 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 30 13:22:38.336254 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 30 13:22:38.336262 kernel: kvm-guest: setup PV IPIs
Oct 30 13:22:38.336270 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 30 13:22:38.336282 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Oct 30 13:22:38.336290 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Oct 30 13:22:38.336298 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 30 13:22:38.336307 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 30 13:22:38.336315 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 30 13:22:38.336323 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 30 13:22:38.336331 kernel: Spectre V2 : Mitigation: Retpolines
Oct 30 13:22:38.336342 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 30 13:22:38.336351 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 30 13:22:38.336359 kernel: active return thunk: retbleed_return_thunk
Oct 30 13:22:38.336367 kernel: RETBleed: Mitigation: untrained return thunk
Oct 30 13:22:38.336376 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 30 13:22:38.336384 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 30 13:22:38.336393 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 30 13:22:38.336450 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 30 13:22:38.336458 kernel: active return thunk: srso_return_thunk
Oct 30 13:22:38.336467 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 30 13:22:38.336475 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 30 13:22:38.336483 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 30 13:22:38.336492 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 30 13:22:38.336500 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 30 13:22:38.336515 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 30 13:22:38.336524 kernel: Freeing SMP alternatives memory: 32K
Oct 30 13:22:38.336532 kernel: pid_max: default: 32768 minimum: 301
Oct 30 13:22:38.336540 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 30 13:22:38.336549 kernel: landlock: Up and running.
Oct 30 13:22:38.336557 kernel: SELinux: Initializing.
Oct 30 13:22:38.336568 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 30 13:22:38.336584 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 30 13:22:38.336592 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 30 13:22:38.336601 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 30 13:22:38.336609 kernel: ... version:                0
Oct 30 13:22:38.336617 kernel: ... bit width:              48
Oct 30 13:22:38.336625 kernel: ... generic registers:      6
Oct 30 13:22:38.336634 kernel: ... value mask:             0000ffffffffffff
Oct 30 13:22:38.336649 kernel: ... max period:             00007fffffffffff
Oct 30 13:22:38.336658 kernel: ... fixed-purpose events:   0
Oct 30 13:22:38.336666 kernel: ... event mask:             000000000000003f
Oct 30 13:22:38.336674 kernel: signal: max sigframe size: 1776
Oct 30 13:22:38.336682 kernel: rcu: Hierarchical SRCU implementation.
Oct 30 13:22:38.336691 kernel: rcu: Max phase no-delay instances is 400.
Oct 30 13:22:38.336699 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 30 13:22:38.336714 kernel: smp: Bringing up secondary CPUs ...
Oct 30 13:22:38.336723 kernel: smpboot: x86: Booting SMP configuration:
Oct 30 13:22:38.336731 kernel: .... node #0, CPUs:        #1 #2 #3
Oct 30 13:22:38.336740 kernel: smp: Brought up 1 node, 4 CPUs
Oct 30 13:22:38.336748 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Oct 30 13:22:38.336757 kernel: Memory: 2447340K/2571752K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15964K init, 2080K bss, 118472K reserved, 0K cma-reserved)
Oct 30 13:22:38.336765 kernel: devtmpfs: initialized
Oct 30 13:22:38.336780 kernel: x86/mm: Memory block size: 128MB
Oct 30 13:22:38.336788 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 30 13:22:38.336797 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 30 13:22:38.336805 kernel: pinctrl core: initialized pinctrl subsystem
Oct 30 13:22:38.336816 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 30 13:22:38.336824 kernel: audit: initializing netlink subsys (disabled)
Oct 30 13:22:38.336833 kernel: audit: type=2000 audit(1761830555.354:1): state=initialized audit_enabled=0 res=1
Oct 30 13:22:38.336853 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 30 13:22:38.336861 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 30 13:22:38.336870 kernel: cpuidle: using governor menu
Oct 30 13:22:38.336878 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 30 13:22:38.336887 kernel: dca service started, version 1.12.1
Oct 30 13:22:38.336895 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Oct 30 13:22:38.336904 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct 30 13:22:38.336912 kernel: PCI: Using configuration type 1 for base access
Oct 30 13:22:38.336927 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 30 13:22:38.336936 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 30 13:22:38.336944 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 30 13:22:38.336953 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 30 13:22:38.336961 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 30 13:22:38.336969 kernel: ACPI: Added _OSI(Module Device)
Oct 30 13:22:38.336978 kernel: ACPI: Added _OSI(Processor Device)
Oct 30 13:22:38.336992 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 30 13:22:38.337001 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 30 13:22:38.337009 kernel: ACPI: Interpreter enabled
Oct 30 13:22:38.337017 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 30 13:22:38.337026 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 30 13:22:38.337034 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 30 13:22:38.337042 kernel: PCI: Using E820 reservations for host bridge windows
Oct 30 13:22:38.337057 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 30 13:22:38.337065 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 30 13:22:38.337339 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 30 13:22:38.337525 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 30 13:22:38.337705 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 30 13:22:38.337716 kernel: PCI host bridge to bus 0000:00
Oct 30 13:22:38.337918 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 30 13:22:38.338203 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 30 13:22:38.338383 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 30 13:22:38.338546 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Oct 30 13:22:38.338711 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 30 13:22:38.338926 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Oct 30 13:22:38.339109 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 30 13:22:38.339325 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct 30 13:22:38.339513 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Oct 30 13:22:38.339688 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Oct 30 13:22:38.339941 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Oct 30 13:22:38.340129 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Oct 30 13:22:38.340311 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 30 13:22:38.340496 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 30 13:22:38.340832 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Oct 30 13:22:38.341030 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Oct 30 13:22:38.341277 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Oct 30 13:22:38.341462 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 30 13:22:38.341636 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Oct 30 13:22:38.341810 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Oct 30 13:22:38.341995 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Oct 30 13:22:38.342193 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 30 13:22:38.342430 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Oct 30 13:22:38.342602 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Oct 30 13:22:38.342816 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Oct 30 13:22:38.343001 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Oct 30 13:22:38.343204 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct 30 13:22:38.343384 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 30 13:22:38.343580 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct 30 13:22:38.343753 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Oct 30 13:22:38.343936 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Oct 30 13:22:38.344132 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct 30 13:22:38.344310 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Oct 30 13:22:38.344335 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 30 13:22:38.344344 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 30 13:22:38.344352 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 30 13:22:38.344361 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 30 13:22:38.344369 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 30 13:22:38.344378 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 30 13:22:38.344386 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 30 13:22:38.344401 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 30 13:22:38.344410 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 30 13:22:38.344423 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 30 13:22:38.344431 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 30 13:22:38.344444 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 30 13:22:38.344456 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 30 13:22:38.344472 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 30 13:22:38.344498 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 30 13:22:38.344513 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 30 13:22:38.344525 kernel: iommu: Default domain type: Translated
Oct 30 13:22:38.344534 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 30 13:22:38.344542 kernel: PCI: Using ACPI for IRQ routing
Oct 30 13:22:38.344550 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 30 13:22:38.344558 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 30 13:22:38.344574 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Oct 30 13:22:38.344890 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 30 13:22:38.345266 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 30 13:22:38.345540 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 30 13:22:38.345552 kernel: vgaarb: loaded
Oct 30 13:22:38.345561 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 30 13:22:38.345569 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 30 13:22:38.345589 kernel: clocksource: Switched to clocksource kvm-clock
Oct 30 13:22:38.345598 kernel: VFS: Disk quotas dquot_6.6.0
Oct 30 13:22:38.345606 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 30 13:22:38.345615 kernel: pnp: PnP ACPI init
Oct 30 13:22:38.345808 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 30 13:22:38.345822 kernel: pnp: PnP ACPI: found 6 devices
Oct 30 13:22:38.345831 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 30 13:22:38.345855 kernel: NET: Registered PF_INET protocol family
Oct 30 13:22:38.345864 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 30 13:22:38.345873 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 30 13:22:38.345881 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 30 13:22:38.345890 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 30 13:22:38.345898 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 30 13:22:38.345907 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 30 13:22:38.345922 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 30 13:22:38.345931 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 30 13:22:38.345939 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 30 13:22:38.345948 kernel: NET: Registered PF_XDP protocol family
Oct 30 13:22:38.346112 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 30 13:22:38.346290 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 30 13:22:38.346464 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 30 13:22:38.346625 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Oct 30 13:22:38.346785 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 30 13:22:38.346956 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Oct 30 13:22:38.346968 kernel: PCI: CLS 0 bytes, default 64
Oct 30 13:22:38.346977 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Oct 30 13:22:38.346985 kernel: Initialise system trusted keyrings
Oct 30 13:22:38.347007 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 30 13:22:38.347016 kernel: Key type asymmetric registered
Oct 30 13:22:38.347024 kernel: Asymmetric key parser 'x509' registered
Oct 30 13:22:38.347033 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 30 13:22:38.347041 kernel: io scheduler mq-deadline registered
Oct 30 13:22:38.347050 kernel: io scheduler kyber registered
Oct 30 13:22:38.347058 kernel: io scheduler bfq registered
Oct 30 13:22:38.347074 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 30 13:22:38.347083 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 30 13:22:38.347092 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 30 13:22:38.347100 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 30 13:22:38.347109 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 30 13:22:38.347131 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 30 13:22:38.347140 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 30 13:22:38.347156 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 30 13:22:38.347165 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 30 13:22:38.347345 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 30 13:22:38.347357 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 30 13:22:38.347523 kernel: rtc_cmos 00:04: registered as rtc0
Oct 30 13:22:38.347688 kernel: rtc_cmos 00:04: setting system clock to 2025-10-30T13:22:36 UTC (1761830556)
Oct 30 13:22:38.347861 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Oct 30 13:22:38.347885 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 30 13:22:38.347893 kernel: NET: Registered PF_INET6 protocol family
Oct 30 13:22:38.347902 kernel: Segment Routing with IPv6
Oct 30 13:22:38.347910 kernel: In-situ OAM (IOAM) with IPv6
Oct 30 13:22:38.347919 kernel: NET: Registered PF_PACKET protocol family
Oct 30 13:22:38.347927 kernel: Key type dns_resolver registered
Oct 30 13:22:38.347935 kernel: IPI shorthand broadcast: enabled
Oct 30 13:22:38.347951 kernel: sched_clock: Marking stable (1168003162, 199060322)->(1417314140, -50250656)
Oct 30 13:22:38.347959 kernel: registered taskstats version 1
Oct 30 13:22:38.347968 kernel: Loading compiled-in X.509 certificates
Oct 30 13:22:38.347977 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 94f1b718c5ca9e16ea420e725d7bfe648cbb4295'
Oct 30 13:22:38.347985 kernel: Demotion targets for Node 0: null
Oct 30 13:22:38.347993 kernel: Key type .fscrypt registered
Oct 30 13:22:38.348002 kernel: Key type fscrypt-provisioning registered
Oct 30 13:22:38.348017 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 30 13:22:38.348025 kernel: ima: Allocated hash algorithm: sha1
Oct 30 13:22:38.348033 kernel: ima: No architecture policies found
Oct 30 13:22:38.348042 kernel: clk: Disabling unused clocks
Oct 30 13:22:38.348050 kernel: Freeing unused kernel image (initmem) memory: 15964K
Oct 30 13:22:38.348059 kernel: Write protecting the kernel read-only data: 45056k
Oct 30 13:22:38.348067 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K
Oct 30 13:22:38.348082 kernel: Run /init as init process
Oct 30 13:22:38.348091 kernel:   with arguments:
Oct 30 13:22:38.348099 kernel:     /init
Oct 30 13:22:38.348107 kernel:   with environment:
Oct 30 13:22:38.348129 kernel:     HOME=/
Oct 30 13:22:38.348137 kernel:     TERM=linux
Oct 30 13:22:38.348146 kernel: SCSI subsystem initialized
Oct 30 13:22:38.348154 kernel: libata version 3.00 loaded.
Oct 30 13:22:38.348370 kernel: ahci 0000:00:1f.2: version 3.0
Oct 30 13:22:38.348439 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 30 13:22:38.348616 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Oct 30 13:22:38.348792 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Oct 30 13:22:38.348991 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 30 13:22:38.349231 kernel: scsi host0: ahci
Oct 30 13:22:38.349426 kernel: scsi host1: ahci
Oct 30 13:22:38.349620 kernel: scsi host2: ahci
Oct 30 13:22:38.349810 kernel: scsi host3: ahci
Oct 30 13:22:38.350006 kernel: scsi host4: ahci
Oct 30 13:22:38.350340 kernel: scsi host5: ahci
Oct 30 13:22:38.350362 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Oct 30 13:22:38.350374 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Oct 30 13:22:38.350386 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Oct 30 13:22:38.350394 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Oct 30 13:22:38.350403 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Oct 30 13:22:38.350412 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Oct 30 13:22:38.350489 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 30 13:22:38.350497 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 30 13:22:38.350506 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Oct 30 13:22:38.350521 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 30 13:22:38.350530 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 30 13:22:38.350539 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 30 13:22:38.350547 kernel: ata3.00: LPM support broken, forcing max_power
Oct 30 13:22:38.350563 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 30 13:22:38.350572 kernel: ata3.00: applying bridge limits
Oct 30 13:22:38.350580 kernel: ata3.00: LPM support broken, forcing max_power
Oct 30 13:22:38.350589 kernel: ata3.00: configured for UDMA/100
Oct 30 13:22:38.350830 kernel: scsi 2:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 30 13:22:38.351060 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Oct 30 13:22:38.351304 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Oct 30 13:22:38.351319 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 30 13:22:38.351328 kernel: GPT:16515071 != 27000831
Oct 30 13:22:38.351337 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 30 13:22:38.351346 kernel: GPT:16515071 != 27000831
Oct 30 13:22:38.351354 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 30 13:22:38.351363 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 30 13:22:38.351600 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 30 13:22:38.351614 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 30 13:22:38.351805 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 30 13:22:38.351817 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 30 13:22:38.351826 kernel: device-mapper: uevent: version 1.0.3
Oct 30 13:22:38.351835 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 30 13:22:38.351852 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Oct 30 13:22:38.351873 kernel: raid6: avx2x4   gen() 30224 MB/s
Oct 30 13:22:38.351881 kernel: raid6: avx2x2   gen() 30713 MB/s
Oct 30 13:22:38.351890 kernel: raid6: avx2x1   gen() 25138 MB/s
Oct 30 13:22:38.351899 kernel: raid6: using algorithm avx2x2 gen() 30713 MB/s
Oct 30 13:22:38.351914 kernel: raid6: .... xor() 17971 MB/s, rmw enabled
Oct 30 13:22:38.351923 kernel: raid6: using avx2x2 recovery algorithm
Oct 30 13:22:38.351932 kernel: xor: automatically using best checksumming function   avx
Oct 30 13:22:38.351941 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 30 13:22:38.351950 kernel: BTRFS: device fsid eda3d582-32f5-4286-9f04-debab6c64300 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (182)
Oct 30 13:22:38.351959 kernel: BTRFS info (device dm-0): first mount of filesystem eda3d582-32f5-4286-9f04-debab6c64300
Oct 30 13:22:38.351968 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 30 13:22:38.351984 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 30 13:22:38.351998 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 30 13:22:38.352007 kernel: loop: module loaded
Oct 30 13:22:38.352016 kernel: loop0: detected capacity change from 0 to 100136
Oct 30 13:22:38.352024 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 30 13:22:38.352034 systemd[1]: Successfully made /usr/ read-only.
Oct 30 13:22:38.352046 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 30 13:22:38.352063 systemd[1]: Detected virtualization kvm.
Oct 30 13:22:38.352072 systemd[1]: Detected architecture x86-64.
Oct 30 13:22:38.352081 systemd[1]: Running in initrd.
Oct 30 13:22:38.352090 systemd[1]: No hostname configured, using default hostname.
Oct 30 13:22:38.352100 systemd[1]: Hostname set to .
Oct 30 13:22:38.352109 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 30 13:22:38.352137 systemd[1]: Queued start job for default target initrd.target.
Oct 30 13:22:38.352147 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 30 13:22:38.352156 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 30 13:22:38.352165 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 30 13:22:38.352176 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 30 13:22:38.352185 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 30 13:22:38.352203 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 30 13:22:38.352213 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 30 13:22:38.352222 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 30 13:22:38.352231 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 30 13:22:38.352241 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 30 13:22:38.352261 systemd[1]: Reached target paths.target - Path Units. Oct 30 13:22:38.352294 systemd[1]: Reached target slices.target - Slice Units. Oct 30 13:22:38.352305 systemd[1]: Reached target swap.target - Swaps. Oct 30 13:22:38.352314 systemd[1]: Reached target timers.target - Timer Units. Oct 30 13:22:38.352324 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 30 13:22:38.352339 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 30 13:22:38.352351 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 30 13:22:38.352361 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 30 13:22:38.352385 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Oct 30 13:22:38.352394 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 30 13:22:38.352403 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 30 13:22:38.352412 systemd[1]: Reached target sockets.target - Socket Units. Oct 30 13:22:38.352422 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 30 13:22:38.352432 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 30 13:22:38.352441 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 30 13:22:38.352457 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 30 13:22:38.352467 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 30 13:22:38.352477 systemd[1]: Starting systemd-fsck-usr.service... Oct 30 13:22:38.352486 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 30 13:22:38.352495 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 30 13:22:38.352505 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 13:22:38.352521 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 30 13:22:38.352531 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 30 13:22:38.352540 systemd[1]: Finished systemd-fsck-usr.service. Oct 30 13:22:38.352550 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 30 13:22:38.352597 systemd-journald[315]: Collecting audit messages is disabled. Oct 30 13:22:38.352619 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Oct 30 13:22:38.352629 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 30 13:22:38.352646 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 30 13:22:38.352656 systemd-journald[315]: Journal started Oct 30 13:22:38.352675 systemd-journald[315]: Runtime Journal (/run/log/journal/f37557f967354be09858956e319d7b2b) is 6M, max 48.2M, 42.2M free. Oct 30 13:22:38.355137 systemd[1]: Started systemd-journald.service - Journal Service. Oct 30 13:22:38.358355 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 30 13:22:38.526204 kernel: Bridge firewalling registered Oct 30 13:22:38.526361 systemd-modules-load[319]: Inserted module 'br_netfilter' Oct 30 13:22:38.528270 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 30 13:22:38.598437 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 13:22:38.604580 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 30 13:22:38.607401 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 30 13:22:38.614894 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 30 13:22:38.620538 systemd-tmpfiles[335]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 30 13:22:38.630519 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 30 13:22:38.634370 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 30 13:22:38.637135 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 30 13:22:38.648315 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Oct 30 13:22:38.653964 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 30 13:22:38.689197 dracut-cmdline[362]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9059fc71bb508d9916e045ba086d15ed58da6c6a917da2fc328a48e57682d73b Oct 30 13:22:38.705663 systemd-resolved[353]: Positive Trust Anchors: Oct 30 13:22:38.705679 systemd-resolved[353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 30 13:22:38.705685 systemd-resolved[353]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 30 13:22:38.705727 systemd-resolved[353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 30 13:22:38.739651 systemd-resolved[353]: Defaulting to hostname 'linux'. Oct 30 13:22:38.741272 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 30 13:22:38.745195 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 30 13:22:38.837170 kernel: Loading iSCSI transport class v2.0-870. 
Oct 30 13:22:38.851149 kernel: iscsi: registered transport (tcp) Oct 30 13:22:38.878149 kernel: iscsi: registered transport (qla4xxx) Oct 30 13:22:38.878177 kernel: QLogic iSCSI HBA Driver Oct 30 13:22:38.906084 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 30 13:22:38.931154 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 30 13:22:38.936768 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 30 13:22:39.207576 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 30 13:22:39.214009 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 30 13:22:39.218765 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 30 13:22:39.256382 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 30 13:22:39.261815 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 30 13:22:39.298412 systemd-udevd[597]: Using default interface naming scheme 'v257'. Oct 30 13:22:39.313921 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 30 13:22:39.317198 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 30 13:22:39.349283 dracut-pre-trigger[661]: rd.md=0: removing MD RAID activation Oct 30 13:22:39.357669 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 30 13:22:39.361773 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 30 13:22:39.380964 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 30 13:22:39.384249 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 30 13:22:39.508771 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Oct 30 13:22:39.514926 systemd-networkd[716]: lo: Link UP Oct 30 13:22:39.514936 systemd-networkd[716]: lo: Gained carrier Oct 30 13:22:39.515271 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 30 13:22:39.516362 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 30 13:22:39.523977 systemd[1]: Reached target network.target - Network. Oct 30 13:22:39.572941 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 30 13:22:39.596869 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 30 13:22:39.662157 kernel: cryptd: max_cpu_qlen set to 1000 Oct 30 13:22:39.662250 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 30 13:22:39.658415 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 30 13:22:39.676371 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 30 13:22:39.685007 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 30 13:22:39.689218 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 30 13:22:39.691394 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 13:22:39.692162 systemd-networkd[716]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 30 13:22:39.692170 systemd-networkd[716]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 30 13:22:39.709477 kernel: AES CTR mode by8 optimization enabled Oct 30 13:22:39.692655 systemd-networkd[716]: eth0: Link UP Oct 30 13:22:39.693056 systemd-networkd[716]: eth0: Gained carrier Oct 30 13:22:39.693065 systemd-networkd[716]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 30 13:22:39.718498 disk-uuid[775]: Primary Header is updated. Oct 30 13:22:39.718498 disk-uuid[775]: Secondary Entries is updated. Oct 30 13:22:39.718498 disk-uuid[775]: Secondary Header is updated. Oct 30 13:22:39.698584 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 13:22:39.702424 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 13:22:39.706175 systemd-networkd[716]: eth0: DHCPv4 address 10.0.0.72/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 30 13:22:39.827412 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 30 13:22:39.846016 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 13:22:39.852008 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 30 13:22:39.856024 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 30 13:22:39.859742 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 30 13:22:39.865003 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 30 13:22:39.895604 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 30 13:22:39.992871 systemd-resolved[353]: Detected conflict on linux IN A 10.0.0.72 Oct 30 13:22:39.992890 systemd-resolved[353]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. Oct 30 13:22:40.768108 disk-uuid[792]: Warning: The kernel is still using the old partition table. 
Oct 30 13:22:40.768108 disk-uuid[792]: The new table will be used at the next reboot or after you Oct 30 13:22:40.768108 disk-uuid[792]: run partprobe(8) or kpartx(8) Oct 30 13:22:40.768108 disk-uuid[792]: The operation has completed successfully. Oct 30 13:22:40.786879 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 30 13:22:40.787037 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 30 13:22:40.790850 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 30 13:22:40.797299 systemd-networkd[716]: eth0: Gained IPv6LL Oct 30 13:22:40.836164 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (863) Oct 30 13:22:40.836240 kernel: BTRFS info (device vda6): first mount of filesystem 11cbc96a-f6fd-4ff3-b9c8-ab48038f6d57 Oct 30 13:22:40.839509 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 30 13:22:40.843499 kernel: BTRFS info (device vda6): turning on async discard Oct 30 13:22:40.843590 kernel: BTRFS info (device vda6): enabling free space tree Oct 30 13:22:40.852143 kernel: BTRFS info (device vda6): last unmount of filesystem 11cbc96a-f6fd-4ff3-b9c8-ab48038f6d57 Oct 30 13:22:40.853635 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 30 13:22:40.857198 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Oct 30 13:22:41.464067 ignition[882]: Ignition 2.22.0 Oct 30 13:22:41.465183 ignition[882]: Stage: fetch-offline Oct 30 13:22:41.465382 ignition[882]: no configs at "/usr/lib/ignition/base.d" Oct 30 13:22:41.465401 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 30 13:22:41.465602 ignition[882]: parsed url from cmdline: "" Oct 30 13:22:41.465607 ignition[882]: no config URL provided Oct 30 13:22:41.465613 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Oct 30 13:22:41.465630 ignition[882]: no config at "/usr/lib/ignition/user.ign" Oct 30 13:22:41.465731 ignition[882]: op(1): [started] loading QEMU firmware config module Oct 30 13:22:41.465750 ignition[882]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 30 13:22:41.485781 ignition[882]: op(1): [finished] loading QEMU firmware config module Oct 30 13:22:41.563279 ignition[882]: parsing config with SHA512: 0b173a145f749712f479a4b2891f3a9a17289e2d9ca1b3809ca7974f6f89fdc056d7a5c6efafaed7b079e96354635f919743fb2b772d032ae9ac3a831d80601c Oct 30 13:22:41.571320 unknown[882]: fetched base config from "system" Oct 30 13:22:41.571339 unknown[882]: fetched user config from "qemu" Oct 30 13:22:41.571854 ignition[882]: fetch-offline: fetch-offline passed Oct 30 13:22:41.571937 ignition[882]: Ignition finished successfully Oct 30 13:22:41.576505 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 30 13:22:41.579699 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 30 13:22:41.580928 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 30 13:22:41.639544 ignition[892]: Ignition 2.22.0 Oct 30 13:22:41.639565 ignition[892]: Stage: kargs Oct 30 13:22:41.639780 ignition[892]: no configs at "/usr/lib/ignition/base.d" Oct 30 13:22:41.639790 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 30 13:22:41.640538 ignition[892]: kargs: kargs passed Oct 30 13:22:41.645781 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 30 13:22:41.640588 ignition[892]: Ignition finished successfully Oct 30 13:22:41.648809 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 30 13:22:41.697428 ignition[900]: Ignition 2.22.0 Oct 30 13:22:41.697445 ignition[900]: Stage: disks Oct 30 13:22:41.697665 ignition[900]: no configs at "/usr/lib/ignition/base.d" Oct 30 13:22:41.697679 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 30 13:22:41.698701 ignition[900]: disks: disks passed Oct 30 13:22:41.702041 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 30 13:22:41.698762 ignition[900]: Ignition finished successfully Oct 30 13:22:41.704615 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 30 13:22:41.707208 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 30 13:22:41.710645 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 30 13:22:41.712267 systemd[1]: Reached target sysinit.target - System Initialization. Oct 30 13:22:41.715062 systemd[1]: Reached target basic.target - Basic System. Oct 30 13:22:41.718258 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 30 13:22:41.757305 systemd-fsck[910]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 30 13:22:41.765621 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 30 13:22:41.771152 systemd[1]: Mounting sysroot.mount - /sysroot... 
Oct 30 13:22:41.893144 kernel: EXT4-fs (vda9): mounted filesystem 6e47eb19-ed37-4e0f-85fc-4a1fde834fe4 r/w with ordered data mode. Quota mode: none. Oct 30 13:22:41.893351 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 30 13:22:41.894630 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 30 13:22:41.897746 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 30 13:22:41.900884 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 30 13:22:41.903505 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 30 13:22:41.903542 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 30 13:22:41.903567 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 30 13:22:41.920987 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 30 13:22:41.927915 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (918) Oct 30 13:22:41.927940 kernel: BTRFS info (device vda6): first mount of filesystem 11cbc96a-f6fd-4ff3-b9c8-ab48038f6d57 Oct 30 13:22:41.924399 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 30 13:22:41.932665 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 30 13:22:41.936339 kernel: BTRFS info (device vda6): turning on async discard Oct 30 13:22:41.936359 kernel: BTRFS info (device vda6): enabling free space tree Oct 30 13:22:41.937799 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 30 13:22:41.992150 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory Oct 30 13:22:41.997372 initrd-setup-root[949]: cut: /sysroot/etc/group: No such file or directory Oct 30 13:22:42.001830 initrd-setup-root[956]: cut: /sysroot/etc/shadow: No such file or directory Oct 30 13:22:42.008221 initrd-setup-root[963]: cut: /sysroot/etc/gshadow: No such file or directory Oct 30 13:22:42.118106 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 30 13:22:42.121695 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 30 13:22:42.124030 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 30 13:22:42.141349 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 30 13:22:42.143924 kernel: BTRFS info (device vda6): last unmount of filesystem 11cbc96a-f6fd-4ff3-b9c8-ab48038f6d57 Oct 30 13:22:42.161291 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 30 13:22:42.190304 ignition[1032]: INFO : Ignition 2.22.0 Oct 30 13:22:42.190304 ignition[1032]: INFO : Stage: mount Oct 30 13:22:42.192922 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 30 13:22:42.192922 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 30 13:22:42.192922 ignition[1032]: INFO : mount: mount passed Oct 30 13:22:42.192922 ignition[1032]: INFO : Ignition finished successfully Oct 30 13:22:42.201859 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 30 13:22:42.206159 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 30 13:22:42.895664 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Oct 30 13:22:42.923168 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1044) Oct 30 13:22:42.923247 kernel: BTRFS info (device vda6): first mount of filesystem 11cbc96a-f6fd-4ff3-b9c8-ab48038f6d57 Oct 30 13:22:42.926566 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 30 13:22:42.930651 kernel: BTRFS info (device vda6): turning on async discard Oct 30 13:22:42.930762 kernel: BTRFS info (device vda6): enabling free space tree Oct 30 13:22:42.932469 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 30 13:22:42.991090 ignition[1061]: INFO : Ignition 2.22.0 Oct 30 13:22:42.991090 ignition[1061]: INFO : Stage: files Oct 30 13:22:42.994030 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 30 13:22:42.994030 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 30 13:22:42.994030 ignition[1061]: DEBUG : files: compiled without relabeling support, skipping Oct 30 13:22:42.999400 ignition[1061]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 30 13:22:42.999400 ignition[1061]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 30 13:22:43.004144 ignition[1061]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 30 13:22:43.006624 ignition[1061]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 30 13:22:43.009528 unknown[1061]: wrote ssh authorized keys file for user: core Oct 30 13:22:43.011472 ignition[1061]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 30 13:22:43.011472 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Oct 30 13:22:43.011472 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Oct 
30 13:22:43.067868 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 30 13:22:43.155860 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Oct 30 13:22:43.155860 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 30 13:22:43.162512 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 30 13:22:43.162512 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 30 13:22:43.162512 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 30 13:22:43.162512 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 30 13:22:43.162512 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 30 13:22:43.162512 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 30 13:22:43.162512 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 30 13:22:43.162512 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 30 13:22:43.162512 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 30 13:22:43.162512 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 30 13:22:43.192286 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 30 13:22:43.192286 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 30 13:22:43.192286 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Oct 30 13:22:43.616904 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 30 13:22:44.281158 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 30 13:22:44.281158 ignition[1061]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 30 13:22:44.287212 ignition[1061]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 30 13:22:44.290405 ignition[1061]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 30 13:22:44.290405 ignition[1061]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 30 13:22:44.290405 ignition[1061]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 30 13:22:44.290405 ignition[1061]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 30 13:22:44.290405 ignition[1061]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 30 13:22:44.290405 
ignition[1061]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 30 13:22:44.290405 ignition[1061]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 30 13:22:44.311631 ignition[1061]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 30 13:22:44.315633 ignition[1061]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 30 13:22:44.318155 ignition[1061]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 30 13:22:44.318155 ignition[1061]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 30 13:22:44.318155 ignition[1061]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 30 13:22:44.318155 ignition[1061]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 30 13:22:44.318155 ignition[1061]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 30 13:22:44.318155 ignition[1061]: INFO : files: files passed Oct 30 13:22:44.318155 ignition[1061]: INFO : Ignition finished successfully Oct 30 13:22:44.329357 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 30 13:22:44.333508 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 30 13:22:44.339286 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 30 13:22:44.350224 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 30 13:22:44.350366 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Oct 30 13:22:44.357955 initrd-setup-root-after-ignition[1092]: grep: /sysroot/oem/oem-release: No such file or directory Oct 30 13:22:44.363204 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 30 13:22:44.363204 initrd-setup-root-after-ignition[1094]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 30 13:22:44.368412 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 30 13:22:44.373896 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 30 13:22:44.374754 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 30 13:22:44.381272 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 30 13:22:44.478985 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 30 13:22:44.479433 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 30 13:22:44.480975 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 30 13:22:44.485954 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 30 13:22:44.492724 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 30 13:22:44.494954 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 30 13:22:44.550853 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 30 13:22:44.558901 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 30 13:22:44.607326 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 30 13:22:44.607720 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Oct 30 13:22:44.609446 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 30 13:22:44.615783 systemd[1]: Stopped target timers.target - Timer Units. Oct 30 13:22:44.617171 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 30 13:22:44.617445 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 30 13:22:44.618978 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 30 13:22:44.619914 systemd[1]: Stopped target basic.target - Basic System. Oct 30 13:22:44.620818 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 30 13:22:44.632856 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 30 13:22:44.633805 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 30 13:22:44.635053 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 30 13:22:44.635684 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 30 13:22:44.636511 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 30 13:22:44.637282 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 30 13:22:44.638216 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 30 13:22:44.639052 systemd[1]: Stopped target swap.target - Swaps. Oct 30 13:22:44.639927 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 30 13:22:44.640132 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 30 13:22:44.665757 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 30 13:22:44.669377 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 30 13:22:44.670776 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 30 13:22:44.675076 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Oct 30 13:22:44.676284 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 30 13:22:44.676526 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 30 13:22:44.677477 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 30 13:22:44.677742 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 30 13:22:44.678911 systemd[1]: Stopped target paths.target - Path Units. Oct 30 13:22:44.689099 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 30 13:22:44.696622 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 30 13:22:44.701485 systemd[1]: Stopped target slices.target - Slice Units. Oct 30 13:22:44.702535 systemd[1]: Stopped target sockets.target - Socket Units. Oct 30 13:22:44.705110 systemd[1]: iscsid.socket: Deactivated successfully. Oct 30 13:22:44.705243 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 30 13:22:44.707991 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 30 13:22:44.708083 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 30 13:22:44.710923 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 30 13:22:44.711081 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 30 13:22:44.714542 systemd[1]: ignition-files.service: Deactivated successfully. Oct 30 13:22:44.714695 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 30 13:22:44.719084 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 30 13:22:44.723547 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 30 13:22:44.724593 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 30 13:22:44.724779 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Oct 30 13:22:44.725600 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 30 13:22:44.725772 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 30 13:22:44.732720 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 30 13:22:44.732827 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 30 13:22:44.751792 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 30 13:22:44.751947 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 30 13:22:44.782833 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 30 13:22:44.788839 ignition[1118]: INFO : Ignition 2.22.0 Oct 30 13:22:44.790383 ignition[1118]: INFO : Stage: umount Oct 30 13:22:44.790383 ignition[1118]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 30 13:22:44.790383 ignition[1118]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 30 13:22:44.795249 ignition[1118]: INFO : umount: umount passed Oct 30 13:22:44.795249 ignition[1118]: INFO : Ignition finished successfully Oct 30 13:22:44.796967 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 30 13:22:44.797146 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 30 13:22:44.797678 systemd[1]: Stopped target network.target - Network. Oct 30 13:22:44.800448 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 30 13:22:44.800510 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 30 13:22:44.802935 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 30 13:22:44.802995 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 30 13:22:44.806745 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 30 13:22:44.806805 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 30 13:22:44.809571 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Oct 30 13:22:44.809632 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 30 13:22:44.812672 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 30 13:22:44.815684 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 30 13:22:44.833090 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 30 13:22:44.833276 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 30 13:22:44.840371 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 30 13:22:44.840568 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 30 13:22:44.847227 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 30 13:22:44.851252 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 30 13:22:44.851315 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 30 13:22:44.856013 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 30 13:22:44.857070 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 30 13:22:44.857171 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 30 13:22:44.860020 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 30 13:22:44.860094 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 30 13:22:44.864069 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 30 13:22:44.864157 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 30 13:22:44.868763 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 30 13:22:44.872017 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 30 13:22:44.877615 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 30 13:22:44.879882 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Oct 30 13:22:44.880012 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 30 13:22:44.900241 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 30 13:22:44.910360 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 30 13:22:44.914674 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 30 13:22:44.914733 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 30 13:22:44.915659 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 30 13:22:44.915697 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 30 13:22:44.918540 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 30 13:22:44.918604 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 30 13:22:44.924290 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 30 13:22:44.924343 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 30 13:22:44.928743 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 30 13:22:44.928798 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 30 13:22:44.934494 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 30 13:22:44.935655 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 30 13:22:44.935713 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 30 13:22:44.938829 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 30 13:22:44.938882 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 30 13:22:44.942671 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 30 13:22:44.942727 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Oct 30 13:22:44.943471 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 30 13:22:44.943517 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 30 13:22:44.949800 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 30 13:22:44.949855 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 13:22:44.963244 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 30 13:22:44.963415 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 30 13:22:44.967028 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 30 13:22:44.967160 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 30 13:22:44.968330 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 30 13:22:44.972534 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 30 13:22:45.002329 systemd[1]: Switching root. Oct 30 13:22:45.047346 systemd-journald[315]: Journal stopped Oct 30 13:22:46.416765 systemd-journald[315]: Received SIGTERM from PID 1 (systemd). 
Oct 30 13:22:46.416847 kernel: SELinux: policy capability network_peer_controls=1 Oct 30 13:22:46.416952 kernel: SELinux: policy capability open_perms=1 Oct 30 13:22:46.416965 kernel: SELinux: policy capability extended_socket_class=1 Oct 30 13:22:46.416977 kernel: SELinux: policy capability always_check_network=0 Oct 30 13:22:46.416989 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 30 13:22:46.417002 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 30 13:22:46.417015 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 30 13:22:46.417043 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 30 13:22:46.417056 kernel: SELinux: policy capability userspace_initial_context=0 Oct 30 13:22:46.417068 kernel: audit: type=1403 audit(1761830565.497:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 30 13:22:46.417082 systemd[1]: Successfully loaded SELinux policy in 72.060ms. Oct 30 13:22:46.417102 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.347ms. Oct 30 13:22:46.417134 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 30 13:22:46.417149 systemd[1]: Detected virtualization kvm. Oct 30 13:22:46.417171 systemd[1]: Detected architecture x86-64. Oct 30 13:22:46.417190 systemd[1]: Detected first boot. Oct 30 13:22:46.417203 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 30 13:22:46.417217 zram_generator::config[1163]: No configuration found. 
Oct 30 13:22:46.417242 kernel: Guest personality initialized and is inactive Oct 30 13:22:46.417254 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 30 13:22:46.417266 kernel: Initialized host personality Oct 30 13:22:46.417285 kernel: NET: Registered PF_VSOCK protocol family Oct 30 13:22:46.417298 systemd[1]: Populated /etc with preset unit settings. Oct 30 13:22:46.417312 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 30 13:22:46.417325 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 30 13:22:46.417338 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 30 13:22:46.417352 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 30 13:22:46.417365 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 30 13:22:46.417384 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 30 13:22:46.417397 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 30 13:22:46.417410 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 30 13:22:46.417428 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 30 13:22:46.417445 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 30 13:22:46.417458 systemd[1]: Created slice user.slice - User and Session Slice. Oct 30 13:22:46.417472 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 30 13:22:46.417492 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 30 13:22:46.417505 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 30 13:22:46.417519 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Oct 30 13:22:46.417534 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 30 13:22:46.417548 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 30 13:22:46.417561 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 30 13:22:46.417589 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 30 13:22:46.417603 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 30 13:22:46.417617 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 30 13:22:46.417630 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 30 13:22:46.417646 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 30 13:22:46.417660 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 30 13:22:46.417673 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 30 13:22:46.417694 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 30 13:22:46.417708 systemd[1]: Reached target slices.target - Slice Units. Oct 30 13:22:46.417721 systemd[1]: Reached target swap.target - Swaps. Oct 30 13:22:46.417734 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 30 13:22:46.417747 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 30 13:22:46.417767 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 30 13:22:46.417780 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 30 13:22:46.417800 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 30 13:22:46.417813 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 30 13:22:46.417826 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Oct 30 13:22:46.417839 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 30 13:22:46.417851 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 30 13:22:46.417864 systemd[1]: Mounting media.mount - External Media Directory... Oct 30 13:22:46.417877 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 13:22:46.417898 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 30 13:22:46.417911 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 30 13:22:46.417924 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 30 13:22:46.417937 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 30 13:22:46.417950 systemd[1]: Reached target machines.target - Containers. Oct 30 13:22:46.417963 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 30 13:22:46.417976 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 13:22:46.417995 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 30 13:22:46.418009 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 30 13:22:46.418024 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 30 13:22:46.418037 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 30 13:22:46.418050 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 30 13:22:46.418064 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 30 13:22:46.418077 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Oct 30 13:22:46.418097 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 30 13:22:46.418110 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 30 13:22:46.418139 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 30 13:22:46.418151 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 30 13:22:46.418164 systemd[1]: Stopped systemd-fsck-usr.service. Oct 30 13:22:46.418178 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 13:22:46.418199 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 30 13:22:46.418213 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 30 13:22:46.418226 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 30 13:22:46.418238 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 30 13:22:46.418251 kernel: ACPI: bus type drm_connector registered Oct 30 13:22:46.418271 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 30 13:22:46.418284 kernel: fuse: init (API version 7.41) Oct 30 13:22:46.418299 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 30 13:22:46.418312 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 13:22:46.418325 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 30 13:22:46.418339 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Oct 30 13:22:46.418378 systemd-journald[1245]: Collecting audit messages is disabled. Oct 30 13:22:46.418406 systemd[1]: Mounted media.mount - External Media Directory. Oct 30 13:22:46.418421 systemd-journald[1245]: Journal started Oct 30 13:22:46.418443 systemd-journald[1245]: Runtime Journal (/run/log/journal/f37557f967354be09858956e319d7b2b) is 6M, max 48.2M, 42.2M free. Oct 30 13:22:46.074294 systemd[1]: Queued start job for default target multi-user.target. Oct 30 13:22:46.090542 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 30 13:22:46.091155 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 30 13:22:46.422503 systemd[1]: Started systemd-journald.service - Journal Service. Oct 30 13:22:46.425391 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 30 13:22:46.427297 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 30 13:22:46.429333 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 30 13:22:46.431310 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 30 13:22:46.433677 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 30 13:22:46.436044 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 30 13:22:46.436325 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 30 13:22:46.438538 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 30 13:22:46.438777 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 30 13:22:46.440945 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 30 13:22:46.441215 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 30 13:22:46.443274 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 30 13:22:46.443498 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Oct 30 13:22:46.445902 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 30 13:22:46.446176 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 30 13:22:46.448250 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 30 13:22:46.448477 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 30 13:22:46.450566 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 30 13:22:46.452878 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 30 13:22:46.456035 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 30 13:22:46.458600 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 30 13:22:46.476771 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 30 13:22:46.478882 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 30 13:22:46.482271 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 30 13:22:46.485157 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 30 13:22:46.487004 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 30 13:22:46.487100 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 30 13:22:46.489749 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 30 13:22:46.492107 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 13:22:46.495840 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 30 13:22:46.500901 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Oct 30 13:22:46.503044 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 30 13:22:46.505460 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 30 13:22:46.509262 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 30 13:22:46.510541 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 30 13:22:46.514238 systemd-journald[1245]: Time spent on flushing to /var/log/journal/f37557f967354be09858956e319d7b2b is 30.416ms for 966 entries. Oct 30 13:22:46.514238 systemd-journald[1245]: System Journal (/var/log/journal/f37557f967354be09858956e319d7b2b) is 8M, max 163.5M, 155.5M free. Oct 30 13:22:46.564366 systemd-journald[1245]: Received client request to flush runtime journal. Oct 30 13:22:46.564426 kernel: loop1: detected capacity change from 0 to 224512 Oct 30 13:22:46.514370 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 30 13:22:46.519378 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 30 13:22:46.522662 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 30 13:22:46.525673 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 30 13:22:46.528748 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 30 13:22:46.535070 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 30 13:22:46.538645 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 30 13:22:46.543398 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 30 13:22:46.548368 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Oct 30 13:22:46.553385 systemd-tmpfiles[1283]: ACLs are not supported, ignoring. Oct 30 13:22:46.553400 systemd-tmpfiles[1283]: ACLs are not supported, ignoring. Oct 30 13:22:46.557799 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 30 13:22:46.562316 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 30 13:22:46.575241 kernel: loop2: detected capacity change from 0 to 111544 Oct 30 13:22:46.574339 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 30 13:22:46.586666 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 30 13:22:46.601108 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 30 13:22:46.605261 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 30 13:22:46.610219 kernel: loop3: detected capacity change from 0 to 128912 Oct 30 13:22:46.610524 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 30 13:22:46.626271 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 30 13:22:46.637173 kernel: loop4: detected capacity change from 0 to 224512 Oct 30 13:22:46.638471 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. Oct 30 13:22:46.638491 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. Oct 30 13:22:46.644725 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 30 13:22:46.651194 kernel: loop5: detected capacity change from 0 to 111544 Oct 30 13:22:46.660174 kernel: loop6: detected capacity change from 0 to 128912 Oct 30 13:22:46.669041 (sd-merge)[1307]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 30 13:22:46.672480 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 30 13:22:46.677937 (sd-merge)[1307]: Merged extensions into '/usr'. 
Oct 30 13:22:46.683450 systemd[1]: Reload requested from client PID 1282 ('systemd-sysext') (unit systemd-sysext.service)... Oct 30 13:22:46.683472 systemd[1]: Reloading... Oct 30 13:22:46.757758 systemd-resolved[1302]: Positive Trust Anchors: Oct 30 13:22:46.758181 systemd-resolved[1302]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 30 13:22:46.758233 systemd-resolved[1302]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 30 13:22:46.758303 systemd-resolved[1302]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 30 13:22:46.764778 systemd-resolved[1302]: Defaulting to hostname 'linux'. Oct 30 13:22:46.766170 zram_generator::config[1345]: No configuration found. Oct 30 13:22:46.973413 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 30 13:22:46.973987 systemd[1]: Reloading finished in 290 ms. Oct 30 13:22:47.008941 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 30 13:22:47.011281 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 30 13:22:47.016333 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 30 13:22:47.040688 systemd[1]: Starting ensure-sysext.service... Oct 30 13:22:47.042996 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Oct 30 13:22:47.065726 systemd[1]: Reload requested from client PID 1378 ('systemctl') (unit ensure-sysext.service)...
Oct 30 13:22:47.065871 systemd[1]: Reloading...
Oct 30 13:22:47.075600 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 30 13:22:47.075965 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 30 13:22:47.076366 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 30 13:22:47.076745 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 30 13:22:47.077779 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 30 13:22:47.078160 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
Oct 30 13:22:47.078305 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
Oct 30 13:22:47.084557 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot.
Oct 30 13:22:47.084640 systemd-tmpfiles[1379]: Skipping /boot
Oct 30 13:22:47.096601 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot.
Oct 30 13:22:47.096613 systemd-tmpfiles[1379]: Skipping /boot
Oct 30 13:22:47.133160 zram_generator::config[1406]: No configuration found.
Oct 30 13:22:47.319488 systemd[1]: Reloading finished in 253 ms.
Oct 30 13:22:47.342071 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 30 13:22:47.370304 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 30 13:22:47.382285 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 30 13:22:47.385205 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 30 13:22:47.407266 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 30 13:22:47.412363 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 30 13:22:47.416088 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 30 13:22:47.422366 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 30 13:22:47.426870 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 13:22:47.427034 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 30 13:22:47.430423 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 30 13:22:47.434426 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 30 13:22:47.448336 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 30 13:22:47.451286 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 30 13:22:47.451404 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 30 13:22:47.451503 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 13:22:47.452917 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 30 13:22:47.453178 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 30 13:22:47.458966 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 30 13:22:47.459442 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 30 13:22:47.471302 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 30 13:22:47.475031 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 30 13:22:47.475313 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 30 13:22:47.484792 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 13:22:47.485012 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 30 13:22:47.488243 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 30 13:22:47.491379 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 30 13:22:47.495470 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 30 13:22:47.496767 systemd-udevd[1453]: Using default interface naming scheme 'v257'.
Oct 30 13:22:47.497391 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 30 13:22:47.497559 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 30 13:22:47.497708 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 13:22:47.500751 augenrules[1481]: No rules
Oct 30 13:22:47.502484 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 30 13:22:47.502796 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 30 13:22:47.506192 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 30 13:22:47.509056 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 30 13:22:47.509408 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 30 13:22:47.512459 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 30 13:22:47.512779 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 30 13:22:47.515176 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 30 13:22:47.515554 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 30 13:22:47.525299 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 30 13:22:47.532627 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 13:22:47.532851 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 30 13:22:47.534305 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 30 13:22:47.536113 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 30 13:22:47.536168 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 30 13:22:47.536223 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 30 13:22:47.536287 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 30 13:22:47.536331 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 30 13:22:47.536352 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 13:22:47.536842 systemd[1]: Finished ensure-sysext.service.
Oct 30 13:22:47.541344 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 30 13:22:47.548374 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 30 13:22:47.555344 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 30 13:22:47.557782 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 30 13:22:47.558049 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 30 13:22:47.639702 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 30 13:22:47.645364 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 30 13:22:47.645473 systemd[1]: Reached target time-set.target - System Time Set.
Oct 30 13:22:47.686226 systemd-networkd[1505]: lo: Link UP
Oct 30 13:22:47.687297 kernel: mousedev: PS/2 mouse device common for all mice
Oct 30 13:22:47.686239 systemd-networkd[1505]: lo: Gained carrier
Oct 30 13:22:47.695786 systemd-networkd[1505]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 30 13:22:47.695800 systemd-networkd[1505]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 30 13:22:47.697064 systemd-networkd[1505]: eth0: Link UP
Oct 30 13:22:47.697629 systemd-networkd[1505]: eth0: Gained carrier
Oct 30 13:22:47.697643 systemd-networkd[1505]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 30 13:22:47.705190 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 30 13:22:47.711364 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 30 13:22:47.721627 systemd-networkd[1505]: eth0: DHCPv4 address 10.0.0.72/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 30 13:22:47.727510 systemd-timesyncd[1507]: Network configuration changed, trying to establish connection.
Oct 30 13:22:47.728313 systemd-timesyncd[1507]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 30 13:22:47.728370 systemd-timesyncd[1507]: Initial clock synchronization to Thu 2025-10-30 13:22:47.851584 UTC.
Oct 30 13:22:47.744909 systemd[1]: Reached target network.target - Network.
Oct 30 13:22:47.749316 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 30 13:22:47.752148 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Oct 30 13:22:47.757479 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 30 13:22:47.757850 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 30 13:22:47.758304 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Oct 30 13:22:47.765341 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 30 13:22:47.783942 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 30 13:22:47.808144 kernel: ACPI: button: Power Button [PWRF]
Oct 30 13:22:47.863544 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Oct 30 13:22:47.903474 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 13:22:48.077347 kernel: kvm_amd: TSC scaling supported
Oct 30 13:22:48.077440 kernel: kvm_amd: Nested Virtualization enabled
Oct 30 13:22:48.077490 kernel: kvm_amd: Nested Paging enabled
Oct 30 13:22:48.077509 kernel: kvm_amd: LBR virtualization supported
Oct 30 13:22:48.078504 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 30 13:22:48.079613 kernel: kvm_amd: Virtual GIF supported
Oct 30 13:22:48.121159 kernel: EDAC MC: Ver: 3.0.0
Oct 30 13:22:48.149631 ldconfig[1450]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 30 13:22:48.157485 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 30 13:22:48.210669 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 13:22:48.217179 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 30 13:22:48.256484 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 30 13:22:48.258752 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 30 13:22:48.260664 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 30 13:22:48.262724 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 30 13:22:48.264797 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Oct 30 13:22:48.266879 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 30 13:22:48.269025 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 30 13:22:48.271142 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 30 13:22:48.273255 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 30 13:22:48.273294 systemd[1]: Reached target paths.target - Path Units.
Oct 30 13:22:48.274814 systemd[1]: Reached target timers.target - Timer Units.
Oct 30 13:22:48.277496 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 30 13:22:48.281075 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 30 13:22:48.284858 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Oct 30 13:22:48.287041 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Oct 30 13:22:48.289047 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Oct 30 13:22:48.295969 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 30 13:22:48.298193 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Oct 30 13:22:48.300900 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 30 13:22:48.303394 systemd[1]: Reached target sockets.target - Socket Units.
Oct 30 13:22:48.304948 systemd[1]: Reached target basic.target - Basic System.
Oct 30 13:22:48.306489 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 30 13:22:48.306524 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 30 13:22:48.307655 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 30 13:22:48.310544 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 30 13:22:48.313023 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 30 13:22:48.316389 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 30 13:22:48.318308 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 30 13:22:48.320067 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 30 13:22:48.326291 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Oct 30 13:22:48.330266 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 30 13:22:48.333712 jq[1569]: false
Oct 30 13:22:48.333116 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 30 13:22:48.336328 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 30 13:22:48.341250 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 30 13:22:48.342547 extend-filesystems[1570]: Found /dev/vda6
Oct 30 13:22:48.345832 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Refreshing passwd entry cache
Oct 30 13:22:48.343881 oslogin_cache_refresh[1571]: Refreshing passwd entry cache
Oct 30 13:22:48.349946 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 30 13:22:48.351757 extend-filesystems[1570]: Found /dev/vda9
Oct 30 13:22:48.351633 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 30 13:22:48.352119 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 30 13:22:48.354271 extend-filesystems[1570]: Checking size of /dev/vda9
Oct 30 13:22:48.355347 systemd[1]: Starting update-engine.service - Update Engine...
Oct 30 13:22:48.358196 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 30 13:22:48.358172 oslogin_cache_refresh[1571]: Failure getting users, quitting
Oct 30 13:22:48.360634 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Failure getting users, quitting
Oct 30 13:22:48.360634 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 30 13:22:48.360634 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Refreshing group entry cache
Oct 30 13:22:48.358201 oslogin_cache_refresh[1571]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 30 13:22:48.360044 oslogin_cache_refresh[1571]: Refreshing group entry cache
Oct 30 13:22:48.367071 extend-filesystems[1570]: Resized partition /dev/vda9
Oct 30 13:22:48.368904 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Failure getting groups, quitting
Oct 30 13:22:48.368904 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 30 13:22:48.367707 oslogin_cache_refresh[1571]: Failure getting groups, quitting
Oct 30 13:22:48.367722 oslogin_cache_refresh[1571]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 30 13:22:48.370380 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 30 13:22:48.373167 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 30 13:22:48.373437 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 30 13:22:48.373830 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Oct 30 13:22:48.374123 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Oct 30 13:22:48.375447 extend-filesystems[1597]: resize2fs 1.47.3 (8-Jul-2025)
Oct 30 13:22:48.380907 jq[1591]: true
Oct 30 13:22:48.377821 systemd[1]: motdgen.service: Deactivated successfully.
Oct 30 13:22:48.378078 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 30 13:22:48.385145 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Oct 30 13:22:48.384579 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 30 13:22:48.384886 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 30 13:22:48.405602 update_engine[1587]: I20251030 13:22:48.405516 1587 main.cc:92] Flatcar Update Engine starting
Oct 30 13:22:48.408146 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Oct 30 13:22:48.433235 jq[1601]: true
Oct 30 13:22:48.438991 extend-filesystems[1597]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 30 13:22:48.438991 extend-filesystems[1597]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 30 13:22:48.438991 extend-filesystems[1597]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Oct 30 13:22:48.438610 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 30 13:22:48.448434 extend-filesystems[1570]: Resized filesystem in /dev/vda9
Oct 30 13:22:48.439196 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 30 13:22:48.457842 tar[1599]: linux-amd64/LICENSE
Oct 30 13:22:48.457337 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 30 13:22:48.456928 dbus-daemon[1567]: [system] SELinux support is enabled
Oct 30 13:22:48.461642 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 30 13:22:48.461674 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 30 13:22:48.462108 tar[1599]: linux-amd64/helm
Oct 30 13:22:48.465349 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 30 13:22:48.465370 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 30 13:22:48.468525 systemd[1]: Started update-engine.service - Update Engine.
Oct 30 13:22:48.483371 update_engine[1587]: I20251030 13:22:48.483243 1587 update_check_scheduler.cc:74] Next update check in 3m4s
Oct 30 13:22:48.492435 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 30 13:22:48.532292 systemd-logind[1582]: Watching system buttons on /dev/input/event2 (Power Button)
Oct 30 13:22:48.532324 systemd-logind[1582]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 30 13:22:48.533404 systemd-logind[1582]: New seat seat0.
Oct 30 13:22:48.535696 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 30 13:22:48.556406 bash[1637]: Updated "/home/core/.ssh/authorized_keys"
Oct 30 13:22:48.560107 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 30 13:22:48.563837 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 30 13:22:48.763084 locksmithd[1627]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 30 13:22:48.860618 containerd[1604]: time="2025-10-30T13:22:48Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Oct 30 13:22:48.864026 containerd[1604]: time="2025-10-30T13:22:48.863987455Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Oct 30 13:22:48.879222 containerd[1604]: time="2025-10-30T13:22:48.879176781Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.834µs"
Oct 30 13:22:48.879434 containerd[1604]: time="2025-10-30T13:22:48.879395441Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Oct 30 13:22:48.879541 containerd[1604]: time="2025-10-30T13:22:48.879519238Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Oct 30 13:22:48.879864 containerd[1604]: time="2025-10-30T13:22:48.879842654Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Oct 30 13:22:48.879956 containerd[1604]: time="2025-10-30T13:22:48.879938890Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Oct 30 13:22:48.880036 containerd[1604]: time="2025-10-30T13:22:48.880017648Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 30 13:22:48.880194 containerd[1604]: time="2025-10-30T13:22:48.880169170Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 30 13:22:48.880521 containerd[1604]: time="2025-10-30T13:22:48.880502258Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 30 13:22:48.880822 containerd[1604]: time="2025-10-30T13:22:48.880795407Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 30 13:22:48.880956 containerd[1604]: time="2025-10-30T13:22:48.880939791Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 30 13:22:48.882484 containerd[1604]: time="2025-10-30T13:22:48.882465029Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 30 13:22:48.882539 containerd[1604]: time="2025-10-30T13:22:48.882526585Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Oct 30 13:22:48.882715 containerd[1604]: time="2025-10-30T13:22:48.882697521Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Oct 30 13:22:48.885471 containerd[1604]: time="2025-10-30T13:22:48.885449952Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 30 13:22:48.885579 containerd[1604]: time="2025-10-30T13:22:48.885562594Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 30 13:22:48.885629 containerd[1604]: time="2025-10-30T13:22:48.885616840Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Oct 30 13:22:48.885699 containerd[1604]: time="2025-10-30T13:22:48.885685927Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Oct 30 13:22:48.885921 containerd[1604]: time="2025-10-30T13:22:48.885905707Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Oct 30 13:22:48.886042 containerd[1604]: time="2025-10-30T13:22:48.886026840Z" level=info msg="metadata content store policy set" policy=shared
Oct 30 13:22:48.892785 containerd[1604]: time="2025-10-30T13:22:48.892764914Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Oct 30 13:22:48.892920 containerd[1604]: time="2025-10-30T13:22:48.892901190Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Oct 30 13:22:48.892994 containerd[1604]: time="2025-10-30T13:22:48.892980878Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Oct 30 13:22:48.893053 containerd[1604]: time="2025-10-30T13:22:48.893040839Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Oct 30 13:22:48.893106 containerd[1604]: time="2025-10-30T13:22:48.893094337Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Oct 30 13:22:48.893183 containerd[1604]: time="2025-10-30T13:22:48.893169613Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Oct 30 13:22:48.893237 containerd[1604]: time="2025-10-30T13:22:48.893225495Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Oct 30 13:22:48.893309 containerd[1604]: time="2025-10-30T13:22:48.893294804Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Oct 30 13:22:48.893393 containerd[1604]: time="2025-10-30T13:22:48.893377985Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Oct 30 13:22:48.893448 containerd[1604]: time="2025-10-30T13:22:48.893436209Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Oct 30 13:22:48.893496 containerd[1604]: time="2025-10-30T13:22:48.893485003Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Oct 30 13:22:48.893546 containerd[1604]: time="2025-10-30T13:22:48.893535322Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Oct 30 13:22:48.893707 containerd[1604]: time="2025-10-30T13:22:48.893689296Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Oct 30 13:22:48.893778 containerd[1604]: time="2025-10-30T13:22:48.893765704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Oct 30 13:22:48.893843 containerd[1604]: time="2025-10-30T13:22:48.893830087Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Oct 30 13:22:48.893897 containerd[1604]: time="2025-10-30T13:22:48.893885826Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Oct 30 13:22:48.893968 containerd[1604]: time="2025-10-30T13:22:48.893953591Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Oct 30 13:22:48.894025 containerd[1604]: time="2025-10-30T13:22:48.894013238Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Oct 30 13:22:48.894085 containerd[1604]: time="2025-10-30T13:22:48.894073420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Oct 30 13:22:48.894862 containerd[1604]: time="2025-10-30T13:22:48.894157066Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Oct 30 13:22:48.894862 containerd[1604]: time="2025-10-30T13:22:48.894171695Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Oct 30 13:22:48.894862 containerd[1604]: time="2025-10-30T13:22:48.894182236Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Oct 30 13:22:48.894862 containerd[1604]: time="2025-10-30T13:22:48.894192110Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Oct 30 13:22:48.894862 containerd[1604]: time="2025-10-30T13:22:48.894259945Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Oct 30 13:22:48.894862 containerd[1604]: time="2025-10-30T13:22:48.894275160Z" level=info msg="Start snapshots syncer"
Oct 30 13:22:48.894862 containerd[1604]: time="2025-10-30T13:22:48.894302682Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Oct 30 13:22:48.895023 containerd[1604]: time="2025-10-30T13:22:48.894532568Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Oct 30 13:22:48.895023 containerd[1604]: time="2025-10-30T13:22:48.894581170Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Oct 30 13:22:48.897443 containerd[1604]: time="2025-10-30T13:22:48.897419428Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Oct 30 13:22:48.897625 containerd[1604]: time="2025-10-30T13:22:48.897606710Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Oct 30 13:22:48.897692 containerd[1604]: time="2025-10-30T13:22:48.897679603Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Oct 30 13:22:48.897743 containerd[1604]: time="2025-10-30T13:22:48.897731719Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Oct 30 13:22:48.897791 containerd[1604]: time="2025-10-30T13:22:48.897779483Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Oct 30 13:22:48.897853 containerd[1604]: time="2025-10-30T13:22:48.897839889Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Oct 30 13:22:48.897905 containerd[1604]: time="2025-10-30T13:22:48.897893457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Oct 30 13:22:48.897953 containerd[1604]: time="2025-10-30T13:22:48.897942464Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Oct 30 13:22:48.898034 containerd[1604]: time="2025-10-30T13:22:48.898020062Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Oct 30 13:22:48.898090 containerd[1604]: time="2025-10-30T13:22:48.898078993Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Oct 30 13:22:48.898174 containerd[1604]: time="2025-10-30T13:22:48.898160175Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Oct 30 13:22:48.898321 containerd[1604]: time="2025-10-30T13:22:48.898304024Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Oct 30 13:22:48.899349 containerd[1604]: time="2025-10-30T13:22:48.898356452Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Oct 30 13:22:48.899349 containerd[1604]: time="2025-10-30T13:22:48.898375311Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Oct 30 13:22:48.899349 containerd[1604]: time="2025-10-30T13:22:48.898385317Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Oct 30 13:22:48.899349 containerd[1604]: time="2025-10-30T13:22:48.898394504Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Oct 30 13:22:48.899349 containerd[1604]: time="2025-10-30T13:22:48.898406357Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Oct 30 13:22:48.899349 containerd[1604]: time="2025-10-30T13:22:48.898417443Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Oct 30 13:22:48.899349 containerd[1604]: time="2025-10-30T13:22:48.898436160Z" level=info msg="runtime interface created"
Oct 30 13:22:48.899349 containerd[1604]: time="2025-10-30T13:22:48.898443015Z" level=info msg="created NRI interface"
Oct 30 13:22:48.899349 containerd[1604]: time="2025-10-30T13:22:48.898452273Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Oct 30 13:22:48.899349 containerd[1604]: time="2025-10-30T13:22:48.898476837Z" level=info msg="Connect containerd service"
Oct 30 13:22:48.899349 containerd[1604]: time="2025-10-30T13:22:48.898511779Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 30 13:22:48.899920
containerd[1604]: time="2025-10-30T13:22:48.899897289Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 30 13:22:48.990445 systemd-networkd[1505]: eth0: Gained IPv6LL
Oct 30 13:22:48.996359 sshd_keygen[1596]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 30 13:22:49.012408 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 30 13:22:49.016612 systemd[1]: Reached target network-online.target - Network is Online.
Oct 30 13:22:49.023358 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Oct 30 13:22:49.032572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 30 13:22:49.063992 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 30 13:22:49.066883 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 30 13:22:49.071373 containerd[1604]: time="2025-10-30T13:22:49.071326339Z" level=info msg="Start subscribing containerd event"
Oct 30 13:22:49.071429 containerd[1604]: time="2025-10-30T13:22:49.071395676Z" level=info msg="Start recovering state"
Oct 30 13:22:49.071527 containerd[1604]: time="2025-10-30T13:22:49.071506831Z" level=info msg="Start event monitor"
Oct 30 13:22:49.071527 containerd[1604]: time="2025-10-30T13:22:49.071526851Z" level=info msg="Start cni network conf syncer for default"
Oct 30 13:22:49.071578 containerd[1604]: time="2025-10-30T13:22:49.071534904Z" level=info msg="Start streaming server"
Oct 30 13:22:49.071578 containerd[1604]: time="2025-10-30T13:22:49.071550021Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Oct 30 13:22:49.071578 containerd[1604]: time="2025-10-30T13:22:49.071558487Z" level=info msg="runtime interface starting up..."
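The `level=error` line above is containerd's CRI plugin reporting that no CNI network config exists yet in /etc/cni/net.d (expected on first boot, before a CNI plugin is installed). The sketch below writes the kind of minimal bridge conflist that would clear that error; the network name, bridge name, and subnet are illustrative assumptions, and it deliberately targets a demo directory rather than the real /etc/cni/net.d:

```shell
# Hypothetical example: all values below are illustrative, not taken from this log.
# A real install would place the file in /etc/cni/net.d.
conf_dir="${CNI_CONF_DIR:-/tmp/cni-demo/net.d}"
mkdir -p "$conf_dir"
cat > "$conf_dir/10-bridge.conflist" <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "demo-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[ { "subnet": "10.85.0.0/16" } ]]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
echo "wrote $conf_dir/10-bridge.conflist"
```

containerd's cni conf syncer (started a few lines below) picks such files up without a daemon restart.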
Oct 30 13:22:49.071578 containerd[1604]: time="2025-10-30T13:22:49.071564634Z" level=info msg="starting plugins..."
Oct 30 13:22:49.071578 containerd[1604]: time="2025-10-30T13:22:49.071576873Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Oct 30 13:22:49.077220 containerd[1604]: time="2025-10-30T13:22:49.076745964Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 30 13:22:49.077400 containerd[1604]: time="2025-10-30T13:22:49.077374808Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 30 13:22:49.080239 containerd[1604]: time="2025-10-30T13:22:49.080213818Z" level=info msg="containerd successfully booted in 0.220342s"
Oct 30 13:22:49.086402 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 30 13:22:49.088545 systemd[1]: Started containerd.service - containerd container runtime.
Oct 30 13:22:49.100197 systemd[1]: issuegen.service: Deactivated successfully.
Oct 30 13:22:49.100485 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 30 13:22:49.106264 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 30 13:22:49.115279 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 30 13:22:49.115618 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Oct 30 13:22:49.119744 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 30 13:22:49.123581 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 30 13:22:49.127183 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 30 13:22:49.134450 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 30 13:22:49.138406 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 30 13:22:49.141358 systemd[1]: Reached target getty.target - Login Prompts.
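The `msg=serving...` entries above show containerd listening on its ttrpc and grpc unix sockets before systemd marks the unit started. A hedged sketch of how one might verify that the grpc socket is actually a live unix socket (the helper name is ours; `ctr version` is only attempted when the CLI is present):

```shell
# Report whether a path is a unix socket, as a quick liveness hint for containerd.
check_sock() {
  if [ -S "$1" ]; then echo listening; else echo absent; fi
}

sock=/run/containerd/containerd.sock
check_sock "$sock"
# If the socket is up and the ctr CLI is installed, a round-trip confirms the daemon:
if [ -S "$sock" ] && command -v ctr >/dev/null 2>&1; then
  ctr --address "$sock" version
fi
```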
Oct 30 13:22:49.199320 tar[1599]: linux-amd64/README.md
Oct 30 13:22:49.230628 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 30 13:22:50.136840 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 30 13:22:50.140002 systemd[1]: Started sshd@0-10.0.0.72:22-10.0.0.1:52464.service - OpenSSH per-connection server daemon (10.0.0.1:52464).
Oct 30 13:22:50.223080 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 52464 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo
Oct 30 13:22:50.224903 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 13:22:50.231748 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 30 13:22:50.234727 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 30 13:22:50.242786 systemd-logind[1582]: New session 1 of user core.
Oct 30 13:22:50.414739 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 30 13:22:50.420588 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 30 13:22:50.440941 (systemd)[1709]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 30 13:22:50.444535 systemd-logind[1582]: New session c1 of user core.
Oct 30 13:22:50.449670 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 13:22:50.452066 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 30 13:22:50.469490 (kubelet)[1716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 30 13:22:50.589222 systemd[1709]: Queued start job for default target default.target.
Oct 30 13:22:50.602341 systemd[1709]: Created slice app.slice - User Application Slice.
Oct 30 13:22:50.602382 systemd[1709]: Reached target paths.target - Paths.
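The `Accepted publickey ... RSA SHA256:c3t/...` entries in this session are sshd logging the SHA256 fingerprint of the client key it accepted. As an illustration of that format only (the key generated here is a throwaway, not any key from this log; a real audit would run `ssh-keygen -lf` against the client's or host's actual `.pub` files):

```shell
# Generate a disposable key and print its fingerprint in the same
# "<bits> SHA256:<base64> <comment> (<type>)" shape sshd logs above.
key=/tmp/fp-demo-key
rm -f "$key" "$key.pub"
ssh-keygen -q -t ed25519 -N '' -f "$key"
ssh-keygen -lf "$key.pub"
```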
Oct 30 13:22:50.602448 systemd[1709]: Reached target timers.target - Timers.
Oct 30 13:22:50.604594 systemd[1709]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 30 13:22:50.618105 systemd[1709]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 30 13:22:50.618285 systemd[1709]: Reached target sockets.target - Sockets.
Oct 30 13:22:50.618331 systemd[1709]: Reached target basic.target - Basic System.
Oct 30 13:22:50.618373 systemd[1709]: Reached target default.target - Main User Target.
Oct 30 13:22:50.618415 systemd[1709]: Startup finished in 164ms.
Oct 30 13:22:50.618844 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 30 13:22:50.632279 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 30 13:22:50.634226 systemd[1]: Startup finished in 2.542s (kernel) + 7.534s (initrd) + 5.208s (userspace) = 15.286s.
Oct 30 13:22:50.654178 systemd[1]: Started sshd@1-10.0.0.72:22-10.0.0.1:52474.service - OpenSSH per-connection server daemon (10.0.0.1:52474).
Oct 30 13:22:50.712966 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 52474 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo
Oct 30 13:22:50.714367 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 13:22:50.719119 systemd-logind[1582]: New session 2 of user core.
Oct 30 13:22:50.725267 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 30 13:22:50.739850 sshd[1734]: Connection closed by 10.0.0.1 port 52474
Oct 30 13:22:50.741281 sshd-session[1731]: pam_unix(sshd:session): session closed for user core
Oct 30 13:22:50.749805 systemd[1]: sshd@1-10.0.0.72:22-10.0.0.1:52474.service: Deactivated successfully.
Oct 30 13:22:50.751957 systemd[1]: session-2.scope: Deactivated successfully.
Oct 30 13:22:50.752709 systemd-logind[1582]: Session 2 logged out. Waiting for processes to exit.
Oct 30 13:22:50.755671 systemd[1]: Started sshd@2-10.0.0.72:22-10.0.0.1:52488.service - OpenSSH per-connection server daemon (10.0.0.1:52488).
Oct 30 13:22:50.756409 systemd-logind[1582]: Removed session 2.
Oct 30 13:22:50.810142 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 52488 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo
Oct 30 13:22:50.811943 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 13:22:50.816692 systemd-logind[1582]: New session 3 of user core.
Oct 30 13:22:50.827271 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 30 13:22:50.837552 sshd[1744]: Connection closed by 10.0.0.1 port 52488
Oct 30 13:22:50.838048 sshd-session[1740]: pam_unix(sshd:session): session closed for user core
Oct 30 13:22:50.849715 systemd[1]: sshd@2-10.0.0.72:22-10.0.0.1:52488.service: Deactivated successfully.
Oct 30 13:22:50.851978 systemd[1]: session-3.scope: Deactivated successfully.
Oct 30 13:22:50.852747 systemd-logind[1582]: Session 3 logged out. Waiting for processes to exit.
Oct 30 13:22:50.855896 systemd[1]: Started sshd@3-10.0.0.72:22-10.0.0.1:52502.service - OpenSSH per-connection server daemon (10.0.0.1:52502).
Oct 30 13:22:50.856563 systemd-logind[1582]: Removed session 3.
Oct 30 13:22:50.917695 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 52502 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo
Oct 30 13:22:50.919593 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 13:22:50.924056 systemd-logind[1582]: New session 4 of user core.
Oct 30 13:22:50.941253 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 30 13:22:50.957457 sshd[1753]: Connection closed by 10.0.0.1 port 52502
Oct 30 13:22:50.958855 sshd-session[1750]: pam_unix(sshd:session): session closed for user core
Oct 30 13:22:50.967001 kubelet[1716]: E1030 13:22:50.966958 1716 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 30 13:22:50.971831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 30 13:22:50.972029 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 30 13:22:50.972478 systemd[1]: kubelet.service: Consumed 1.673s CPU time, 265.3M memory peak.
Oct 30 13:22:50.972947 systemd[1]: sshd@3-10.0.0.72:22-10.0.0.1:52502.service: Deactivated successfully.
Oct 30 13:22:50.977140 systemd[1]: session-4.scope: Deactivated successfully.
Oct 30 13:22:50.978618 systemd-logind[1582]: Session 4 logged out. Waiting for processes to exit.
Oct 30 13:22:50.982722 systemd[1]: Started sshd@4-10.0.0.72:22-10.0.0.1:52516.service - OpenSSH per-connection server daemon (10.0.0.1:52516).
Oct 30 13:22:50.983500 systemd-logind[1582]: Removed session 4.
Oct 30 13:22:51.056022 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 52516 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo
Oct 30 13:22:51.057314 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 13:22:51.061741 systemd-logind[1582]: New session 5 of user core.
Oct 30 13:22:51.071243 systemd[1]: Started session-5.scope - Session 5 of User core.
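The kubelet's `run.go:72 "command failed"` exit here is benign at this stage of boot: /var/lib/kubelet/config.yaml does not exist until the node is bootstrapped (on kubeadm-managed nodes, `kubeadm init` or `kubeadm join` writes it), so the unit exits 1 and systemd records the failure. A minimal probe for that state, sketched with a helper name of our own choosing (the path argument exists only so the check can be exercised against arbitrary paths):

```shell
# Report whether the kubelet config file a node needs is in place yet.
kubelet_cfg_state() {
  if [ -f "${1:-/var/lib/kubelet/config.yaml}" ]; then
    echo present
  else
    echo missing   # expected until kubeadm (or other tooling) has bootstrapped the node
  fi
}

kubelet_cfg_state
```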
Oct 30 13:22:51.092928 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 30 13:22:51.093254 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 30 13:22:51.106964 sudo[1768]: pam_unix(sudo:session): session closed for user root
Oct 30 13:22:51.108893 sshd[1767]: Connection closed by 10.0.0.1 port 52516
Oct 30 13:22:51.109464 sshd-session[1762]: pam_unix(sshd:session): session closed for user core
Oct 30 13:22:51.122811 systemd[1]: sshd@4-10.0.0.72:22-10.0.0.1:52516.service: Deactivated successfully.
Oct 30 13:22:51.124977 systemd[1]: session-5.scope: Deactivated successfully.
Oct 30 13:22:51.125744 systemd-logind[1582]: Session 5 logged out. Waiting for processes to exit.
Oct 30 13:22:51.128800 systemd[1]: Started sshd@5-10.0.0.72:22-10.0.0.1:52522.service - OpenSSH per-connection server daemon (10.0.0.1:52522).
Oct 30 13:22:51.129588 systemd-logind[1582]: Removed session 5.
Oct 30 13:22:51.185611 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 52522 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo
Oct 30 13:22:51.186869 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 13:22:51.190908 systemd-logind[1582]: New session 6 of user core.
Oct 30 13:22:51.206247 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 30 13:22:51.220504 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 30 13:22:51.220814 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 30 13:22:51.227720 sudo[1779]: pam_unix(sudo:session): session closed for user root
Oct 30 13:22:51.235793 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 30 13:22:51.236108 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 30 13:22:51.246471 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 30 13:22:51.292971 augenrules[1801]: No rules
Oct 30 13:22:51.294686 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 30 13:22:51.294990 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 30 13:22:51.296152 sudo[1778]: pam_unix(sudo:session): session closed for user root
Oct 30 13:22:51.297871 sshd[1777]: Connection closed by 10.0.0.1 port 52522
Oct 30 13:22:51.298148 sshd-session[1774]: pam_unix(sshd:session): session closed for user core
Oct 30 13:22:51.307011 systemd[1]: sshd@5-10.0.0.72:22-10.0.0.1:52522.service: Deactivated successfully.
Oct 30 13:22:51.309020 systemd[1]: session-6.scope: Deactivated successfully.
Oct 30 13:22:51.309761 systemd-logind[1582]: Session 6 logged out. Waiting for processes to exit.
Oct 30 13:22:51.312751 systemd[1]: Started sshd@6-10.0.0.72:22-10.0.0.1:52524.service - OpenSSH per-connection server daemon (10.0.0.1:52524).
Oct 30 13:22:51.313391 systemd-logind[1582]: Removed session 6.
Oct 30 13:22:51.383268 sshd[1810]: Accepted publickey for core from 10.0.0.1 port 52524 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo
Oct 30 13:22:51.385140 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 13:22:51.390716 systemd-logind[1582]: New session 7 of user core.
Oct 30 13:22:51.400305 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 30 13:22:51.415060 sudo[1814]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 30 13:22:51.415396 sudo[1814]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 30 13:22:51.871077 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 30 13:22:51.884497 (dockerd)[1834]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 30 13:22:52.943782 dockerd[1834]: time="2025-10-30T13:22:52.943693621Z" level=info msg="Starting up"
Oct 30 13:22:52.944478 dockerd[1834]: time="2025-10-30T13:22:52.944448065Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Oct 30 13:22:52.961299 dockerd[1834]: time="2025-10-30T13:22:52.961238487Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Oct 30 13:22:53.534751 dockerd[1834]: time="2025-10-30T13:22:53.534681756Z" level=info msg="Loading containers: start."
Oct 30 13:22:53.548153 kernel: Initializing XFRM netlink socket
Oct 30 13:22:53.817483 systemd-networkd[1505]: docker0: Link UP
Oct 30 13:22:53.822275 dockerd[1834]: time="2025-10-30T13:22:53.822229135Z" level=info msg="Loading containers: done."
Oct 30 13:22:53.844224 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck666135633-merged.mount: Deactivated successfully.
Oct 30 13:22:53.845830 dockerd[1834]: time="2025-10-30T13:22:53.845777227Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 30 13:22:53.845906 dockerd[1834]: time="2025-10-30T13:22:53.845867788Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Oct 30 13:22:53.845961 dockerd[1834]: time="2025-10-30T13:22:53.845943480Z" level=info msg="Initializing buildkit"
Oct 30 13:22:53.876132 dockerd[1834]: time="2025-10-30T13:22:53.876071248Z" level=info msg="Completed buildkit initialization"
Oct 30 13:22:53.883335 dockerd[1834]: time="2025-10-30T13:22:53.883298073Z" level=info msg="Daemon has completed initialization"
Oct 30 13:22:53.883465 dockerd[1834]: time="2025-10-30T13:22:53.883353989Z" level=info msg="API listen on /run/docker.sock"
Oct 30 13:22:53.883886 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 30 13:22:54.714221 containerd[1604]: time="2025-10-30T13:22:54.714147142Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Oct 30 13:22:56.151043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3294880985.mount: Deactivated successfully.
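The PullImage entries that follow show containerd resolving each tag (e.g. `kube-apiserver:v1.32.9`) to a content-addressed repo digest (`kube-apiserver@sha256:...`), and logging both forms. As a small illustration of how the `repo@digest` form seen in these logs is composed, with a helper name of our own (note the naive tag-strip assumes the registry host has no `:port`, which holds for `registry.k8s.io`):

```shell
# Compose the "repo@digest" reference containerd logs, from a tag reference
# plus the resolved digest. ${1%%:*} strips everything from the first ':',
# so this illustrative helper mishandles registries with ports.
ref_with_digest() {
  repo="${1%%:*}"
  echo "${repo}@${2}"
}

ref_with_digest registry.k8s.io/kube-apiserver:v1.32.9 \
  sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97
# -> registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97
```

Pulling by digest rather than tag is what makes the later "Pulled image ... repo digest ..." lines reproducible byte-for-byte.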
Oct 30 13:22:57.593634 containerd[1604]: time="2025-10-30T13:22:57.593532193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:22:57.594360 containerd[1604]: time="2025-10-30T13:22:57.594059915Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Oct 30 13:22:57.595531 containerd[1604]: time="2025-10-30T13:22:57.595485218Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:22:57.599027 containerd[1604]: time="2025-10-30T13:22:57.598965986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:22:57.599891 containerd[1604]: time="2025-10-30T13:22:57.599843694Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.885625172s" Oct 30 13:22:57.599891 containerd[1604]: time="2025-10-30T13:22:57.599893240Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Oct 30 13:22:57.601647 containerd[1604]: time="2025-10-30T13:22:57.601614245Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 30 13:23:00.335877 containerd[1604]: time="2025-10-30T13:23:00.335749701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:00.336661 containerd[1604]: time="2025-10-30T13:23:00.336416076Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Oct 30 13:23:00.337829 containerd[1604]: time="2025-10-30T13:23:00.337789235Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:00.340791 containerd[1604]: time="2025-10-30T13:23:00.340762350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:00.342067 containerd[1604]: time="2025-10-30T13:23:00.342013861Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 2.74036008s" Oct 30 13:23:00.342154 containerd[1604]: time="2025-10-30T13:23:00.342078319Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Oct 30 13:23:00.343229 containerd[1604]: time="2025-10-30T13:23:00.343155607Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 30 13:23:01.223300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 30 13:23:01.227027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 13:23:01.792311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 30 13:23:01.801506 (kubelet)[2128]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 13:23:02.051364 containerd[1604]: time="2025-10-30T13:23:02.051223460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:02.052167 containerd[1604]: time="2025-10-30T13:23:02.052141355Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Oct 30 13:23:02.053393 containerd[1604]: time="2025-10-30T13:23:02.053363616Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:02.056885 containerd[1604]: time="2025-10-30T13:23:02.056823778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:02.057772 containerd[1604]: time="2025-10-30T13:23:02.057737251Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.714549261s" Oct 30 13:23:02.057772 containerd[1604]: time="2025-10-30T13:23:02.057771854Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Oct 30 13:23:02.059739 containerd[1604]: time="2025-10-30T13:23:02.059711520Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 30 13:23:02.077961 
kubelet[2128]: E1030 13:23:02.077884 2128 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 13:23:02.084764 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 13:23:02.084971 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 13:23:02.085504 systemd[1]: kubelet.service: Consumed 755ms CPU time, 110.9M memory peak. Oct 30 13:23:03.256421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount465144476.mount: Deactivated successfully. Oct 30 13:23:04.081491 containerd[1604]: time="2025-10-30T13:23:04.081412171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:04.082263 containerd[1604]: time="2025-10-30T13:23:04.082229879Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Oct 30 13:23:04.083421 containerd[1604]: time="2025-10-30T13:23:04.083383324Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:04.085583 containerd[1604]: time="2025-10-30T13:23:04.085530137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:04.086049 containerd[1604]: time="2025-10-30T13:23:04.086009466Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag 
\"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.026270542s" Oct 30 13:23:04.086087 containerd[1604]: time="2025-10-30T13:23:04.086050203Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Oct 30 13:23:04.086865 containerd[1604]: time="2025-10-30T13:23:04.086620813Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 30 13:23:04.640398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2860543926.mount: Deactivated successfully. Oct 30 13:23:05.895955 containerd[1604]: time="2025-10-30T13:23:05.895871547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:05.896503 containerd[1604]: time="2025-10-30T13:23:05.896470314Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Oct 30 13:23:05.897692 containerd[1604]: time="2025-10-30T13:23:05.897643631Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:05.900106 containerd[1604]: time="2025-10-30T13:23:05.900069265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:05.900996 containerd[1604]: time="2025-10-30T13:23:05.900909595Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.814252094s" Oct 30 13:23:05.900996 containerd[1604]: time="2025-10-30T13:23:05.900972233Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Oct 30 13:23:05.901524 containerd[1604]: time="2025-10-30T13:23:05.901502506Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 30 13:23:06.513101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1952167560.mount: Deactivated successfully. Oct 30 13:23:06.519359 containerd[1604]: time="2025-10-30T13:23:06.519322635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 13:23:06.521997 containerd[1604]: time="2025-10-30T13:23:06.521976760Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 30 13:23:06.523152 containerd[1604]: time="2025-10-30T13:23:06.523103449Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 13:23:06.525109 containerd[1604]: time="2025-10-30T13:23:06.525077946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 13:23:06.525695 containerd[1604]: time="2025-10-30T13:23:06.525668734Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 624.13936ms" Oct 30 13:23:06.525744 containerd[1604]: time="2025-10-30T13:23:06.525695387Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 30 13:23:06.526200 containerd[1604]: time="2025-10-30T13:23:06.526177561Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 30 13:23:07.130724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1301871841.mount: Deactivated successfully. Oct 30 13:23:09.112493 containerd[1604]: time="2025-10-30T13:23:09.112422844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:09.113248 containerd[1604]: time="2025-10-30T13:23:09.113164012Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Oct 30 13:23:09.114392 containerd[1604]: time="2025-10-30T13:23:09.114356490Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:09.116895 containerd[1604]: time="2025-10-30T13:23:09.116853809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:09.117776 containerd[1604]: time="2025-10-30T13:23:09.117746551Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size 
\"57680541\" in 2.591542893s" Oct 30 13:23:09.117824 containerd[1604]: time="2025-10-30T13:23:09.117777883Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Oct 30 13:23:11.269355 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 13:23:11.269531 systemd[1]: kubelet.service: Consumed 755ms CPU time, 110.9M memory peak. Oct 30 13:23:11.272013 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 13:23:11.300026 systemd[1]: Reload requested from client PID 2285 ('systemctl') (unit session-7.scope)... Oct 30 13:23:11.300057 systemd[1]: Reloading... Oct 30 13:23:11.442229 zram_generator::config[2328]: No configuration found. Oct 30 13:23:11.846210 systemd[1]: Reloading finished in 545 ms. Oct 30 13:23:11.927866 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 30 13:23:11.927969 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 30 13:23:11.928412 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 13:23:11.928462 systemd[1]: kubelet.service: Consumed 158ms CPU time, 98.2M memory peak. Oct 30 13:23:11.930356 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 13:23:12.116368 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 13:23:12.125489 (kubelet)[2376]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 13:23:12.286635 kubelet[2376]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 13:23:12.286635 kubelet[2376]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Oct 30 13:23:12.286635 kubelet[2376]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 13:23:12.287489 kubelet[2376]: I1030 13:23:12.286748 2376 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 13:23:12.692167 kubelet[2376]: I1030 13:23:12.691682 2376 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 30 13:23:12.692167 kubelet[2376]: I1030 13:23:12.691752 2376 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 13:23:12.694536 kubelet[2376]: I1030 13:23:12.694506 2376 server.go:954] "Client rotation is on, will bootstrap in background" Oct 30 13:23:12.729673 kubelet[2376]: E1030 13:23:12.729591 2376 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.72:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Oct 30 13:23:12.730859 kubelet[2376]: I1030 13:23:12.730825 2376 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 13:23:12.740858 kubelet[2376]: I1030 13:23:12.740810 2376 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 30 13:23:12.747744 kubelet[2376]: I1030 13:23:12.747706 2376 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 30 13:23:12.748990 kubelet[2376]: I1030 13:23:12.748930 2376 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 13:23:12.749231 kubelet[2376]: I1030 13:23:12.748975 2376 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 13:23:12.749424 kubelet[2376]: I1030 13:23:12.749246 2376 topology_manager.go:138] "Creating topology manager with none policy" 
Oct 30 13:23:12.749424 kubelet[2376]: I1030 13:23:12.749257 2376 container_manager_linux.go:304] "Creating device plugin manager" Oct 30 13:23:12.749486 kubelet[2376]: I1030 13:23:12.749451 2376 state_mem.go:36] "Initialized new in-memory state store" Oct 30 13:23:12.752269 kubelet[2376]: I1030 13:23:12.752239 2376 kubelet.go:446] "Attempting to sync node with API server" Oct 30 13:23:12.754561 kubelet[2376]: I1030 13:23:12.754513 2376 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 13:23:12.754561 kubelet[2376]: I1030 13:23:12.754568 2376 kubelet.go:352] "Adding apiserver pod source" Oct 30 13:23:12.754749 kubelet[2376]: I1030 13:23:12.754592 2376 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 13:23:12.755919 kubelet[2376]: W1030 13:23:12.755810 2376 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Oct 30 13:23:12.755919 kubelet[2376]: E1030 13:23:12.755888 2376 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Oct 30 13:23:12.756246 kubelet[2376]: W1030 13:23:12.756183 2376 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Oct 30 13:23:12.756297 kubelet[2376]: E1030 13:23:12.756260 2376 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Oct 30 13:23:12.758318 kubelet[2376]: I1030 13:23:12.757613 2376 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 30 13:23:12.758318 kubelet[2376]: I1030 13:23:12.758139 2376 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 30 13:23:12.758841 kubelet[2376]: W1030 13:23:12.758822 2376 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 30 13:23:12.761386 kubelet[2376]: I1030 13:23:12.761345 2376 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 30 13:23:12.761452 kubelet[2376]: I1030 13:23:12.761406 2376 server.go:1287] "Started kubelet" Oct 30 13:23:12.765145 kubelet[2376]: I1030 13:23:12.765026 2376 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 13:23:12.765880 kubelet[2376]: I1030 13:23:12.765861 2376 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 13:23:12.765997 kubelet[2376]: I1030 13:23:12.765926 2376 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 13:23:12.766236 kubelet[2376]: I1030 13:23:12.766200 2376 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 13:23:12.767762 kubelet[2376]: E1030 13:23:12.766254 2376 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.72:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.72:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18734796ebf529c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-30 13:23:12.761375172 +0000 UTC m=+0.529553246,LastTimestamp:2025-10-30 13:23:12.761375172 +0000 UTC m=+0.529553246,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 30 13:23:12.768387 kubelet[2376]: I1030 13:23:12.768194 2376 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 13:23:12.768387 kubelet[2376]: E1030 13:23:12.768325 2376 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 13:23:12.768458 kubelet[2376]: E1030 13:23:12.768419 2376 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="200ms" Oct 30 13:23:12.768497 kubelet[2376]: I1030 13:23:12.768473 2376 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 30 13:23:12.768608 kubelet[2376]: I1030 13:23:12.768576 2376 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 30 13:23:12.768742 kubelet[2376]: I1030 13:23:12.768715 2376 reconciler.go:26] "Reconciler: start to sync state" Oct 30 13:23:12.769063 kubelet[2376]: W1030 13:23:12.769013 2376 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Oct 30 13:23:12.769112 kubelet[2376]: E1030 13:23:12.769070 2376 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Oct 30 13:23:12.769244 kubelet[2376]: I1030 13:23:12.769218 2376 server.go:479] "Adding debug handlers to kubelet server" Oct 30 13:23:12.769340 kubelet[2376]: I1030 13:23:12.769309 2376 factory.go:221] Registration of the systemd container factory successfully Oct 30 13:23:12.769446 kubelet[2376]: I1030 13:23:12.769418 2376 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 13:23:12.770672 kubelet[2376]: E1030 13:23:12.770645 2376 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 13:23:12.770672 kubelet[2376]: I1030 13:23:12.770671 2376 factory.go:221] Registration of the containerd container factory successfully Oct 30 13:23:12.786723 kubelet[2376]: I1030 13:23:12.786643 2376 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 30 13:23:12.788057 kubelet[2376]: I1030 13:23:12.788020 2376 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 30 13:23:12.788231 kubelet[2376]: I1030 13:23:12.788075 2376 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 30 13:23:12.788505 kubelet[2376]: I1030 13:23:12.788463 2376 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 30 13:23:12.788505 kubelet[2376]: I1030 13:23:12.788484 2376 kubelet.go:2382] "Starting kubelet main sync loop" Oct 30 13:23:12.788576 kubelet[2376]: E1030 13:23:12.788541 2376 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 13:23:12.790653 kubelet[2376]: W1030 13:23:12.790590 2376 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Oct 30 13:23:12.790702 kubelet[2376]: E1030 13:23:12.790660 2376 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Oct 30 13:23:12.791389 kubelet[2376]: I1030 13:23:12.791358 2376 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 13:23:12.791389 kubelet[2376]: I1030 13:23:12.791380 2376 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 13:23:12.791464 kubelet[2376]: I1030 13:23:12.791402 2376 state_mem.go:36] "Initialized new in-memory state store" Oct 30 13:23:12.869157 kubelet[2376]: E1030 13:23:12.869063 2376 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 13:23:12.889473 kubelet[2376]: E1030 13:23:12.889398 2376 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 30 13:23:12.969359 kubelet[2376]: E1030 13:23:12.969195 2376 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 13:23:12.969359 kubelet[2376]: E1030 13:23:12.969196 2376 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="400ms" Oct 30 13:23:13.053691 kubelet[2376]: I1030 13:23:13.053630 2376 policy_none.go:49] "None policy: Start" Oct 30 13:23:13.053691 kubelet[2376]: I1030 13:23:13.053695 2376 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 30 13:23:13.053870 kubelet[2376]: I1030 13:23:13.053723 2376 state_mem.go:35] "Initializing new in-memory state store" Oct 30 13:23:13.062453 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 30 13:23:13.069659 kubelet[2376]: E1030 13:23:13.069627 2376 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 13:23:13.073513 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 30 13:23:13.089548 kubelet[2376]: E1030 13:23:13.089513 2376 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 30 13:23:13.096629 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Oct 30 13:23:13.098258 kubelet[2376]: I1030 13:23:13.098228 2376 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 30 13:23:13.098488 kubelet[2376]: I1030 13:23:13.098465 2376 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 13:23:13.098570 kubelet[2376]: I1030 13:23:13.098486 2376 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 13:23:13.098768 kubelet[2376]: I1030 13:23:13.098742 2376 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 13:23:13.099833 kubelet[2376]: E1030 13:23:13.099791 2376 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 30 13:23:13.099874 kubelet[2376]: E1030 13:23:13.099866 2376 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 30 13:23:13.200292 kubelet[2376]: I1030 13:23:13.200258 2376 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 13:23:13.200809 kubelet[2376]: E1030 13:23:13.200758 2376 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" Oct 30 13:23:13.370845 kubelet[2376]: E1030 13:23:13.370712 2376 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="800ms" Oct 30 13:23:13.403352 kubelet[2376]: I1030 13:23:13.403295 2376 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 13:23:13.403772 kubelet[2376]: E1030 13:23:13.403726 2376 kubelet_node_status.go:107] "Unable to 
register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" Oct 30 13:23:13.501109 systemd[1]: Created slice kubepods-burstable-pod06e27ad4158e8843f59660c738eff40d.slice - libcontainer container kubepods-burstable-pod06e27ad4158e8843f59660c738eff40d.slice. Oct 30 13:23:13.513197 kubelet[2376]: E1030 13:23:13.513142 2376 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 13:23:13.515694 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. Oct 30 13:23:13.524531 kubelet[2376]: E1030 13:23:13.524491 2376 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 13:23:13.527587 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. 
Oct 30 13:23:13.530392 kubelet[2376]: E1030 13:23:13.530362 2376 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 13:23:13.571895 kubelet[2376]: I1030 13:23:13.571846 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:23:13.571895 kubelet[2376]: I1030 13:23:13.571880 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:23:13.571992 kubelet[2376]: I1030 13:23:13.571919 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:23:13.571992 kubelet[2376]: I1030 13:23:13.571948 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:23:13.571992 kubelet[2376]: I1030 13:23:13.571971 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06e27ad4158e8843f59660c738eff40d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"06e27ad4158e8843f59660c738eff40d\") " pod="kube-system/kube-apiserver-localhost" Oct 30 13:23:13.572072 kubelet[2376]: I1030 13:23:13.571991 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06e27ad4158e8843f59660c738eff40d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"06e27ad4158e8843f59660c738eff40d\") " pod="kube-system/kube-apiserver-localhost" Oct 30 13:23:13.572072 kubelet[2376]: I1030 13:23:13.572010 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06e27ad4158e8843f59660c738eff40d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"06e27ad4158e8843f59660c738eff40d\") " pod="kube-system/kube-apiserver-localhost" Oct 30 13:23:13.572072 kubelet[2376]: I1030 13:23:13.572031 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:23:13.572072 kubelet[2376]: I1030 13:23:13.572049 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 30 13:23:13.663682 kubelet[2376]: W1030 13:23:13.663508 2376 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Oct 30 13:23:13.663682 kubelet[2376]: E1030 13:23:13.663617 2376 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Oct 30 13:23:13.805965 kubelet[2376]: I1030 13:23:13.805901 2376 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 13:23:13.806506 kubelet[2376]: E1030 13:23:13.806456 2376 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" Oct 30 13:23:13.813724 kubelet[2376]: E1030 13:23:13.813670 2376 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:23:13.814565 containerd[1604]: time="2025-10-30T13:23:13.814506450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:06e27ad4158e8843f59660c738eff40d,Namespace:kube-system,Attempt:0,}" Oct 30 13:23:13.825691 kubelet[2376]: E1030 13:23:13.825658 2376 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:23:13.826107 containerd[1604]: time="2025-10-30T13:23:13.826060363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Oct 30 13:23:13.831370 kubelet[2376]: E1030 13:23:13.831337 2376 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:23:13.831715 containerd[1604]: time="2025-10-30T13:23:13.831678227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Oct 30 13:23:14.105753 kubelet[2376]: W1030 13:23:14.105558 2376 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Oct 30 13:23:14.105753 kubelet[2376]: E1030 13:23:14.105654 2376 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Oct 30 13:23:14.171554 kubelet[2376]: E1030 13:23:14.171518 2376 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="1.6s" Oct 30 13:23:14.244566 kubelet[2376]: W1030 13:23:14.244465 2376 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Oct 30 13:23:14.244566 kubelet[2376]: E1030 13:23:14.244546 2376 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Oct 30 13:23:14.348613 kubelet[2376]: W1030 13:23:14.348478 2376 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Oct 30 13:23:14.348613 kubelet[2376]: E1030 13:23:14.348575 2376 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Oct 30 13:23:14.361236 containerd[1604]: time="2025-10-30T13:23:14.360316974Z" level=info msg="connecting to shim 6f6924fe3ca78c69994fe91bd25352f3bc7b00a68ff024b71ecbcf576149f000" address="unix:///run/containerd/s/28bdb1553e03f66988d93f3a86c2bb37b0ceaad1e78809d38b994213c0568c18" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:23:14.365228 containerd[1604]: time="2025-10-30T13:23:14.365171555Z" level=info msg="connecting to shim 85ffb1b924521a148530604a93d23832b6aa8fba182f218a4f29c4d93e361af6" address="unix:///run/containerd/s/936a3c02596c3a58c0ecf9586a36975718869590d141fd962423647606b21ab4" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:23:14.365379 containerd[1604]: time="2025-10-30T13:23:14.365288703Z" level=info msg="connecting to shim 86d9232c562829fd8a4923dc71cd81534175c7a7aecfe2fdea2066e957334687" address="unix:///run/containerd/s/bd7f2119c20608b2dad09f9d388971b573b36a49a77188fb5b8589b71fa0709a" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:23:14.474355 systemd[1]: Started cri-containerd-6f6924fe3ca78c69994fe91bd25352f3bc7b00a68ff024b71ecbcf576149f000.scope - libcontainer container 
6f6924fe3ca78c69994fe91bd25352f3bc7b00a68ff024b71ecbcf576149f000.
Oct 30 13:23:14.476362 systemd[1]: Started cri-containerd-86d9232c562829fd8a4923dc71cd81534175c7a7aecfe2fdea2066e957334687.scope - libcontainer container 86d9232c562829fd8a4923dc71cd81534175c7a7aecfe2fdea2066e957334687.
Oct 30 13:23:14.481939 systemd[1]: Started cri-containerd-85ffb1b924521a148530604a93d23832b6aa8fba182f218a4f29c4d93e361af6.scope - libcontainer container 85ffb1b924521a148530604a93d23832b6aa8fba182f218a4f29c4d93e361af6.
Oct 30 13:23:14.550703 containerd[1604]: time="2025-10-30T13:23:14.550628622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"86d9232c562829fd8a4923dc71cd81534175c7a7aecfe2fdea2066e957334687\""
Oct 30 13:23:14.552723 kubelet[2376]: E1030 13:23:14.552694 2376 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:23:14.554507 containerd[1604]: time="2025-10-30T13:23:14.554479821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:06e27ad4158e8843f59660c738eff40d,Namespace:kube-system,Attempt:0,} returns sandbox id \"85ffb1b924521a148530604a93d23832b6aa8fba182f218a4f29c4d93e361af6\""
Oct 30 13:23:14.555113 containerd[1604]: time="2025-10-30T13:23:14.555087392Z" level=info msg="CreateContainer within sandbox \"86d9232c562829fd8a4923dc71cd81534175c7a7aecfe2fdea2066e957334687\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 30 13:23:14.555737 kubelet[2376]: E1030 13:23:14.555713 2376 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:23:14.555838 containerd[1604]: time="2025-10-30T13:23:14.555791774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f6924fe3ca78c69994fe91bd25352f3bc7b00a68ff024b71ecbcf576149f000\""
Oct 30 13:23:14.556286 kubelet[2376]: E1030 13:23:14.556269 2376 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:23:14.558008 containerd[1604]: time="2025-10-30T13:23:14.557257977Z" level=info msg="CreateContainer within sandbox \"85ffb1b924521a148530604a93d23832b6aa8fba182f218a4f29c4d93e361af6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 30 13:23:14.558008 containerd[1604]: time="2025-10-30T13:23:14.557382019Z" level=info msg="CreateContainer within sandbox \"6f6924fe3ca78c69994fe91bd25352f3bc7b00a68ff024b71ecbcf576149f000\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 30 13:23:14.566030 containerd[1604]: time="2025-10-30T13:23:14.565975476Z" level=info msg="Container e159746c5228779ad7af43be97382f1b174a22fe41c410e454f7c4cb311ec85c: CDI devices from CRI Config.CDIDevices: []"
Oct 30 13:23:14.576364 containerd[1604]: time="2025-10-30T13:23:14.576314744Z" level=info msg="CreateContainer within sandbox \"86d9232c562829fd8a4923dc71cd81534175c7a7aecfe2fdea2066e957334687\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e159746c5228779ad7af43be97382f1b174a22fe41c410e454f7c4cb311ec85c\""
Oct 30 13:23:14.576956 containerd[1604]: time="2025-10-30T13:23:14.576924061Z" level=info msg="StartContainer for \"e159746c5228779ad7af43be97382f1b174a22fe41c410e454f7c4cb311ec85c\""
Oct 30 13:23:14.578055 containerd[1604]: time="2025-10-30T13:23:14.578030233Z" level=info msg="Container 287739eb3b9ed2bf11ac77f67153b0b0bbff326404ff092aa68247b03aaa06fe: CDI devices from CRI Config.CDIDevices: []"
Oct 30 13:23:14.579906 containerd[1604]: time="2025-10-30T13:23:14.579475659Z" level=info msg="connecting to shim e159746c5228779ad7af43be97382f1b174a22fe41c410e454f7c4cb311ec85c" address="unix:///run/containerd/s/bd7f2119c20608b2dad09f9d388971b573b36a49a77188fb5b8589b71fa0709a" protocol=ttrpc version=3
Oct 30 13:23:14.584289 containerd[1604]: time="2025-10-30T13:23:14.584251187Z" level=info msg="Container 0edf25b8d214d0c146c43d92aa4ceca97997d15905bc8ec7f0f0868f38721733: CDI devices from CRI Config.CDIDevices: []"
Oct 30 13:23:14.590208 containerd[1604]: time="2025-10-30T13:23:14.590169389Z" level=info msg="CreateContainer within sandbox \"6f6924fe3ca78c69994fe91bd25352f3bc7b00a68ff024b71ecbcf576149f000\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"287739eb3b9ed2bf11ac77f67153b0b0bbff326404ff092aa68247b03aaa06fe\""
Oct 30 13:23:14.590932 containerd[1604]: time="2025-10-30T13:23:14.590912028Z" level=info msg="StartContainer for \"287739eb3b9ed2bf11ac77f67153b0b0bbff326404ff092aa68247b03aaa06fe\""
Oct 30 13:23:14.591953 containerd[1604]: time="2025-10-30T13:23:14.591929065Z" level=info msg="connecting to shim 287739eb3b9ed2bf11ac77f67153b0b0bbff326404ff092aa68247b03aaa06fe" address="unix:///run/containerd/s/28bdb1553e03f66988d93f3a86c2bb37b0ceaad1e78809d38b994213c0568c18" protocol=ttrpc version=3
Oct 30 13:23:14.593089 containerd[1604]: time="2025-10-30T13:23:14.593049363Z" level=info msg="CreateContainer within sandbox \"85ffb1b924521a148530604a93d23832b6aa8fba182f218a4f29c4d93e361af6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0edf25b8d214d0c146c43d92aa4ceca97997d15905bc8ec7f0f0868f38721733\""
Oct 30 13:23:14.593489 containerd[1604]: time="2025-10-30T13:23:14.593422960Z" level=info msg="StartContainer for \"0edf25b8d214d0c146c43d92aa4ceca97997d15905bc8ec7f0f0868f38721733\""
Oct 30 13:23:14.595060 containerd[1604]: time="2025-10-30T13:23:14.594679861Z" level=info msg="connecting to shim 0edf25b8d214d0c146c43d92aa4ceca97997d15905bc8ec7f0f0868f38721733" address="unix:///run/containerd/s/936a3c02596c3a58c0ecf9586a36975718869590d141fd962423647606b21ab4" protocol=ttrpc version=3
Oct 30 13:23:14.604280 systemd[1]: Started cri-containerd-e159746c5228779ad7af43be97382f1b174a22fe41c410e454f7c4cb311ec85c.scope - libcontainer container e159746c5228779ad7af43be97382f1b174a22fe41c410e454f7c4cb311ec85c.
Oct 30 13:23:14.607855 kubelet[2376]: I1030 13:23:14.607817 2376 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 30 13:23:14.608237 kubelet[2376]: E1030 13:23:14.608201 2376 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost"
Oct 30 13:23:14.712441 systemd[1]: Started cri-containerd-0edf25b8d214d0c146c43d92aa4ceca97997d15905bc8ec7f0f0868f38721733.scope - libcontainer container 0edf25b8d214d0c146c43d92aa4ceca97997d15905bc8ec7f0f0868f38721733.
Oct 30 13:23:14.720287 systemd[1]: Started cri-containerd-287739eb3b9ed2bf11ac77f67153b0b0bbff326404ff092aa68247b03aaa06fe.scope - libcontainer container 287739eb3b9ed2bf11ac77f67153b0b0bbff326404ff092aa68247b03aaa06fe.
Oct 30 13:23:14.790926 containerd[1604]: time="2025-10-30T13:23:14.790846040Z" level=info msg="StartContainer for \"e159746c5228779ad7af43be97382f1b174a22fe41c410e454f7c4cb311ec85c\" returns successfully"
Oct 30 13:23:14.791810 containerd[1604]: time="2025-10-30T13:23:14.791373034Z" level=info msg="StartContainer for \"0edf25b8d214d0c146c43d92aa4ceca97997d15905bc8ec7f0f0868f38721733\" returns successfully"
Oct 30 13:23:14.834950 kubelet[2376]: E1030 13:23:14.816295 2376 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.72:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError"
Oct 30 13:23:14.834950 kubelet[2376]: E1030 13:23:14.828251 2376 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 30 13:23:14.834950 kubelet[2376]: E1030 13:23:14.828502 2376 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:23:14.859239 containerd[1604]: time="2025-10-30T13:23:14.859109051Z" level=info msg="StartContainer for \"287739eb3b9ed2bf11ac77f67153b0b0bbff326404ff092aa68247b03aaa06fe\" returns successfully"
Oct 30 13:23:14.863218 kubelet[2376]: E1030 13:23:14.863189 2376 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 30 13:23:14.863618 kubelet[2376]: E1030 13:23:14.863555 2376 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:23:15.867841 kubelet[2376]: E1030 13:23:15.867789 2376 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 30 13:23:15.868413 kubelet[2376]: E1030 13:23:15.867947 2376 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:23:15.868413 kubelet[2376]: E1030 13:23:15.867941 2376 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 30 13:23:15.868413 kubelet[2376]: E1030 13:23:15.868071 2376 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:23:15.868513 kubelet[2376]: E1030 13:23:15.868458 2376 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 30 13:23:15.868546 kubelet[2376]: E1030 13:23:15.868538 2376 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:23:16.214234 kubelet[2376]: I1030 13:23:16.211854 2376 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 30 13:23:16.321575 kubelet[2376]: E1030 13:23:16.321517 2376 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Oct 30 13:23:16.331446 kubelet[2376]: I1030 13:23:16.331389 2376 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Oct 30 13:23:16.331446 kubelet[2376]: E1030 13:23:16.331430 2376 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Oct 30 13:23:16.345531 kubelet[2376]: E1030 13:23:16.345484 2376 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 30 13:23:16.446445 kubelet[2376]: E1030 13:23:16.446382 2376 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 30 13:23:16.547205 kubelet[2376]: E1030 13:23:16.547023 2376 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 30 13:23:16.670270 kubelet[2376]: I1030 13:23:16.670181 2376 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 30 13:23:16.677136 kubelet[2376]: E1030 13:23:16.677075 2376 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Oct 30 13:23:16.677136 kubelet[2376]: I1030 13:23:16.677112 2376 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 30 13:23:16.678893 kubelet[2376]: E1030 13:23:16.678858 2376 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Oct 30 13:23:16.678893 kubelet[2376]: I1030 13:23:16.678877 2376 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 30 13:23:16.680806 kubelet[2376]: E1030 13:23:16.680763 2376 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Oct 30 13:23:16.757681 kubelet[2376]: I1030 13:23:16.757632 2376 apiserver.go:52] "Watching apiserver"
Oct 30 13:23:16.769334 kubelet[2376]: I1030 13:23:16.769286 2376 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Oct 30 13:23:16.867767 kubelet[2376]: I1030 13:23:16.867671 2376 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 30 13:23:16.867856 kubelet[2376]: I1030 13:23:16.867821 2376 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 30 13:23:16.869666 kubelet[2376]: E1030 13:23:16.869459 2376 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Oct 30 13:23:16.869666 kubelet[2376]: E1030 13:23:16.869607 2376 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:23:16.870016 kubelet[2376]: E1030 13:23:16.869774 2376 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Oct 30 13:23:16.870016 kubelet[2376]: E1030 13:23:16.869883 2376 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:23:18.568659 systemd[1]: Reload requested from client PID 2657 ('systemctl') (unit session-7.scope)...
Oct 30 13:23:18.568677 systemd[1]: Reloading...
Oct 30 13:23:18.649168 zram_generator::config[2701]: No configuration found.
Oct 30 13:23:18.885104 systemd[1]: Reloading finished in 316 ms.
Oct 30 13:23:18.920183 kubelet[2376]: I1030 13:23:18.920099 2376 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 30 13:23:18.922336 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 30 13:23:18.940465 systemd[1]: kubelet.service: Deactivated successfully.
Oct 30 13:23:18.940857 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 13:23:18.940912 systemd[1]: kubelet.service: Consumed 1.284s CPU time, 132M memory peak.
Oct 30 13:23:18.942952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 30 13:23:19.145194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 13:23:19.158532 (kubelet)[2746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 30 13:23:19.208024 kubelet[2746]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 30 13:23:19.208024 kubelet[2746]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 30 13:23:19.208024 kubelet[2746]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 30 13:23:19.208496 kubelet[2746]: I1030 13:23:19.208068 2746 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 30 13:23:19.214919 kubelet[2746]: I1030 13:23:19.214870 2746 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Oct 30 13:23:19.214919 kubelet[2746]: I1030 13:23:19.214899 2746 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 30 13:23:19.215207 kubelet[2746]: I1030 13:23:19.215183 2746 server.go:954] "Client rotation is on, will bootstrap in background"
Oct 30 13:23:19.216492 kubelet[2746]: I1030 13:23:19.216470 2746 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 30 13:23:19.218653 kubelet[2746]: I1030 13:23:19.218627 2746 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 30 13:23:19.222315 kubelet[2746]: I1030 13:23:19.222293 2746 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 30 13:23:19.227489 kubelet[2746]: I1030 13:23:19.227465 2746 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 30 13:23:19.227727 kubelet[2746]: I1030 13:23:19.227685 2746 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 30 13:23:19.227991 kubelet[2746]: I1030 13:23:19.227727 2746 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 30 13:23:19.228214 kubelet[2746]: I1030 13:23:19.228028 2746 topology_manager.go:138] "Creating topology manager with none policy"
Oct 30 13:23:19.228214 kubelet[2746]: I1030 13:23:19.228051 2746 container_manager_linux.go:304] "Creating device plugin manager"
Oct 30 13:23:19.228214 kubelet[2746]: I1030 13:23:19.228132 2746 state_mem.go:36] "Initialized new in-memory state store"
Oct 30 13:23:19.228321 kubelet[2746]: I1030 13:23:19.228303 2746 kubelet.go:446] "Attempting to sync node with API server"
Oct 30 13:23:19.228353 kubelet[2746]: I1030 13:23:19.228332 2746 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 30 13:23:19.228720 kubelet[2746]: I1030 13:23:19.228692 2746 kubelet.go:352] "Adding apiserver pod source"
Oct 30 13:23:19.228746 kubelet[2746]: I1030 13:23:19.228722 2746 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 30 13:23:19.230112 kubelet[2746]: I1030 13:23:19.230076 2746 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 30 13:23:19.230477 kubelet[2746]: I1030 13:23:19.230460 2746 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 30 13:23:19.230972 kubelet[2746]: I1030 13:23:19.230936 2746 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Oct 30 13:23:19.230972 kubelet[2746]: I1030 13:23:19.230973 2746 server.go:1287] "Started kubelet"
Oct 30 13:23:19.231912 kubelet[2746]: I1030 13:23:19.231164 2746 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Oct 30 13:23:19.231912 kubelet[2746]: I1030 13:23:19.231329 2746 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 30 13:23:19.231912 kubelet[2746]: I1030 13:23:19.231661 2746 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 30 13:23:19.232520 kubelet[2746]: I1030 13:23:19.232485 2746 server.go:479] "Adding debug handlers to kubelet server"
Oct 30 13:23:19.233130 kubelet[2746]: I1030 13:23:19.233099 2746 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 30 13:23:19.233184 kubelet[2746]: I1030 13:23:19.233145 2746 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 30 13:23:19.234272 kubelet[2746]: I1030 13:23:19.234150 2746 volume_manager.go:297] "Starting Kubelet Volume Manager"
Oct 30 13:23:19.235892 kubelet[2746]: I1030 13:23:19.235859 2746 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Oct 30 13:23:19.236025 kubelet[2746]: I1030 13:23:19.236009 2746 reconciler.go:26] "Reconciler: start to sync state"
Oct 30 13:23:19.241048 kubelet[2746]: E1030 13:23:19.241017 2746 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 30 13:23:19.242451 kubelet[2746]: E1030 13:23:19.242428 2746 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 30 13:23:19.244296 kubelet[2746]: I1030 13:23:19.244270 2746 factory.go:221] Registration of the systemd container factory successfully
Oct 30 13:23:19.244409 kubelet[2746]: I1030 13:23:19.244387 2746 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 30 13:23:19.247203 kubelet[2746]: I1030 13:23:19.247168 2746 factory.go:221] Registration of the containerd container factory successfully
Oct 30 13:23:19.253606 kubelet[2746]: I1030 13:23:19.252894 2746 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 30 13:23:19.254270 kubelet[2746]: I1030 13:23:19.254246 2746 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 30 13:23:19.254313 kubelet[2746]: I1030 13:23:19.254278 2746 status_manager.go:227] "Starting to sync pod status with apiserver"
Oct 30 13:23:19.254313 kubelet[2746]: I1030 13:23:19.254300 2746 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 30 13:23:19.254313 kubelet[2746]: I1030 13:23:19.254307 2746 kubelet.go:2382] "Starting kubelet main sync loop"
Oct 30 13:23:19.254747 kubelet[2746]: E1030 13:23:19.254713 2746 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 30 13:23:19.285717 kubelet[2746]: I1030 13:23:19.285674 2746 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 30 13:23:19.285717 kubelet[2746]: I1030 13:23:19.285693 2746 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 30 13:23:19.285717 kubelet[2746]: I1030 13:23:19.285714 2746 state_mem.go:36] "Initialized new in-memory state store"
Oct 30 13:23:19.285904 kubelet[2746]: I1030 13:23:19.285862 2746 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 30 13:23:19.285904 kubelet[2746]: I1030 13:23:19.285872 2746 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 30 13:23:19.285904 kubelet[2746]: I1030 13:23:19.285894 2746 policy_none.go:49] "None policy: Start"
Oct 30 13:23:19.285904 kubelet[2746]: I1030 13:23:19.285903 2746 memory_manager.go:186] "Starting memorymanager" policy="None"
Oct 30 13:23:19.285983 kubelet[2746]: I1030 13:23:19.285913 2746 state_mem.go:35] "Initializing new in-memory state store"
Oct 30 13:23:19.286029 kubelet[2746]: I1030 13:23:19.286007 2746 state_mem.go:75] "Updated machine memory state"
Oct 30 13:23:19.290355 kubelet[2746]: I1030 13:23:19.290281 2746 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 30 13:23:19.290606 kubelet[2746]: I1030 13:23:19.290591 2746 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 30 13:23:19.290656 kubelet[2746]: I1030 13:23:19.290605 2746 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 30 13:23:19.290809 kubelet[2746]: I1030 13:23:19.290788 2746 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 30 13:23:19.292749 kubelet[2746]: E1030 13:23:19.292725 2746 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 30 13:23:19.355741 kubelet[2746]: I1030 13:23:19.355677 2746 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 30 13:23:19.355878 kubelet[2746]: I1030 13:23:19.355817 2746 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 30 13:23:19.355878 kubelet[2746]: I1030 13:23:19.355835 2746 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 30 13:23:19.395524 kubelet[2746]: I1030 13:23:19.395420 2746 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 30 13:23:19.401093 kubelet[2746]: I1030 13:23:19.401052 2746 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Oct 30 13:23:19.401315 kubelet[2746]: I1030 13:23:19.401163 2746 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Oct 30 13:23:19.538084 kubelet[2746]: I1030 13:23:19.538022 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06e27ad4158e8843f59660c738eff40d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"06e27ad4158e8843f59660c738eff40d\") " pod="kube-system/kube-apiserver-localhost"
Oct 30 13:23:19.538084 kubelet[2746]: I1030 13:23:19.538057 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06e27ad4158e8843f59660c738eff40d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"06e27ad4158e8843f59660c738eff40d\") " pod="kube-system/kube-apiserver-localhost"
Oct 30 13:23:19.538084 kubelet[2746]: I1030 13:23:19.538080 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 13:23:19.538084 kubelet[2746]: I1030 13:23:19.538094 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 13:23:19.538423 kubelet[2746]: I1030 13:23:19.538111 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 13:23:19.538423 kubelet[2746]: I1030 13:23:19.538148 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 13:23:19.538423 kubelet[2746]: I1030 13:23:19.538166 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06e27ad4158e8843f59660c738eff40d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"06e27ad4158e8843f59660c738eff40d\") " pod="kube-system/kube-apiserver-localhost"
Oct 30 13:23:19.538423 kubelet[2746]: I1030 13:23:19.538181 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 13:23:19.538423 kubelet[2746]: I1030 13:23:19.538196 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost"
Oct 30 13:23:19.661178 kubelet[2746]: E1030 13:23:19.660800 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:23:19.661178 kubelet[2746]: E1030 13:23:19.661027 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:23:19.661719 kubelet[2746]: E1030 13:23:19.661675 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:23:20.229970 kubelet[2746]: I1030 13:23:20.229905 2746 apiserver.go:52] "Watching apiserver"
Oct 30 13:23:20.236315 kubelet[2746]: I1030 13:23:20.236284 2746 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Oct 30 13:23:20.261348 kubelet[2746]: I1030 13:23:20.261287 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.261248544 podStartE2EDuration="1.261248544s" podCreationTimestamp="2025-10-30 13:23:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:23:20.260774964 +0000 UTC m=+1.095219219" watchObservedRunningTime="2025-10-30 13:23:20.261248544 +0000 UTC m=+1.095692799"
Oct 30 13:23:20.266798 kubelet[2746]: I1030 13:23:20.266733 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.266712396 podStartE2EDuration="1.266712396s" podCreationTimestamp="2025-10-30 13:23:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:23:20.266711102 +0000 UTC m=+1.101155357" watchObservedRunningTime="2025-10-30 13:23:20.266712396 +0000 UTC m=+1.101156652"
Oct 30 13:23:20.272800 kubelet[2746]: I1030 13:23:20.272753 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.272740928 podStartE2EDuration="1.272740928s" podCreationTimestamp="2025-10-30 13:23:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:23:20.272572499 +0000 UTC m=+1.107016754" watchObservedRunningTime="2025-10-30 13:23:20.272740928 +0000 UTC m=+1.107185183"
Oct 30 13:23:20.274405 kubelet[2746]: E1030 13:23:20.274375 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:23:20.274530 kubelet[2746]: I1030 13:23:20.274516 2746 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 30 13:23:20.274830 kubelet[2746]: E1030 13:23:20.274814 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:23:20.283936 kubelet[2746]: E1030 13:23:20.283409 2746 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Oct 30 13:23:20.283936 kubelet[2746]: E1030 13:23:20.283558 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:23:21.276162 kubelet[2746]: E1030 13:23:21.276015 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:23:21.276162 kubelet[2746]: E1030 13:23:21.276096 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:23:23.400587 kubelet[2746]: I1030 13:23:23.400546 2746 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 30 13:23:23.401058 containerd[1604]: time="2025-10-30T13:23:23.401006227Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 30 13:23:23.401367 kubelet[2746]: I1030 13:23:23.401201 2746 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 30 13:23:23.739538 kubelet[2746]: E1030 13:23:23.739401 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:23:24.066809 systemd[1]: Created slice kubepods-besteffort-pod6751ac83_076a_4c00_bf66_efa4b96c0742.slice - libcontainer container kubepods-besteffort-pod6751ac83_076a_4c00_bf66_efa4b96c0742.slice. Oct 30 13:23:24.069297 kubelet[2746]: I1030 13:23:24.068737 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6751ac83-076a-4c00-bf66-efa4b96c0742-kube-proxy\") pod \"kube-proxy-spt4q\" (UID: \"6751ac83-076a-4c00-bf66-efa4b96c0742\") " pod="kube-system/kube-proxy-spt4q" Oct 30 13:23:24.069297 kubelet[2746]: I1030 13:23:24.068765 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6751ac83-076a-4c00-bf66-efa4b96c0742-xtables-lock\") pod \"kube-proxy-spt4q\" (UID: \"6751ac83-076a-4c00-bf66-efa4b96c0742\") " pod="kube-system/kube-proxy-spt4q" Oct 30 13:23:24.069297 kubelet[2746]: I1030 13:23:24.068796 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm2xq\" (UniqueName: \"kubernetes.io/projected/6751ac83-076a-4c00-bf66-efa4b96c0742-kube-api-access-hm2xq\") pod \"kube-proxy-spt4q\" (UID: \"6751ac83-076a-4c00-bf66-efa4b96c0742\") " pod="kube-system/kube-proxy-spt4q" Oct 30 13:23:24.069297 kubelet[2746]: I1030 13:23:24.068815 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/6751ac83-076a-4c00-bf66-efa4b96c0742-lib-modules\") pod \"kube-proxy-spt4q\" (UID: \"6751ac83-076a-4c00-bf66-efa4b96c0742\") " pod="kube-system/kube-proxy-spt4q" Oct 30 13:23:24.176033 kubelet[2746]: E1030 13:23:24.175973 2746 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 30 13:23:24.176033 kubelet[2746]: E1030 13:23:24.176017 2746 projected.go:194] Error preparing data for projected volume kube-api-access-hm2xq for pod kube-system/kube-proxy-spt4q: configmap "kube-root-ca.crt" not found Oct 30 13:23:24.176260 kubelet[2746]: E1030 13:23:24.176089 2746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6751ac83-076a-4c00-bf66-efa4b96c0742-kube-api-access-hm2xq podName:6751ac83-076a-4c00-bf66-efa4b96c0742 nodeName:}" failed. No retries permitted until 2025-10-30 13:23:24.676059892 +0000 UTC m=+5.510504147 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hm2xq" (UniqueName: "kubernetes.io/projected/6751ac83-076a-4c00-bf66-efa4b96c0742-kube-api-access-hm2xq") pod "kube-proxy-spt4q" (UID: "6751ac83-076a-4c00-bf66-efa4b96c0742") : configmap "kube-root-ca.crt" not found Oct 30 13:23:24.528595 systemd[1]: Created slice kubepods-besteffort-pod4b9799a2_efe6_4ca0_b4e0_0e359e7f9675.slice - libcontainer container kubepods-besteffort-pod4b9799a2_efe6_4ca0_b4e0_0e359e7f9675.slice. 
Oct 30 13:23:24.572656 kubelet[2746]: I1030 13:23:24.572591 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82647\" (UniqueName: \"kubernetes.io/projected/4b9799a2-efe6-4ca0-b4e0-0e359e7f9675-kube-api-access-82647\") pod \"tigera-operator-7dcd859c48-xz2xz\" (UID: \"4b9799a2-efe6-4ca0-b4e0-0e359e7f9675\") " pod="tigera-operator/tigera-operator-7dcd859c48-xz2xz" Oct 30 13:23:24.573147 kubelet[2746]: I1030 13:23:24.572693 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4b9799a2-efe6-4ca0-b4e0-0e359e7f9675-var-lib-calico\") pod \"tigera-operator-7dcd859c48-xz2xz\" (UID: \"4b9799a2-efe6-4ca0-b4e0-0e359e7f9675\") " pod="tigera-operator/tigera-operator-7dcd859c48-xz2xz" Oct 30 13:23:24.833181 containerd[1604]: time="2025-10-30T13:23:24.833034263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-xz2xz,Uid:4b9799a2-efe6-4ca0-b4e0-0e359e7f9675,Namespace:tigera-operator,Attempt:0,}" Oct 30 13:23:24.977138 kubelet[2746]: E1030 13:23:24.977073 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:23:24.977724 containerd[1604]: time="2025-10-30T13:23:24.977690876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-spt4q,Uid:6751ac83-076a-4c00-bf66-efa4b96c0742,Namespace:kube-system,Attempt:0,}" Oct 30 13:23:25.017151 containerd[1604]: time="2025-10-30T13:23:25.016935879Z" level=info msg="connecting to shim faff50816d82d1f411ad698f5f774600bf5f4526acbe118c4746668ee92cf920" address="unix:///run/containerd/s/186fe5bb01cade9a9814e9430aa6ba0f0405cbae925b246e4130032958be8c8a" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:23:25.018829 containerd[1604]: time="2025-10-30T13:23:25.018782546Z" level=info msg="connecting 
to shim e20eb37a3cff1ec0661e895c54d0d5195889eb0ca1c66be2ceb4bd0410865f4f" address="unix:///run/containerd/s/e8126d9aed8366386f61fbf29f195e583d93607033ea3fa72fcdd91899c782e8" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:23:25.044289 systemd[1]: Started cri-containerd-e20eb37a3cff1ec0661e895c54d0d5195889eb0ca1c66be2ceb4bd0410865f4f.scope - libcontainer container e20eb37a3cff1ec0661e895c54d0d5195889eb0ca1c66be2ceb4bd0410865f4f. Oct 30 13:23:25.052180 systemd[1]: Started cri-containerd-faff50816d82d1f411ad698f5f774600bf5f4526acbe118c4746668ee92cf920.scope - libcontainer container faff50816d82d1f411ad698f5f774600bf5f4526acbe118c4746668ee92cf920. Oct 30 13:23:25.112213 containerd[1604]: time="2025-10-30T13:23:25.112069234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-spt4q,Uid:6751ac83-076a-4c00-bf66-efa4b96c0742,Namespace:kube-system,Attempt:0,} returns sandbox id \"faff50816d82d1f411ad698f5f774600bf5f4526acbe118c4746668ee92cf920\"" Oct 30 13:23:25.113097 kubelet[2746]: E1030 13:23:25.113061 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:23:25.115337 containerd[1604]: time="2025-10-30T13:23:25.115275511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-xz2xz,Uid:4b9799a2-efe6-4ca0-b4e0-0e359e7f9675,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e20eb37a3cff1ec0661e895c54d0d5195889eb0ca1c66be2ceb4bd0410865f4f\"" Oct 30 13:23:25.116233 containerd[1604]: time="2025-10-30T13:23:25.116188558Z" level=info msg="CreateContainer within sandbox \"faff50816d82d1f411ad698f5f774600bf5f4526acbe118c4746668ee92cf920\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 30 13:23:25.117062 containerd[1604]: time="2025-10-30T13:23:25.116983460Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 30 13:23:25.128845 containerd[1604]: 
time="2025-10-30T13:23:25.128795843Z" level=info msg="Container f919ce745df5125923708a277513184f0490b4088a2dc5c0589e609737a60262: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:23:25.138184 containerd[1604]: time="2025-10-30T13:23:25.138146341Z" level=info msg="CreateContainer within sandbox \"faff50816d82d1f411ad698f5f774600bf5f4526acbe118c4746668ee92cf920\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f919ce745df5125923708a277513184f0490b4088a2dc5c0589e609737a60262\"" Oct 30 13:23:25.138811 containerd[1604]: time="2025-10-30T13:23:25.138764378Z" level=info msg="StartContainer for \"f919ce745df5125923708a277513184f0490b4088a2dc5c0589e609737a60262\"" Oct 30 13:23:25.140406 containerd[1604]: time="2025-10-30T13:23:25.140373181Z" level=info msg="connecting to shim f919ce745df5125923708a277513184f0490b4088a2dc5c0589e609737a60262" address="unix:///run/containerd/s/186fe5bb01cade9a9814e9430aa6ba0f0405cbae925b246e4130032958be8c8a" protocol=ttrpc version=3 Oct 30 13:23:25.163257 systemd[1]: Started cri-containerd-f919ce745df5125923708a277513184f0490b4088a2dc5c0589e609737a60262.scope - libcontainer container f919ce745df5125923708a277513184f0490b4088a2dc5c0589e609737a60262. Oct 30 13:23:25.209062 containerd[1604]: time="2025-10-30T13:23:25.209007669Z" level=info msg="StartContainer for \"f919ce745df5125923708a277513184f0490b4088a2dc5c0589e609737a60262\" returns successfully" Oct 30 13:23:25.285150 kubelet[2746]: E1030 13:23:25.285076 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:23:26.630039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3700612662.mount: Deactivated successfully. 
Oct 30 13:23:26.966508 containerd[1604]: time="2025-10-30T13:23:26.966380069Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:26.967347 containerd[1604]: time="2025-10-30T13:23:26.967307703Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 30 13:23:26.968719 containerd[1604]: time="2025-10-30T13:23:26.968685799Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:26.970609 containerd[1604]: time="2025-10-30T13:23:26.970574212Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:26.973137 containerd[1604]: time="2025-10-30T13:23:26.972832572Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.855799745s" Oct 30 13:23:26.973137 containerd[1604]: time="2025-10-30T13:23:26.972867671Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 30 13:23:26.975969 containerd[1604]: time="2025-10-30T13:23:26.975935585Z" level=info msg="CreateContainer within sandbox \"e20eb37a3cff1ec0661e895c54d0d5195889eb0ca1c66be2ceb4bd0410865f4f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 30 13:23:26.982804 containerd[1604]: time="2025-10-30T13:23:26.982757615Z" level=info msg="Container 
9f3246a384090d4398ea6c228b914306dec530e09cf6d862a4af12c11ef27178: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:23:26.987084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2307379487.mount: Deactivated successfully. Oct 30 13:23:26.991650 containerd[1604]: time="2025-10-30T13:23:26.991616915Z" level=info msg="CreateContainer within sandbox \"e20eb37a3cff1ec0661e895c54d0d5195889eb0ca1c66be2ceb4bd0410865f4f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9f3246a384090d4398ea6c228b914306dec530e09cf6d862a4af12c11ef27178\"" Oct 30 13:23:26.993111 containerd[1604]: time="2025-10-30T13:23:26.992104552Z" level=info msg="StartContainer for \"9f3246a384090d4398ea6c228b914306dec530e09cf6d862a4af12c11ef27178\"" Oct 30 13:23:26.993111 containerd[1604]: time="2025-10-30T13:23:26.992831346Z" level=info msg="connecting to shim 9f3246a384090d4398ea6c228b914306dec530e09cf6d862a4af12c11ef27178" address="unix:///run/containerd/s/e8126d9aed8366386f61fbf29f195e583d93607033ea3fa72fcdd91899c782e8" protocol=ttrpc version=3 Oct 30 13:23:27.015541 systemd[1]: Started cri-containerd-9f3246a384090d4398ea6c228b914306dec530e09cf6d862a4af12c11ef27178.scope - libcontainer container 9f3246a384090d4398ea6c228b914306dec530e09cf6d862a4af12c11ef27178. 
Oct 30 13:23:27.049272 containerd[1604]: time="2025-10-30T13:23:27.049214041Z" level=info msg="StartContainer for \"9f3246a384090d4398ea6c228b914306dec530e09cf6d862a4af12c11ef27178\" returns successfully" Oct 30 13:23:27.300475 kubelet[2746]: I1030 13:23:27.300314 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-spt4q" podStartSLOduration=3.300293323 podStartE2EDuration="3.300293323s" podCreationTimestamp="2025-10-30 13:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:23:25.29632598 +0000 UTC m=+6.130770235" watchObservedRunningTime="2025-10-30 13:23:27.300293323 +0000 UTC m=+8.134737578" Oct 30 13:23:30.100967 kubelet[2746]: E1030 13:23:30.100921 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:23:30.110058 kubelet[2746]: I1030 13:23:30.109980 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-xz2xz" podStartSLOduration=4.2516574 podStartE2EDuration="6.109958282s" podCreationTimestamp="2025-10-30 13:23:24 +0000 UTC" firstStartedPulling="2025-10-30 13:23:25.116312037 +0000 UTC m=+5.950756292" lastFinishedPulling="2025-10-30 13:23:26.974612929 +0000 UTC m=+7.809057174" observedRunningTime="2025-10-30 13:23:27.300506003 +0000 UTC m=+8.134950248" watchObservedRunningTime="2025-10-30 13:23:30.109958282 +0000 UTC m=+10.944402557" Oct 30 13:23:30.270342 kubelet[2746]: E1030 13:23:30.270213 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:23:30.300080 kubelet[2746]: E1030 13:23:30.300047 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:23:32.681251 sudo[1814]: pam_unix(sudo:session): session closed for user root Oct 30 13:23:32.685152 sshd[1813]: Connection closed by 10.0.0.1 port 52524 Oct 30 13:23:32.684464 sshd-session[1810]: pam_unix(sshd:session): session closed for user core Oct 30 13:23:32.698537 systemd-logind[1582]: Session 7 logged out. Waiting for processes to exit. Oct 30 13:23:32.702933 systemd[1]: sshd@6-10.0.0.72:22-10.0.0.1:52524.service: Deactivated successfully. Oct 30 13:23:32.706708 systemd[1]: session-7.scope: Deactivated successfully. Oct 30 13:23:32.707586 systemd[1]: session-7.scope: Consumed 4.589s CPU time, 219.9M memory peak. Oct 30 13:23:32.710243 systemd-logind[1582]: Removed session 7. Oct 30 13:23:33.348684 update_engine[1587]: I20251030 13:23:33.348555 1587 update_attempter.cc:509] Updating boot flags... Oct 30 13:23:33.746460 kubelet[2746]: E1030 13:23:33.744589 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:23:36.802389 systemd[1]: Created slice kubepods-besteffort-pod9f0140d9_69dc_4760_adbc_20fe6618be14.slice - libcontainer container kubepods-besteffort-pod9f0140d9_69dc_4760_adbc_20fe6618be14.slice. 
Oct 30 13:23:36.851482 kubelet[2746]: I1030 13:23:36.851405 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f0140d9-69dc-4760-adbc-20fe6618be14-tigera-ca-bundle\") pod \"calico-typha-764465896d-mbv4z\" (UID: \"9f0140d9-69dc-4760-adbc-20fe6618be14\") " pod="calico-system/calico-typha-764465896d-mbv4z" Oct 30 13:23:36.851482 kubelet[2746]: I1030 13:23:36.851480 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8xh8\" (UniqueName: \"kubernetes.io/projected/9f0140d9-69dc-4760-adbc-20fe6618be14-kube-api-access-g8xh8\") pod \"calico-typha-764465896d-mbv4z\" (UID: \"9f0140d9-69dc-4760-adbc-20fe6618be14\") " pod="calico-system/calico-typha-764465896d-mbv4z" Oct 30 13:23:36.852035 kubelet[2746]: I1030 13:23:36.851501 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9f0140d9-69dc-4760-adbc-20fe6618be14-typha-certs\") pod \"calico-typha-764465896d-mbv4z\" (UID: \"9f0140d9-69dc-4760-adbc-20fe6618be14\") " pod="calico-system/calico-typha-764465896d-mbv4z" Oct 30 13:23:36.995315 systemd[1]: Created slice kubepods-besteffort-podf91d18f9_55b8_49c4_9b50_8e2f57f2e5cd.slice - libcontainer container kubepods-besteffort-podf91d18f9_55b8_49c4_9b50_8e2f57f2e5cd.slice. 
Oct 30 13:23:37.053103 kubelet[2746]: I1030 13:23:37.052963 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd-flexvol-driver-host\") pod \"calico-node-lb8m4\" (UID: \"f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd\") " pod="calico-system/calico-node-lb8m4" Oct 30 13:23:37.053103 kubelet[2746]: I1030 13:23:37.053004 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd-xtables-lock\") pod \"calico-node-lb8m4\" (UID: \"f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd\") " pod="calico-system/calico-node-lb8m4" Oct 30 13:23:37.053103 kubelet[2746]: I1030 13:23:37.053022 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd-policysync\") pod \"calico-node-lb8m4\" (UID: \"f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd\") " pod="calico-system/calico-node-lb8m4" Oct 30 13:23:37.053103 kubelet[2746]: I1030 13:23:37.053036 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd-tigera-ca-bundle\") pod \"calico-node-lb8m4\" (UID: \"f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd\") " pod="calico-system/calico-node-lb8m4" Oct 30 13:23:37.053103 kubelet[2746]: I1030 13:23:37.053058 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd-var-lib-calico\") pod \"calico-node-lb8m4\" (UID: \"f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd\") " pod="calico-system/calico-node-lb8m4" Oct 30 13:23:37.053393 kubelet[2746]: I1030 13:23:37.053073 
2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfbxh\" (UniqueName: \"kubernetes.io/projected/f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd-kube-api-access-bfbxh\") pod \"calico-node-lb8m4\" (UID: \"f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd\") " pod="calico-system/calico-node-lb8m4" Oct 30 13:23:37.053393 kubelet[2746]: I1030 13:23:37.053088 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd-cni-bin-dir\") pod \"calico-node-lb8m4\" (UID: \"f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd\") " pod="calico-system/calico-node-lb8m4" Oct 30 13:23:37.053393 kubelet[2746]: I1030 13:23:37.053103 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd-node-certs\") pod \"calico-node-lb8m4\" (UID: \"f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd\") " pod="calico-system/calico-node-lb8m4" Oct 30 13:23:37.053393 kubelet[2746]: I1030 13:23:37.053149 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd-cni-log-dir\") pod \"calico-node-lb8m4\" (UID: \"f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd\") " pod="calico-system/calico-node-lb8m4" Oct 30 13:23:37.053393 kubelet[2746]: I1030 13:23:37.053165 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd-cni-net-dir\") pod \"calico-node-lb8m4\" (UID: \"f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd\") " pod="calico-system/calico-node-lb8m4" Oct 30 13:23:37.053532 kubelet[2746]: I1030 13:23:37.053240 2746 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd-lib-modules\") pod \"calico-node-lb8m4\" (UID: \"f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd\") " pod="calico-system/calico-node-lb8m4" Oct 30 13:23:37.053532 kubelet[2746]: I1030 13:23:37.053303 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd-var-run-calico\") pod \"calico-node-lb8m4\" (UID: \"f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd\") " pod="calico-system/calico-node-lb8m4" Oct 30 13:23:37.112733 kubelet[2746]: E1030 13:23:37.112691 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:23:37.113361 containerd[1604]: time="2025-10-30T13:23:37.113306652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-764465896d-mbv4z,Uid:9f0140d9-69dc-4760-adbc-20fe6618be14,Namespace:calico-system,Attempt:0,}" Oct 30 13:23:37.192368 kubelet[2746]: E1030 13:23:37.192316 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.192368 kubelet[2746]: W1030 13:23:37.192352 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.192546 kubelet[2746]: E1030 13:23:37.192408 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.212377 kubelet[2746]: E1030 13:23:37.212233 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2t6tn" podUID="174a8f7f-c864-44be-b45c-d548b2df28c8" Oct 30 13:23:37.234820 containerd[1604]: time="2025-10-30T13:23:37.234732446Z" level=info msg="connecting to shim c97d20de0118c0c1a379a2882d54dd4f431455c19dbc7386d57057b1a7ecbfc8" address="unix:///run/containerd/s/7dc3440bc500f2455d92a9d41ca39298cbaa32455a02c764ac001fca5b74482b" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:23:37.244141 kubelet[2746]: E1030 13:23:37.243978 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.244141 kubelet[2746]: W1030 13:23:37.244055 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.244141 kubelet[2746]: E1030 13:23:37.244081 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.244541 kubelet[2746]: E1030 13:23:37.244512 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.244541 kubelet[2746]: W1030 13:23:37.244526 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.244541 kubelet[2746]: E1030 13:23:37.244536 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.244793 kubelet[2746]: E1030 13:23:37.244769 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.244793 kubelet[2746]: W1030 13:23:37.244783 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.244793 kubelet[2746]: E1030 13:23:37.244792 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.245173 kubelet[2746]: E1030 13:23:37.245153 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.245173 kubelet[2746]: W1030 13:23:37.245166 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.245228 kubelet[2746]: E1030 13:23:37.245178 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.245446 kubelet[2746]: E1030 13:23:37.245420 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.245446 kubelet[2746]: W1030 13:23:37.245433 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.245446 kubelet[2746]: E1030 13:23:37.245442 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.245630 kubelet[2746]: E1030 13:23:37.245614 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.245630 kubelet[2746]: W1030 13:23:37.245626 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.245703 kubelet[2746]: E1030 13:23:37.245634 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.245848 kubelet[2746]: E1030 13:23:37.245831 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.245848 kubelet[2746]: W1030 13:23:37.245843 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.245911 kubelet[2746]: E1030 13:23:37.245852 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.246110 kubelet[2746]: E1030 13:23:37.246093 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.246110 kubelet[2746]: W1030 13:23:37.246106 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.246191 kubelet[2746]: E1030 13:23:37.246130 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.246358 kubelet[2746]: E1030 13:23:37.246341 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.246358 kubelet[2746]: W1030 13:23:37.246353 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.246405 kubelet[2746]: E1030 13:23:37.246362 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.246546 kubelet[2746]: E1030 13:23:37.246529 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.246546 kubelet[2746]: W1030 13:23:37.246541 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.246605 kubelet[2746]: E1030 13:23:37.246550 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.246758 kubelet[2746]: E1030 13:23:37.246740 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.246758 kubelet[2746]: W1030 13:23:37.246752 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.246823 kubelet[2746]: E1030 13:23:37.246763 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.246957 kubelet[2746]: E1030 13:23:37.246940 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.246957 kubelet[2746]: W1030 13:23:37.246952 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.247020 kubelet[2746]: E1030 13:23:37.246961 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.247192 kubelet[2746]: E1030 13:23:37.247174 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.247192 kubelet[2746]: W1030 13:23:37.247187 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.247192 kubelet[2746]: E1030 13:23:37.247197 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.247391 kubelet[2746]: E1030 13:23:37.247374 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.247493 kubelet[2746]: W1030 13:23:37.247459 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.247493 kubelet[2746]: E1030 13:23:37.247473 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.247820 kubelet[2746]: E1030 13:23:37.247792 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.247820 kubelet[2746]: W1030 13:23:37.247805 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.247820 kubelet[2746]: E1030 13:23:37.247815 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.248171 kubelet[2746]: E1030 13:23:37.248152 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.248171 kubelet[2746]: W1030 13:23:37.248166 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.248238 kubelet[2746]: E1030 13:23:37.248177 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.248526 kubelet[2746]: E1030 13:23:37.248509 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.248526 kubelet[2746]: W1030 13:23:37.248521 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.248588 kubelet[2746]: E1030 13:23:37.248531 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.248905 kubelet[2746]: E1030 13:23:37.248839 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.248942 kubelet[2746]: W1030 13:23:37.248919 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.248942 kubelet[2746]: E1030 13:23:37.248931 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.249349 kubelet[2746]: E1030 13:23:37.249322 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.249349 kubelet[2746]: W1030 13:23:37.249339 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.249349 kubelet[2746]: E1030 13:23:37.249350 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.249750 kubelet[2746]: E1030 13:23:37.249725 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.249750 kubelet[2746]: W1030 13:23:37.249739 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.249830 kubelet[2746]: E1030 13:23:37.249749 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.255589 kubelet[2746]: E1030 13:23:37.255566 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.255825 kubelet[2746]: W1030 13:23:37.255659 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.255825 kubelet[2746]: E1030 13:23:37.255696 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.255825 kubelet[2746]: I1030 13:23:37.255728 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/174a8f7f-c864-44be-b45c-d548b2df28c8-varrun\") pod \"csi-node-driver-2t6tn\" (UID: \"174a8f7f-c864-44be-b45c-d548b2df28c8\") " pod="calico-system/csi-node-driver-2t6tn" Oct 30 13:23:37.255969 kubelet[2746]: E1030 13:23:37.255956 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.256035 kubelet[2746]: W1030 13:23:37.256022 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.256105 kubelet[2746]: E1030 13:23:37.256093 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.256301 kubelet[2746]: I1030 13:23:37.256285 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xptf6\" (UniqueName: \"kubernetes.io/projected/174a8f7f-c864-44be-b45c-d548b2df28c8-kube-api-access-xptf6\") pod \"csi-node-driver-2t6tn\" (UID: \"174a8f7f-c864-44be-b45c-d548b2df28c8\") " pod="calico-system/csi-node-driver-2t6tn" Oct 30 13:23:37.256464 kubelet[2746]: E1030 13:23:37.256386 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.256464 kubelet[2746]: W1030 13:23:37.256429 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.256464 kubelet[2746]: E1030 13:23:37.256439 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.256877 kubelet[2746]: E1030 13:23:37.256746 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.256877 kubelet[2746]: W1030 13:23:37.256757 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.256877 kubelet[2746]: E1030 13:23:37.256771 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.257025 kubelet[2746]: E1030 13:23:37.257013 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.257085 kubelet[2746]: W1030 13:23:37.257072 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.257188 kubelet[2746]: E1030 13:23:37.257175 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.257421 kubelet[2746]: E1030 13:23:37.257407 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.257598 kubelet[2746]: W1030 13:23:37.257479 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.257598 kubelet[2746]: E1030 13:23:37.257497 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.257598 kubelet[2746]: I1030 13:23:37.257514 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/174a8f7f-c864-44be-b45c-d548b2df28c8-socket-dir\") pod \"csi-node-driver-2t6tn\" (UID: \"174a8f7f-c864-44be-b45c-d548b2df28c8\") " pod="calico-system/csi-node-driver-2t6tn" Oct 30 13:23:37.257758 kubelet[2746]: E1030 13:23:37.257744 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.257849 kubelet[2746]: W1030 13:23:37.257808 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.257849 kubelet[2746]: E1030 13:23:37.257823 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.258099 kubelet[2746]: E1030 13:23:37.258087 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.258169 kubelet[2746]: W1030 13:23:37.258157 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.258226 kubelet[2746]: E1030 13:23:37.258215 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.258509 kubelet[2746]: I1030 13:23:37.258478 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/174a8f7f-c864-44be-b45c-d548b2df28c8-kubelet-dir\") pod \"csi-node-driver-2t6tn\" (UID: \"174a8f7f-c864-44be-b45c-d548b2df28c8\") " pod="calico-system/csi-node-driver-2t6tn" Oct 30 13:23:37.258696 kubelet[2746]: E1030 13:23:37.258622 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.258696 kubelet[2746]: W1030 13:23:37.258644 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.258696 kubelet[2746]: E1030 13:23:37.258673 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.259025 kubelet[2746]: E1030 13:23:37.258874 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.259025 kubelet[2746]: W1030 13:23:37.258887 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.259025 kubelet[2746]: E1030 13:23:37.258900 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.259110 kubelet[2746]: E1030 13:23:37.259073 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.259110 kubelet[2746]: W1030 13:23:37.259082 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.259110 kubelet[2746]: E1030 13:23:37.259090 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.259202 kubelet[2746]: I1030 13:23:37.259133 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/174a8f7f-c864-44be-b45c-d548b2df28c8-registration-dir\") pod \"csi-node-driver-2t6tn\" (UID: \"174a8f7f-c864-44be-b45c-d548b2df28c8\") " pod="calico-system/csi-node-driver-2t6tn" Oct 30 13:23:37.259346 kubelet[2746]: E1030 13:23:37.259321 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.259346 kubelet[2746]: W1030 13:23:37.259338 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.259412 kubelet[2746]: E1030 13:23:37.259347 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.259537 kubelet[2746]: E1030 13:23:37.259510 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.259537 kubelet[2746]: W1030 13:23:37.259523 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.259537 kubelet[2746]: E1030 13:23:37.259531 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.259714 kubelet[2746]: E1030 13:23:37.259692 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.259714 kubelet[2746]: W1030 13:23:37.259707 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.259714 kubelet[2746]: E1030 13:23:37.259715 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.259880 kubelet[2746]: E1030 13:23:37.259861 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.259880 kubelet[2746]: W1030 13:23:37.259874 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.259935 kubelet[2746]: E1030 13:23:37.259882 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.262264 systemd[1]: Started cri-containerd-c97d20de0118c0c1a379a2882d54dd4f431455c19dbc7386d57057b1a7ecbfc8.scope - libcontainer container c97d20de0118c0c1a379a2882d54dd4f431455c19dbc7386d57057b1a7ecbfc8. Oct 30 13:23:37.299029 kubelet[2746]: E1030 13:23:37.298991 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:23:37.299808 containerd[1604]: time="2025-10-30T13:23:37.299760080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lb8m4,Uid:f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd,Namespace:calico-system,Attempt:0,}" Oct 30 13:23:37.359879 kubelet[2746]: E1030 13:23:37.359784 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.359879 kubelet[2746]: W1030 13:23:37.359803 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.359879 kubelet[2746]: E1030 13:23:37.359822 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.360071 kubelet[2746]: E1030 13:23:37.360024 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.360071 kubelet[2746]: W1030 13:23:37.360032 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.360071 kubelet[2746]: E1030 13:23:37.360041 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.360283 kubelet[2746]: E1030 13:23:37.360263 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.360283 kubelet[2746]: W1030 13:23:37.360277 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.360361 kubelet[2746]: E1030 13:23:37.360289 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.360589 kubelet[2746]: E1030 13:23:37.360560 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.360629 kubelet[2746]: W1030 13:23:37.360586 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.360629 kubelet[2746]: E1030 13:23:37.360623 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.361022 kubelet[2746]: E1030 13:23:37.360989 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.361022 kubelet[2746]: W1030 13:23:37.361004 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.361022 kubelet[2746]: E1030 13:23:37.361022 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.361326 kubelet[2746]: E1030 13:23:37.361253 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.361326 kubelet[2746]: W1030 13:23:37.361262 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.361326 kubelet[2746]: E1030 13:23:37.361297 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.361519 kubelet[2746]: E1030 13:23:37.361493 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.361519 kubelet[2746]: W1030 13:23:37.361502 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.361613 kubelet[2746]: E1030 13:23:37.361529 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.361738 kubelet[2746]: E1030 13:23:37.361698 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.361738 kubelet[2746]: W1030 13:23:37.361721 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.361908 kubelet[2746]: E1030 13:23:37.361854 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.361937 kubelet[2746]: E1030 13:23:37.361909 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.361937 kubelet[2746]: W1030 13:23:37.361918 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.361984 kubelet[2746]: E1030 13:23:37.361967 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.362235 kubelet[2746]: E1030 13:23:37.362218 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.362235 kubelet[2746]: W1030 13:23:37.362230 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.362430 kubelet[2746]: E1030 13:23:37.362412 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.362638 kubelet[2746]: W1030 13:23:37.362620 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.362675 kubelet[2746]: E1030 13:23:37.362636 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.362675 kubelet[2746]: E1030 13:23:37.362402 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.362857 kubelet[2746]: E1030 13:23:37.362840 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.362857 kubelet[2746]: W1030 13:23:37.362853 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.362910 kubelet[2746]: E1030 13:23:37.362870 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.363216 kubelet[2746]: E1030 13:23:37.363193 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.363252 kubelet[2746]: W1030 13:23:37.363213 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.363252 kubelet[2746]: E1030 13:23:37.363239 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.363437 kubelet[2746]: E1030 13:23:37.363419 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.363437 kubelet[2746]: W1030 13:23:37.363430 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.363492 kubelet[2746]: E1030 13:23:37.363457 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.363629 kubelet[2746]: E1030 13:23:37.363613 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.363629 kubelet[2746]: W1030 13:23:37.363624 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.363697 kubelet[2746]: E1030 13:23:37.363649 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.363833 kubelet[2746]: E1030 13:23:37.363816 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.363833 kubelet[2746]: W1030 13:23:37.363827 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.363882 kubelet[2746]: E1030 13:23:37.363857 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.364011 kubelet[2746]: E1030 13:23:37.363995 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.364011 kubelet[2746]: W1030 13:23:37.364008 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.364057 kubelet[2746]: E1030 13:23:37.364033 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.364248 kubelet[2746]: E1030 13:23:37.364233 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.364248 kubelet[2746]: W1030 13:23:37.364244 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.364307 kubelet[2746]: E1030 13:23:37.364258 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.364463 kubelet[2746]: E1030 13:23:37.364445 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.364463 kubelet[2746]: W1030 13:23:37.364459 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.364512 kubelet[2746]: E1030 13:23:37.364474 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.364747 kubelet[2746]: E1030 13:23:37.364725 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.364747 kubelet[2746]: W1030 13:23:37.364738 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.364838 kubelet[2746]: E1030 13:23:37.364756 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.364994 kubelet[2746]: E1030 13:23:37.364967 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.364994 kubelet[2746]: W1030 13:23:37.364980 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.365067 kubelet[2746]: E1030 13:23:37.364997 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:37.490381 containerd[1604]: time="2025-10-30T13:23:37.490311950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-764465896d-mbv4z,Uid:9f0140d9-69dc-4760-adbc-20fe6618be14,Namespace:calico-system,Attempt:0,} returns sandbox id \"c97d20de0118c0c1a379a2882d54dd4f431455c19dbc7386d57057b1a7ecbfc8\"" Oct 30 13:23:37.491262 kubelet[2746]: E1030 13:23:37.491241 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:23:37.492090 containerd[1604]: time="2025-10-30T13:23:37.492033939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 30 13:23:37.541671 kubelet[2746]: E1030 13:23:37.541632 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:37.541671 kubelet[2746]: W1030 13:23:37.541655 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:37.541846 kubelet[2746]: E1030 13:23:37.541677 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:37.630234 containerd[1604]: time="2025-10-30T13:23:37.629312626Z" level=info msg="connecting to shim 3556c5c0ab7ec38c1e599e5a632a4b0767a3bc16cebe1cce2b0c832709e99f94" address="unix:///run/containerd/s/484060912a87f002932d251a61daff68d26988c09944e2bc68411e44cb549b00" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:23:37.686286 systemd[1]: Started cri-containerd-3556c5c0ab7ec38c1e599e5a632a4b0767a3bc16cebe1cce2b0c832709e99f94.scope - libcontainer container 3556c5c0ab7ec38c1e599e5a632a4b0767a3bc16cebe1cce2b0c832709e99f94. 
Oct 30 13:23:37.742322 containerd[1604]: time="2025-10-30T13:23:37.742262926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lb8m4,Uid:f91d18f9-55b8-49c4-9b50-8e2f57f2e5cd,Namespace:calico-system,Attempt:0,} returns sandbox id \"3556c5c0ab7ec38c1e599e5a632a4b0767a3bc16cebe1cce2b0c832709e99f94\"" Oct 30 13:23:37.742998 kubelet[2746]: E1030 13:23:37.742973 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:23:39.134355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1195344841.mount: Deactivated successfully. Oct 30 13:23:39.256557 kubelet[2746]: E1030 13:23:39.256487 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2t6tn" podUID="174a8f7f-c864-44be-b45c-d548b2df28c8" Oct 30 13:23:39.589139 containerd[1604]: time="2025-10-30T13:23:39.589063719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:39.589860 containerd[1604]: time="2025-10-30T13:23:39.589828533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 30 13:23:39.591012 containerd[1604]: time="2025-10-30T13:23:39.590957523Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:39.592985 containerd[1604]: time="2025-10-30T13:23:39.592951363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Oct 30 13:23:39.593468 containerd[1604]: time="2025-10-30T13:23:39.593422792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.101350048s" Oct 30 13:23:39.593505 containerd[1604]: time="2025-10-30T13:23:39.593467068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 30 13:23:39.594354 containerd[1604]: time="2025-10-30T13:23:39.594289295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 30 13:23:39.606703 containerd[1604]: time="2025-10-30T13:23:39.606668072Z" level=info msg="CreateContainer within sandbox \"c97d20de0118c0c1a379a2882d54dd4f431455c19dbc7386d57057b1a7ecbfc8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 30 13:23:39.614207 containerd[1604]: time="2025-10-30T13:23:39.614155477Z" level=info msg="Container e484b5ba8b54e2235400f9b96a6d7f2a4ced61df06e0318e0dfaa97e529b9411: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:23:39.621746 containerd[1604]: time="2025-10-30T13:23:39.621693882Z" level=info msg="CreateContainer within sandbox \"c97d20de0118c0c1a379a2882d54dd4f431455c19dbc7386d57057b1a7ecbfc8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e484b5ba8b54e2235400f9b96a6d7f2a4ced61df06e0318e0dfaa97e529b9411\"" Oct 30 13:23:39.622621 containerd[1604]: time="2025-10-30T13:23:39.622391278Z" level=info msg="StartContainer for \"e484b5ba8b54e2235400f9b96a6d7f2a4ced61df06e0318e0dfaa97e529b9411\"" Oct 30 13:23:39.623478 containerd[1604]: time="2025-10-30T13:23:39.623438743Z" level=info msg="connecting to shim 
e484b5ba8b54e2235400f9b96a6d7f2a4ced61df06e0318e0dfaa97e529b9411" address="unix:///run/containerd/s/7dc3440bc500f2455d92a9d41ca39298cbaa32455a02c764ac001fca5b74482b" protocol=ttrpc version=3 Oct 30 13:23:39.650670 systemd[1]: Started cri-containerd-e484b5ba8b54e2235400f9b96a6d7f2a4ced61df06e0318e0dfaa97e529b9411.scope - libcontainer container e484b5ba8b54e2235400f9b96a6d7f2a4ced61df06e0318e0dfaa97e529b9411. Oct 30 13:23:39.711496 containerd[1604]: time="2025-10-30T13:23:39.711448370Z" level=info msg="StartContainer for \"e484b5ba8b54e2235400f9b96a6d7f2a4ced61df06e0318e0dfaa97e529b9411\" returns successfully" Oct 30 13:23:40.319403 kubelet[2746]: E1030 13:23:40.319348 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:23:40.329293 kubelet[2746]: I1030 13:23:40.329221 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-764465896d-mbv4z" podStartSLOduration=2.22685684 podStartE2EDuration="4.329205317s" podCreationTimestamp="2025-10-30 13:23:36 +0000 UTC" firstStartedPulling="2025-10-30 13:23:37.491780612 +0000 UTC m=+18.326224867" lastFinishedPulling="2025-10-30 13:23:39.594129089 +0000 UTC m=+20.428573344" observedRunningTime="2025-10-30 13:23:40.328976397 +0000 UTC m=+21.163420642" watchObservedRunningTime="2025-10-30 13:23:40.329205317 +0000 UTC m=+21.163649572" Oct 30 13:23:40.368185 kubelet[2746]: E1030 13:23:40.368144 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:40.368185 kubelet[2746]: W1030 13:23:40.368178 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:40.368387 kubelet[2746]: E1030 13:23:40.368205 2746 plugins.go:695] 
"Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:40.368425 kubelet[2746]: E1030 13:23:40.368394 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:40.368425 kubelet[2746]: W1030 13:23:40.368403 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:40.368425 kubelet[2746]: E1030 13:23:40.368412 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:23:40.368610 kubelet[2746]: E1030 13:23:40.368589 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:23:40.368610 kubelet[2746]: W1030 13:23:40.368600 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:23:40.368610 kubelet[2746]: E1030 13:23:40.368609 2746 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:23:41.219076 containerd[1604]: time="2025-10-30T13:23:41.218990811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:41.219906 containerd[1604]: time="2025-10-30T13:23:41.219863908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 30 13:23:41.221157 containerd[1604]: time="2025-10-30T13:23:41.221102783Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:41.223396 containerd[1604]: time="2025-10-30T13:23:41.223366317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:41.223980 containerd[1604]: time="2025-10-30T13:23:41.223944567Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.629623734s" Oct 30 13:23:41.224065 containerd[1604]: time="2025-10-30T13:23:41.223985895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 30 13:23:41.225994 containerd[1604]: time="2025-10-30T13:23:41.225961807Z" level=info msg="CreateContainer within sandbox \"3556c5c0ab7ec38c1e599e5a632a4b0767a3bc16cebe1cce2b0c832709e99f94\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 30 13:23:41.236302 containerd[1604]: time="2025-10-30T13:23:41.236136257Z" level=info msg="Container 3027325ceaeb6ec743f51b38687c22cb3fe626c908a622be28840d1f48158f96: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:23:41.245987 containerd[1604]: time="2025-10-30T13:23:41.245928725Z" level=info msg="CreateContainer within sandbox \"3556c5c0ab7ec38c1e599e5a632a4b0767a3bc16cebe1cce2b0c832709e99f94\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3027325ceaeb6ec743f51b38687c22cb3fe626c908a622be28840d1f48158f96\"" Oct 30 13:23:41.246589 containerd[1604]: time="2025-10-30T13:23:41.246545467Z" level=info msg="StartContainer for \"3027325ceaeb6ec743f51b38687c22cb3fe626c908a622be28840d1f48158f96\"" Oct 30 13:23:41.248873 containerd[1604]: time="2025-10-30T13:23:41.248836530Z" level=info msg="connecting to shim 3027325ceaeb6ec743f51b38687c22cb3fe626c908a622be28840d1f48158f96" address="unix:///run/containerd/s/484060912a87f002932d251a61daff68d26988c09944e2bc68411e44cb549b00" protocol=ttrpc version=3 Oct 30 13:23:41.256314 kubelet[2746]: E1030 13:23:41.256254 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2t6tn" podUID="174a8f7f-c864-44be-b45c-d548b2df28c8" Oct 30 13:23:41.276264 systemd[1]: Started cri-containerd-3027325ceaeb6ec743f51b38687c22cb3fe626c908a622be28840d1f48158f96.scope - libcontainer container 3027325ceaeb6ec743f51b38687c22cb3fe626c908a622be28840d1f48158f96. 
Oct 30 13:23:41.323655 kubelet[2746]: I1030 13:23:41.323612 2746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 30 13:23:41.324323 kubelet[2746]: E1030 13:23:41.324053 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:23:41.366657 systemd[1]: cri-containerd-3027325ceaeb6ec743f51b38687c22cb3fe626c908a622be28840d1f48158f96.scope: Deactivated successfully. Oct 30 13:23:41.368535 containerd[1604]: time="2025-10-30T13:23:41.368488779Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3027325ceaeb6ec743f51b38687c22cb3fe626c908a622be28840d1f48158f96\" id:\"3027325ceaeb6ec743f51b38687c22cb3fe626c908a622be28840d1f48158f96\" pid:3448 exited_at:{seconds:1761830621 nanos:367944792}" Oct 30 13:23:41.372867 containerd[1604]: time="2025-10-30T13:23:41.372819329Z" level=info msg="received exit event container_id:\"3027325ceaeb6ec743f51b38687c22cb3fe626c908a622be28840d1f48158f96\" id:\"3027325ceaeb6ec743f51b38687c22cb3fe626c908a622be28840d1f48158f96\" pid:3448 exited_at:{seconds:1761830621 nanos:367944792}" Oct 30 13:23:41.374579 containerd[1604]: time="2025-10-30T13:23:41.374546524Z" level=info msg="StartContainer for \"3027325ceaeb6ec743f51b38687c22cb3fe626c908a622be28840d1f48158f96\" returns successfully" Oct 30 13:23:41.402668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3027325ceaeb6ec743f51b38687c22cb3fe626c908a622be28840d1f48158f96-rootfs.mount: Deactivated successfully. 
Oct 30 13:23:42.327242 kubelet[2746]: E1030 13:23:42.327205 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:23:42.327972 containerd[1604]: time="2025-10-30T13:23:42.327923429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 30 13:23:43.254755 kubelet[2746]: E1030 13:23:43.254690 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2t6tn" podUID="174a8f7f-c864-44be-b45c-d548b2df28c8" Oct 30 13:23:45.012069 containerd[1604]: time="2025-10-30T13:23:45.012010128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:45.012810 containerd[1604]: time="2025-10-30T13:23:45.012774651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 30 13:23:45.013919 containerd[1604]: time="2025-10-30T13:23:45.013885991Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:45.016158 containerd[1604]: time="2025-10-30T13:23:45.016113110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:45.016787 containerd[1604]: time="2025-10-30T13:23:45.016756662Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo 
digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.688781324s" Oct 30 13:23:45.016820 containerd[1604]: time="2025-10-30T13:23:45.016784530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 30 13:23:45.018822 containerd[1604]: time="2025-10-30T13:23:45.018794951Z" level=info msg="CreateContainer within sandbox \"3556c5c0ab7ec38c1e599e5a632a4b0767a3bc16cebe1cce2b0c832709e99f94\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 30 13:23:45.029956 containerd[1604]: time="2025-10-30T13:23:45.029903853Z" level=info msg="Container 6a1e624c0b169abf686cee6f8c35396e02a343eab8b7a8b60963620abde5e377: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:23:45.039142 containerd[1604]: time="2025-10-30T13:23:45.038165040Z" level=info msg="CreateContainer within sandbox \"3556c5c0ab7ec38c1e599e5a632a4b0767a3bc16cebe1cce2b0c832709e99f94\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6a1e624c0b169abf686cee6f8c35396e02a343eab8b7a8b60963620abde5e377\"" Oct 30 13:23:45.041146 containerd[1604]: time="2025-10-30T13:23:45.041079172Z" level=info msg="StartContainer for \"6a1e624c0b169abf686cee6f8c35396e02a343eab8b7a8b60963620abde5e377\"" Oct 30 13:23:45.042791 containerd[1604]: time="2025-10-30T13:23:45.042758057Z" level=info msg="connecting to shim 6a1e624c0b169abf686cee6f8c35396e02a343eab8b7a8b60963620abde5e377" address="unix:///run/containerd/s/484060912a87f002932d251a61daff68d26988c09944e2bc68411e44cb549b00" protocol=ttrpc version=3 Oct 30 13:23:45.069271 systemd[1]: Started cri-containerd-6a1e624c0b169abf686cee6f8c35396e02a343eab8b7a8b60963620abde5e377.scope - libcontainer container 6a1e624c0b169abf686cee6f8c35396e02a343eab8b7a8b60963620abde5e377. 
Oct 30 13:23:45.119154 containerd[1604]: time="2025-10-30T13:23:45.119098006Z" level=info msg="StartContainer for \"6a1e624c0b169abf686cee6f8c35396e02a343eab8b7a8b60963620abde5e377\" returns successfully" Oct 30 13:23:45.255424 kubelet[2746]: E1030 13:23:45.255245 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2t6tn" podUID="174a8f7f-c864-44be-b45c-d548b2df28c8" Oct 30 13:23:45.336184 kubelet[2746]: E1030 13:23:45.336145 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:23:46.337646 kubelet[2746]: E1030 13:23:46.337606 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:23:46.417151 systemd[1]: cri-containerd-6a1e624c0b169abf686cee6f8c35396e02a343eab8b7a8b60963620abde5e377.scope: Deactivated successfully. 
Oct 30 13:23:46.417984 containerd[1604]: time="2025-10-30T13:23:46.417929790Z" level=info msg="received exit event container_id:\"6a1e624c0b169abf686cee6f8c35396e02a343eab8b7a8b60963620abde5e377\" id:\"6a1e624c0b169abf686cee6f8c35396e02a343eab8b7a8b60963620abde5e377\" pid:3508 exited_at:{seconds:1761830626 nanos:417703082}" Oct 30 13:23:46.418432 containerd[1604]: time="2025-10-30T13:23:46.418007559Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a1e624c0b169abf686cee6f8c35396e02a343eab8b7a8b60963620abde5e377\" id:\"6a1e624c0b169abf686cee6f8c35396e02a343eab8b7a8b60963620abde5e377\" pid:3508 exited_at:{seconds:1761830626 nanos:417703082}" Oct 30 13:23:46.418013 systemd[1]: cri-containerd-6a1e624c0b169abf686cee6f8c35396e02a343eab8b7a8b60963620abde5e377.scope: Consumed 629ms CPU time, 178.6M memory peak, 3.9M read from disk, 171.3M written to disk. Oct 30 13:23:46.454419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a1e624c0b169abf686cee6f8c35396e02a343eab8b7a8b60963620abde5e377-rootfs.mount: Deactivated successfully. Oct 30 13:23:46.523156 kubelet[2746]: I1030 13:23:46.522516 2746 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 30 13:23:46.722362 systemd[1]: Created slice kubepods-burstable-pod413d6b3f_f010_4e44_b7d6_60a3ec02eda4.slice - libcontainer container kubepods-burstable-pod413d6b3f_f010_4e44_b7d6_60a3ec02eda4.slice. Oct 30 13:23:46.729544 systemd[1]: Created slice kubepods-besteffort-pod43bc02bf_5550_4f0b_901f_d8dafc5e7d95.slice - libcontainer container kubepods-besteffort-pod43bc02bf_5550_4f0b_901f_d8dafc5e7d95.slice. Oct 30 13:23:46.743073 systemd[1]: Created slice kubepods-besteffort-pod789594bd_b894_4820_937c_e5586bffb18c.slice - libcontainer container kubepods-besteffort-pod789594bd_b894_4820_937c_e5586bffb18c.slice. 
Oct 30 13:23:46.750375 systemd[1]: Created slice kubepods-besteffort-pod9d8a307a_f326_4a48_b1d2_dfbdf32a2608.slice - libcontainer container kubepods-besteffort-pod9d8a307a_f326_4a48_b1d2_dfbdf32a2608.slice. Oct 30 13:23:46.758412 systemd[1]: Created slice kubepods-besteffort-pode6887d28_f8bd_4d4f_b72b_6f4de4992ef6.slice - libcontainer container kubepods-besteffort-pode6887d28_f8bd_4d4f_b72b_6f4de4992ef6.slice. Oct 30 13:23:46.763463 systemd[1]: Created slice kubepods-burstable-podb3ef1af1_9e52_4beb_a533_c27d9b225aed.slice - libcontainer container kubepods-burstable-podb3ef1af1_9e52_4beb_a533_c27d9b225aed.slice. Oct 30 13:23:46.769908 systemd[1]: Created slice kubepods-besteffort-pod6ec85b99_8a3f_41cb_bb15_d4714acb86dc.slice - libcontainer container kubepods-besteffort-pod6ec85b99_8a3f_41cb_bb15_d4714acb86dc.slice. Oct 30 13:23:46.829610 kubelet[2746]: I1030 13:23:46.829543 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ec85b99-8a3f-41cb-bb15-d4714acb86dc-tigera-ca-bundle\") pod \"calico-kube-controllers-77b486d6f4-rp89s\" (UID: \"6ec85b99-8a3f-41cb-bb15-d4714acb86dc\") " pod="calico-system/calico-kube-controllers-77b486d6f4-rp89s" Oct 30 13:23:46.829610 kubelet[2746]: I1030 13:23:46.829590 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/413d6b3f-f010-4e44-b7d6-60a3ec02eda4-config-volume\") pod \"coredns-668d6bf9bc-m4gfm\" (UID: \"413d6b3f-f010-4e44-b7d6-60a3ec02eda4\") " pod="kube-system/coredns-668d6bf9bc-m4gfm" Oct 30 13:23:46.829610 kubelet[2746]: I1030 13:23:46.829611 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmgjj\" (UniqueName: \"kubernetes.io/projected/413d6b3f-f010-4e44-b7d6-60a3ec02eda4-kube-api-access-rmgjj\") pod \"coredns-668d6bf9bc-m4gfm\" (UID: 
\"413d6b3f-f010-4e44-b7d6-60a3ec02eda4\") " pod="kube-system/coredns-668d6bf9bc-m4gfm" Oct 30 13:23:46.829824 kubelet[2746]: I1030 13:23:46.829628 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkpl8\" (UniqueName: \"kubernetes.io/projected/e6887d28-f8bd-4d4f-b72b-6f4de4992ef6-kube-api-access-rkpl8\") pod \"calico-apiserver-9656b5c49-956xq\" (UID: \"e6887d28-f8bd-4d4f-b72b-6f4de4992ef6\") " pod="calico-apiserver/calico-apiserver-9656b5c49-956xq" Oct 30 13:23:46.829824 kubelet[2746]: I1030 13:23:46.829646 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8a307a-f326-4a48-b1d2-dfbdf32a2608-config\") pod \"goldmane-666569f655-jhg5h\" (UID: \"9d8a307a-f326-4a48-b1d2-dfbdf32a2608\") " pod="calico-system/goldmane-666569f655-jhg5h" Oct 30 13:23:46.829824 kubelet[2746]: I1030 13:23:46.829661 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nwng\" (UniqueName: \"kubernetes.io/projected/9d8a307a-f326-4a48-b1d2-dfbdf32a2608-kube-api-access-8nwng\") pod \"goldmane-666569f655-jhg5h\" (UID: \"9d8a307a-f326-4a48-b1d2-dfbdf32a2608\") " pod="calico-system/goldmane-666569f655-jhg5h" Oct 30 13:23:46.829824 kubelet[2746]: I1030 13:23:46.829701 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b6f5\" (UniqueName: \"kubernetes.io/projected/6ec85b99-8a3f-41cb-bb15-d4714acb86dc-kube-api-access-4b6f5\") pod \"calico-kube-controllers-77b486d6f4-rp89s\" (UID: \"6ec85b99-8a3f-41cb-bb15-d4714acb86dc\") " pod="calico-system/calico-kube-controllers-77b486d6f4-rp89s" Oct 30 13:23:46.829824 kubelet[2746]: I1030 13:23:46.829719 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cntxk\" (UniqueName: 
\"kubernetes.io/projected/43bc02bf-5550-4f0b-901f-d8dafc5e7d95-kube-api-access-cntxk\") pod \"whisker-849cd785f4-tjp4q\" (UID: \"43bc02bf-5550-4f0b-901f-d8dafc5e7d95\") " pod="calico-system/whisker-849cd785f4-tjp4q" Oct 30 13:23:46.829965 kubelet[2746]: I1030 13:23:46.829746 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d8a307a-f326-4a48-b1d2-dfbdf32a2608-goldmane-ca-bundle\") pod \"goldmane-666569f655-jhg5h\" (UID: \"9d8a307a-f326-4a48-b1d2-dfbdf32a2608\") " pod="calico-system/goldmane-666569f655-jhg5h" Oct 30 13:23:46.829965 kubelet[2746]: I1030 13:23:46.829766 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/789594bd-b894-4820-937c-e5586bffb18c-calico-apiserver-certs\") pod \"calico-apiserver-9656b5c49-8rh5p\" (UID: \"789594bd-b894-4820-937c-e5586bffb18c\") " pod="calico-apiserver/calico-apiserver-9656b5c49-8rh5p" Oct 30 13:23:46.829965 kubelet[2746]: I1030 13:23:46.829895 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43bc02bf-5550-4f0b-901f-d8dafc5e7d95-whisker-ca-bundle\") pod \"whisker-849cd785f4-tjp4q\" (UID: \"43bc02bf-5550-4f0b-901f-d8dafc5e7d95\") " pod="calico-system/whisker-849cd785f4-tjp4q" Oct 30 13:23:46.829965 kubelet[2746]: I1030 13:23:46.829965 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9d8a307a-f326-4a48-b1d2-dfbdf32a2608-goldmane-key-pair\") pod \"goldmane-666569f655-jhg5h\" (UID: \"9d8a307a-f326-4a48-b1d2-dfbdf32a2608\") " pod="calico-system/goldmane-666569f655-jhg5h" Oct 30 13:23:46.830074 kubelet[2746]: I1030 13:23:46.830009 2746 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e6887d28-f8bd-4d4f-b72b-6f4de4992ef6-calico-apiserver-certs\") pod \"calico-apiserver-9656b5c49-956xq\" (UID: \"e6887d28-f8bd-4d4f-b72b-6f4de4992ef6\") " pod="calico-apiserver/calico-apiserver-9656b5c49-956xq" Oct 30 13:23:46.830074 kubelet[2746]: I1030 13:23:46.830030 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3ef1af1-9e52-4beb-a533-c27d9b225aed-config-volume\") pod \"coredns-668d6bf9bc-frj2l\" (UID: \"b3ef1af1-9e52-4beb-a533-c27d9b225aed\") " pod="kube-system/coredns-668d6bf9bc-frj2l" Oct 30 13:23:46.830074 kubelet[2746]: I1030 13:23:46.830058 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hptj8\" (UniqueName: \"kubernetes.io/projected/789594bd-b894-4820-937c-e5586bffb18c-kube-api-access-hptj8\") pod \"calico-apiserver-9656b5c49-8rh5p\" (UID: \"789594bd-b894-4820-937c-e5586bffb18c\") " pod="calico-apiserver/calico-apiserver-9656b5c49-8rh5p" Oct 30 13:23:46.830183 kubelet[2746]: I1030 13:23:46.830147 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/43bc02bf-5550-4f0b-901f-d8dafc5e7d95-whisker-backend-key-pair\") pod \"whisker-849cd785f4-tjp4q\" (UID: \"43bc02bf-5550-4f0b-901f-d8dafc5e7d95\") " pod="calico-system/whisker-849cd785f4-tjp4q" Oct 30 13:23:46.830208 kubelet[2746]: I1030 13:23:46.830189 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsrm7\" (UniqueName: \"kubernetes.io/projected/b3ef1af1-9e52-4beb-a533-c27d9b225aed-kube-api-access-jsrm7\") pod \"coredns-668d6bf9bc-frj2l\" (UID: \"b3ef1af1-9e52-4beb-a533-c27d9b225aed\") " 
pod="kube-system/coredns-668d6bf9bc-frj2l" Oct 30 13:23:47.027326 kubelet[2746]: E1030 13:23:47.027277 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:23:47.027871 containerd[1604]: time="2025-10-30T13:23:47.027833375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m4gfm,Uid:413d6b3f-f010-4e44-b7d6-60a3ec02eda4,Namespace:kube-system,Attempt:0,}" Oct 30 13:23:47.038622 containerd[1604]: time="2025-10-30T13:23:47.038568601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-849cd785f4-tjp4q,Uid:43bc02bf-5550-4f0b-901f-d8dafc5e7d95,Namespace:calico-system,Attempt:0,}" Oct 30 13:23:47.048418 containerd[1604]: time="2025-10-30T13:23:47.048391410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9656b5c49-8rh5p,Uid:789594bd-b894-4820-937c-e5586bffb18c,Namespace:calico-apiserver,Attempt:0,}" Oct 30 13:23:47.063588 containerd[1604]: time="2025-10-30T13:23:47.063544055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9656b5c49-956xq,Uid:e6887d28-f8bd-4d4f-b72b-6f4de4992ef6,Namespace:calico-apiserver,Attempt:0,}" Oct 30 13:23:47.064434 containerd[1604]: time="2025-10-30T13:23:47.064216040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jhg5h,Uid:9d8a307a-f326-4a48-b1d2-dfbdf32a2608,Namespace:calico-system,Attempt:0,}" Oct 30 13:23:47.068303 kubelet[2746]: E1030 13:23:47.068276 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:23:47.072534 containerd[1604]: time="2025-10-30T13:23:47.072469057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-frj2l,Uid:b3ef1af1-9e52-4beb-a533-c27d9b225aed,Namespace:kube-system,Attempt:0,}" Oct 30 
13:23:47.075052 containerd[1604]: time="2025-10-30T13:23:47.075017722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77b486d6f4-rp89s,Uid:6ec85b99-8a3f-41cb-bb15-d4714acb86dc,Namespace:calico-system,Attempt:0,}" Oct 30 13:23:47.203059 containerd[1604]: time="2025-10-30T13:23:47.202913186Z" level=error msg="Failed to destroy network for sandbox \"173f0eb3062cf03200fbf2d522f8473d7e9ca2f30005988dcac7196b314dac2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:23:47.208855 containerd[1604]: time="2025-10-30T13:23:47.208805893Z" level=error msg="Failed to destroy network for sandbox \"d88a285ef44779d3b27252799b50fba9bac0db7420722b51690948a31f89d4a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:23:47.213111 containerd[1604]: time="2025-10-30T13:23:47.212892856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m4gfm,Uid:413d6b3f-f010-4e44-b7d6-60a3ec02eda4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"173f0eb3062cf03200fbf2d522f8473d7e9ca2f30005988dcac7196b314dac2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:23:47.217881 containerd[1604]: time="2025-10-30T13:23:47.217802252Z" level=error msg="Failed to destroy network for sandbox \"1c296cdebe8ae482e8ec0c207d484244bcf0ebd7c09ed2d8cbecdf4a5c04f37c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Oct 30 13:23:47.226868 containerd[1604]: time="2025-10-30T13:23:47.226806495Z" level=error msg="Failed to destroy network for sandbox \"3bea74ef4e80815d3285366a89c3e2b05cc211a69ad09a8135bbba268c9527fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:23:47.235288 containerd[1604]: time="2025-10-30T13:23:47.235246946Z" level=error msg="Failed to destroy network for sandbox \"141b9deacd867ca16468e6e3590f4a35d894d97a86f66595ac9a71f2cf833cad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:23:47.242090 containerd[1604]: time="2025-10-30T13:23:47.241961706Z" level=error msg="Failed to destroy network for sandbox \"d7a111e7b7672a20c8f03237e6fe10f05e6e0c7d4760518dbc4272df83c2fd29\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:23:47.243311 containerd[1604]: time="2025-10-30T13:23:47.243256324Z" level=error msg="Failed to destroy network for sandbox \"5eef1a31be6fb39538e28e7f4e6ff15767c9f353f39ebb50ee891f74a43dc7b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:23:47.244560 kubelet[2746]: E1030 13:23:47.244492 2746 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"173f0eb3062cf03200fbf2d522f8473d7e9ca2f30005988dcac7196b314dac2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 30 13:23:47.244804 kubelet[2746]: E1030 13:23:47.244577 2746 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"173f0eb3062cf03200fbf2d522f8473d7e9ca2f30005988dcac7196b314dac2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-m4gfm" Oct 30 13:23:47.244804 kubelet[2746]: E1030 13:23:47.244602 2746 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"173f0eb3062cf03200fbf2d522f8473d7e9ca2f30005988dcac7196b314dac2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-m4gfm" Oct 30 13:23:47.244804 kubelet[2746]: E1030 13:23:47.244658 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-m4gfm_kube-system(413d6b3f-f010-4e44-b7d6-60a3ec02eda4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-m4gfm_kube-system(413d6b3f-f010-4e44-b7d6-60a3ec02eda4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"173f0eb3062cf03200fbf2d522f8473d7e9ca2f30005988dcac7196b314dac2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-m4gfm" podUID="413d6b3f-f010-4e44-b7d6-60a3ec02eda4" Oct 30 13:23:47.260816 systemd[1]: Created slice kubepods-besteffort-pod174a8f7f_c864_44be_b45c_d548b2df28c8.slice - libcontainer container 
kubepods-besteffort-pod174a8f7f_c864_44be_b45c_d548b2df28c8.slice. Oct 30 13:23:47.263053 containerd[1604]: time="2025-10-30T13:23:47.263018631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2t6tn,Uid:174a8f7f-c864-44be-b45c-d548b2df28c8,Namespace:calico-system,Attempt:0,}" Oct 30 13:23:47.309888 containerd[1604]: time="2025-10-30T13:23:47.309757887Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-849cd785f4-tjp4q,Uid:43bc02bf-5550-4f0b-901f-d8dafc5e7d95,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d88a285ef44779d3b27252799b50fba9bac0db7420722b51690948a31f89d4a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:23:47.310038 kubelet[2746]: E1030 13:23:47.309980 2746 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d88a285ef44779d3b27252799b50fba9bac0db7420722b51690948a31f89d4a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:23:47.310095 kubelet[2746]: E1030 13:23:47.310060 2746 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d88a285ef44779d3b27252799b50fba9bac0db7420722b51690948a31f89d4a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-849cd785f4-tjp4q" Oct 30 13:23:47.310152 kubelet[2746]: E1030 13:23:47.310102 2746 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"d88a285ef44779d3b27252799b50fba9bac0db7420722b51690948a31f89d4a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-849cd785f4-tjp4q" Oct 30 13:23:47.310442 kubelet[2746]: E1030 13:23:47.310187 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-849cd785f4-tjp4q_calico-system(43bc02bf-5550-4f0b-901f-d8dafc5e7d95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-849cd785f4-tjp4q_calico-system(43bc02bf-5550-4f0b-901f-d8dafc5e7d95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d88a285ef44779d3b27252799b50fba9bac0db7420722b51690948a31f89d4a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-849cd785f4-tjp4q" podUID="43bc02bf-5550-4f0b-901f-d8dafc5e7d95" Oct 30 13:23:47.320502 containerd[1604]: time="2025-10-30T13:23:47.320435104Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77b486d6f4-rp89s,Uid:6ec85b99-8a3f-41cb-bb15-d4714acb86dc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c296cdebe8ae482e8ec0c207d484244bcf0ebd7c09ed2d8cbecdf4a5c04f37c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:23:47.320694 kubelet[2746]: E1030 13:23:47.320654 2746 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c296cdebe8ae482e8ec0c207d484244bcf0ebd7c09ed2d8cbecdf4a5c04f37c\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:23:47.320694 kubelet[2746]: E1030 13:23:47.320687 2746 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c296cdebe8ae482e8ec0c207d484244bcf0ebd7c09ed2d8cbecdf4a5c04f37c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77b486d6f4-rp89s" Oct 30 13:23:47.320893 kubelet[2746]: E1030 13:23:47.320702 2746 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c296cdebe8ae482e8ec0c207d484244bcf0ebd7c09ed2d8cbecdf4a5c04f37c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77b486d6f4-rp89s" Oct 30 13:23:47.320893 kubelet[2746]: E1030 13:23:47.320736 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-77b486d6f4-rp89s_calico-system(6ec85b99-8a3f-41cb-bb15-d4714acb86dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-77b486d6f4-rp89s_calico-system(6ec85b99-8a3f-41cb-bb15-d4714acb86dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c296cdebe8ae482e8ec0c207d484244bcf0ebd7c09ed2d8cbecdf4a5c04f37c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77b486d6f4-rp89s" 
podUID="6ec85b99-8a3f-41cb-bb15-d4714acb86dc" Oct 30 13:23:47.322112 containerd[1604]: time="2025-10-30T13:23:47.322042232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jhg5h,Uid:9d8a307a-f326-4a48-b1d2-dfbdf32a2608,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bea74ef4e80815d3285366a89c3e2b05cc211a69ad09a8135bbba268c9527fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:23:47.322357 kubelet[2746]: E1030 13:23:47.322287 2746 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bea74ef4e80815d3285366a89c3e2b05cc211a69ad09a8135bbba268c9527fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:23:47.322430 kubelet[2746]: E1030 13:23:47.322364 2746 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bea74ef4e80815d3285366a89c3e2b05cc211a69ad09a8135bbba268c9527fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-jhg5h" Oct 30 13:23:47.322430 kubelet[2746]: E1030 13:23:47.322384 2746 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bea74ef4e80815d3285366a89c3e2b05cc211a69ad09a8135bbba268c9527fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/goldmane-666569f655-jhg5h" Oct 30 13:23:47.322483 kubelet[2746]: E1030 13:23:47.322422 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-jhg5h_calico-system(9d8a307a-f326-4a48-b1d2-dfbdf32a2608)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-jhg5h_calico-system(9d8a307a-f326-4a48-b1d2-dfbdf32a2608)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3bea74ef4e80815d3285366a89c3e2b05cc211a69ad09a8135bbba268c9527fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-jhg5h" podUID="9d8a307a-f326-4a48-b1d2-dfbdf32a2608" Oct 30 13:23:47.323081 containerd[1604]: time="2025-10-30T13:23:47.323040275Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9656b5c49-8rh5p,Uid:789594bd-b894-4820-937c-e5586bffb18c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"141b9deacd867ca16468e6e3590f4a35d894d97a86f66595ac9a71f2cf833cad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:23:47.323259 kubelet[2746]: E1030 13:23:47.323228 2746 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"141b9deacd867ca16468e6e3590f4a35d894d97a86f66595ac9a71f2cf833cad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:23:47.323312 kubelet[2746]: E1030 13:23:47.323265 2746 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"141b9deacd867ca16468e6e3590f4a35d894d97a86f66595ac9a71f2cf833cad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9656b5c49-8rh5p" Oct 30 13:23:47.323312 kubelet[2746]: E1030 13:23:47.323288 2746 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"141b9deacd867ca16468e6e3590f4a35d894d97a86f66595ac9a71f2cf833cad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9656b5c49-8rh5p" Oct 30 13:23:47.323368 kubelet[2746]: E1030 13:23:47.323321 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9656b5c49-8rh5p_calico-apiserver(789594bd-b894-4820-937c-e5586bffb18c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9656b5c49-8rh5p_calico-apiserver(789594bd-b894-4820-937c-e5586bffb18c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"141b9deacd867ca16468e6e3590f4a35d894d97a86f66595ac9a71f2cf833cad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9656b5c49-8rh5p" podUID="789594bd-b894-4820-937c-e5586bffb18c" Oct 30 13:23:47.324171 containerd[1604]: time="2025-10-30T13:23:47.324108310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-frj2l,Uid:b3ef1af1-9e52-4beb-a533-c27d9b225aed,Namespace:kube-system,Attempt:0,} failed, 
error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7a111e7b7672a20c8f03237e6fe10f05e6e0c7d4760518dbc4272df83c2fd29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:23:47.324691 kubelet[2746]: E1030 13:23:47.324246 2746 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7a111e7b7672a20c8f03237e6fe10f05e6e0c7d4760518dbc4272df83c2fd29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:23:47.324691 kubelet[2746]: E1030 13:23:47.324276 2746 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7a111e7b7672a20c8f03237e6fe10f05e6e0c7d4760518dbc4272df83c2fd29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-frj2l" Oct 30 13:23:47.324691 kubelet[2746]: E1030 13:23:47.324290 2746 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7a111e7b7672a20c8f03237e6fe10f05e6e0c7d4760518dbc4272df83c2fd29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-frj2l" Oct 30 13:23:47.324793 kubelet[2746]: E1030 13:23:47.324319 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-frj2l_kube-system(b3ef1af1-9e52-4beb-a533-c27d9b225aed)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-frj2l_kube-system(b3ef1af1-9e52-4beb-a533-c27d9b225aed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7a111e7b7672a20c8f03237e6fe10f05e6e0c7d4760518dbc4272df83c2fd29\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-frj2l" podUID="b3ef1af1-9e52-4beb-a533-c27d9b225aed" Oct 30 13:23:47.325203 containerd[1604]: time="2025-10-30T13:23:47.325162487Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9656b5c49-956xq,Uid:e6887d28-f8bd-4d4f-b72b-6f4de4992ef6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eef1a31be6fb39538e28e7f4e6ff15767c9f353f39ebb50ee891f74a43dc7b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:23:47.325363 kubelet[2746]: E1030 13:23:47.325299 2746 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eef1a31be6fb39538e28e7f4e6ff15767c9f353f39ebb50ee891f74a43dc7b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:23:47.325363 kubelet[2746]: E1030 13:23:47.325326 2746 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eef1a31be6fb39538e28e7f4e6ff15767c9f353f39ebb50ee891f74a43dc7b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9656b5c49-956xq" Oct 30 13:23:47.325363 kubelet[2746]: E1030 13:23:47.325341 2746 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eef1a31be6fb39538e28e7f4e6ff15767c9f353f39ebb50ee891f74a43dc7b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9656b5c49-956xq" Oct 30 13:23:47.325555 kubelet[2746]: E1030 13:23:47.325366 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9656b5c49-956xq_calico-apiserver(e6887d28-f8bd-4d4f-b72b-6f4de4992ef6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9656b5c49-956xq_calico-apiserver(e6887d28-f8bd-4d4f-b72b-6f4de4992ef6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5eef1a31be6fb39538e28e7f4e6ff15767c9f353f39ebb50ee891f74a43dc7b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9656b5c49-956xq" podUID="e6887d28-f8bd-4d4f-b72b-6f4de4992ef6" Oct 30 13:23:47.341894 kubelet[2746]: E1030 13:23:47.341855 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:23:47.344186 containerd[1604]: time="2025-10-30T13:23:47.344153967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 30 13:23:47.380954 containerd[1604]: time="2025-10-30T13:23:47.380894764Z" level=error msg="Failed to destroy network for sandbox 
\"781f88b6aa6aecfac52ac80cd7f8e40742d1b8be2bee91c774d3f5cd8dcc1df0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:23:47.435148 containerd[1604]: time="2025-10-30T13:23:47.435083352Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2t6tn,Uid:174a8f7f-c864-44be-b45c-d548b2df28c8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"781f88b6aa6aecfac52ac80cd7f8e40742d1b8be2bee91c774d3f5cd8dcc1df0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:23:47.435559 kubelet[2746]: E1030 13:23:47.435349 2746 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"781f88b6aa6aecfac52ac80cd7f8e40742d1b8be2bee91c774d3f5cd8dcc1df0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:23:47.435559 kubelet[2746]: E1030 13:23:47.435417 2746 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"781f88b6aa6aecfac52ac80cd7f8e40742d1b8be2bee91c774d3f5cd8dcc1df0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2t6tn" Oct 30 13:23:47.435559 kubelet[2746]: E1030 13:23:47.435439 2746 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"781f88b6aa6aecfac52ac80cd7f8e40742d1b8be2bee91c774d3f5cd8dcc1df0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2t6tn" Oct 30 13:23:47.435676 kubelet[2746]: E1030 13:23:47.435483 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2t6tn_calico-system(174a8f7f-c864-44be-b45c-d548b2df28c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2t6tn_calico-system(174a8f7f-c864-44be-b45c-d548b2df28c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"781f88b6aa6aecfac52ac80cd7f8e40742d1b8be2bee91c774d3f5cd8dcc1df0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2t6tn" podUID="174a8f7f-c864-44be-b45c-d548b2df28c8" Oct 30 13:23:54.517082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount999181639.mount: Deactivated successfully. 
Oct 30 13:23:55.345330 containerd[1604]: time="2025-10-30T13:23:55.345248408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:55.346272 containerd[1604]: time="2025-10-30T13:23:55.346248067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 30 13:23:55.347717 containerd[1604]: time="2025-10-30T13:23:55.347665251Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:55.349596 containerd[1604]: time="2025-10-30T13:23:55.349560529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:23:55.350048 containerd[1604]: time="2025-10-30T13:23:55.349995889Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.005288139s" Oct 30 13:23:55.350086 containerd[1604]: time="2025-10-30T13:23:55.350047333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 30 13:23:55.361848 containerd[1604]: time="2025-10-30T13:23:55.361791230Z" level=info msg="CreateContainer within sandbox \"3556c5c0ab7ec38c1e599e5a632a4b0767a3bc16cebe1cce2b0c832709e99f94\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 30 13:23:55.382637 containerd[1604]: time="2025-10-30T13:23:55.382580170Z" level=info msg="Container 
30c419851e2ccd4a9edb2e6339098931e8d8a03d776dffd6bc21cf0e0f3d50cb: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:23:55.393218 containerd[1604]: time="2025-10-30T13:23:55.393165166Z" level=info msg="CreateContainer within sandbox \"3556c5c0ab7ec38c1e599e5a632a4b0767a3bc16cebe1cce2b0c832709e99f94\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"30c419851e2ccd4a9edb2e6339098931e8d8a03d776dffd6bc21cf0e0f3d50cb\"" Oct 30 13:23:55.394168 containerd[1604]: time="2025-10-30T13:23:55.393681469Z" level=info msg="StartContainer for \"30c419851e2ccd4a9edb2e6339098931e8d8a03d776dffd6bc21cf0e0f3d50cb\"" Oct 30 13:23:55.395373 containerd[1604]: time="2025-10-30T13:23:55.395340471Z" level=info msg="connecting to shim 30c419851e2ccd4a9edb2e6339098931e8d8a03d776dffd6bc21cf0e0f3d50cb" address="unix:///run/containerd/s/484060912a87f002932d251a61daff68d26988c09944e2bc68411e44cb549b00" protocol=ttrpc version=3 Oct 30 13:23:55.430315 systemd[1]: Started cri-containerd-30c419851e2ccd4a9edb2e6339098931e8d8a03d776dffd6bc21cf0e0f3d50cb.scope - libcontainer container 30c419851e2ccd4a9edb2e6339098931e8d8a03d776dffd6bc21cf0e0f3d50cb. Oct 30 13:23:55.482511 containerd[1604]: time="2025-10-30T13:23:55.482456084Z" level=info msg="StartContainer for \"30c419851e2ccd4a9edb2e6339098931e8d8a03d776dffd6bc21cf0e0f3d50cb\" returns successfully" Oct 30 13:23:55.564709 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 30 13:23:55.566268 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 30 13:23:55.988444 kubelet[2746]: I1030 13:23:55.988394 2746 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43bc02bf-5550-4f0b-901f-d8dafc5e7d95-whisker-ca-bundle\") pod \"43bc02bf-5550-4f0b-901f-d8dafc5e7d95\" (UID: \"43bc02bf-5550-4f0b-901f-d8dafc5e7d95\") " Oct 30 13:23:55.988444 kubelet[2746]: I1030 13:23:55.988446 2746 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cntxk\" (UniqueName: \"kubernetes.io/projected/43bc02bf-5550-4f0b-901f-d8dafc5e7d95-kube-api-access-cntxk\") pod \"43bc02bf-5550-4f0b-901f-d8dafc5e7d95\" (UID: \"43bc02bf-5550-4f0b-901f-d8dafc5e7d95\") " Oct 30 13:23:55.988987 kubelet[2746]: I1030 13:23:55.988474 2746 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/43bc02bf-5550-4f0b-901f-d8dafc5e7d95-whisker-backend-key-pair\") pod \"43bc02bf-5550-4f0b-901f-d8dafc5e7d95\" (UID: \"43bc02bf-5550-4f0b-901f-d8dafc5e7d95\") " Oct 30 13:23:55.992882 kubelet[2746]: I1030 13:23:55.992764 2746 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43bc02bf-5550-4f0b-901f-d8dafc5e7d95-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "43bc02bf-5550-4f0b-901f-d8dafc5e7d95" (UID: "43bc02bf-5550-4f0b-901f-d8dafc5e7d95"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 30 13:23:56.000155 kubelet[2746]: I1030 13:23:55.998220 2746 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43bc02bf-5550-4f0b-901f-d8dafc5e7d95-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "43bc02bf-5550-4f0b-901f-d8dafc5e7d95" (UID: "43bc02bf-5550-4f0b-901f-d8dafc5e7d95"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 30 13:23:56.000155 kubelet[2746]: I1030 13:23:55.998390 2746 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43bc02bf-5550-4f0b-901f-d8dafc5e7d95-kube-api-access-cntxk" (OuterVolumeSpecName: "kube-api-access-cntxk") pod "43bc02bf-5550-4f0b-901f-d8dafc5e7d95" (UID: "43bc02bf-5550-4f0b-901f-d8dafc5e7d95"). InnerVolumeSpecName "kube-api-access-cntxk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 30 13:23:55.999412 systemd[1]: var-lib-kubelet-pods-43bc02bf\x2d5550\x2d4f0b\x2d901f\x2dd8dafc5e7d95-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 30 13:23:56.003973 systemd[1]: var-lib-kubelet-pods-43bc02bf\x2d5550\x2d4f0b\x2d901f\x2dd8dafc5e7d95-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcntxk.mount: Deactivated successfully. Oct 30 13:23:56.089274 kubelet[2746]: I1030 13:23:56.089219 2746 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43bc02bf-5550-4f0b-901f-d8dafc5e7d95-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 30 13:23:56.089274 kubelet[2746]: I1030 13:23:56.089259 2746 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cntxk\" (UniqueName: \"kubernetes.io/projected/43bc02bf-5550-4f0b-901f-d8dafc5e7d95-kube-api-access-cntxk\") on node \"localhost\" DevicePath \"\"" Oct 30 13:23:56.089274 kubelet[2746]: I1030 13:23:56.089268 2746 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/43bc02bf-5550-4f0b-901f-d8dafc5e7d95-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 30 13:23:56.363813 kubelet[2746]: E1030 13:23:56.363531 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Oct 30 13:23:56.381201 systemd[1]: Removed slice kubepods-besteffort-pod43bc02bf_5550_4f0b_901f_d8dafc5e7d95.slice - libcontainer container kubepods-besteffort-pod43bc02bf_5550_4f0b_901f_d8dafc5e7d95.slice. Oct 30 13:23:56.480874 kubelet[2746]: I1030 13:23:56.480772 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lb8m4" podStartSLOduration=2.873376212 podStartE2EDuration="20.48073068s" podCreationTimestamp="2025-10-30 13:23:36 +0000 UTC" firstStartedPulling="2025-10-30 13:23:37.743485797 +0000 UTC m=+18.577930052" lastFinishedPulling="2025-10-30 13:23:55.350840265 +0000 UTC m=+36.185284520" observedRunningTime="2025-10-30 13:23:56.479341829 +0000 UTC m=+37.313786115" watchObservedRunningTime="2025-10-30 13:23:56.48073068 +0000 UTC m=+37.315174935" Oct 30 13:23:56.542516 systemd[1]: Created slice kubepods-besteffort-pod786a9200_a386_410f_ada7_d8428b9d68f8.slice - libcontainer container kubepods-besteffort-pod786a9200_a386_410f_ada7_d8428b9d68f8.slice. 
Oct 30 13:23:56.592921 kubelet[2746]: I1030 13:23:56.592836 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/786a9200-a386-410f-ada7-d8428b9d68f8-whisker-backend-key-pair\") pod \"whisker-784cfbd5cb-gkdhk\" (UID: \"786a9200-a386-410f-ada7-d8428b9d68f8\") " pod="calico-system/whisker-784cfbd5cb-gkdhk" Oct 30 13:23:56.592921 kubelet[2746]: I1030 13:23:56.592914 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2rdx\" (UniqueName: \"kubernetes.io/projected/786a9200-a386-410f-ada7-d8428b9d68f8-kube-api-access-x2rdx\") pod \"whisker-784cfbd5cb-gkdhk\" (UID: \"786a9200-a386-410f-ada7-d8428b9d68f8\") " pod="calico-system/whisker-784cfbd5cb-gkdhk" Oct 30 13:23:56.593150 kubelet[2746]: I1030 13:23:56.592982 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/786a9200-a386-410f-ada7-d8428b9d68f8-whisker-ca-bundle\") pod \"whisker-784cfbd5cb-gkdhk\" (UID: \"786a9200-a386-410f-ada7-d8428b9d68f8\") " pod="calico-system/whisker-784cfbd5cb-gkdhk" Oct 30 13:23:56.846665 containerd[1604]: time="2025-10-30T13:23:56.846584222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-784cfbd5cb-gkdhk,Uid:786a9200-a386-410f-ada7-d8428b9d68f8,Namespace:calico-system,Attempt:0,}" Oct 30 13:23:57.002709 systemd[1]: Started sshd@7-10.0.0.72:22-10.0.0.1:49510.service - OpenSSH per-connection server daemon (10.0.0.1:49510). 
Oct 30 13:23:57.007344 systemd-networkd[1505]: cali5689a78a8d3: Link UP Oct 30 13:23:57.008533 systemd-networkd[1505]: cali5689a78a8d3: Gained carrier Oct 30 13:23:57.029773 containerd[1604]: 2025-10-30 13:23:56.871 [INFO][3894] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 13:23:57.029773 containerd[1604]: 2025-10-30 13:23:56.890 [INFO][3894] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--784cfbd5cb--gkdhk-eth0 whisker-784cfbd5cb- calico-system 786a9200-a386-410f-ada7-d8428b9d68f8 944 0 2025-10-30 13:23:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:784cfbd5cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-784cfbd5cb-gkdhk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5689a78a8d3 [] [] }} ContainerID="a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6" Namespace="calico-system" Pod="whisker-784cfbd5cb-gkdhk" WorkloadEndpoint="localhost-k8s-whisker--784cfbd5cb--gkdhk-" Oct 30 13:23:57.029773 containerd[1604]: 2025-10-30 13:23:56.890 [INFO][3894] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6" Namespace="calico-system" Pod="whisker-784cfbd5cb-gkdhk" WorkloadEndpoint="localhost-k8s-whisker--784cfbd5cb--gkdhk-eth0" Oct 30 13:23:57.029773 containerd[1604]: 2025-10-30 13:23:56.956 [INFO][3908] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6" HandleID="k8s-pod-network.a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6" Workload="localhost-k8s-whisker--784cfbd5cb--gkdhk-eth0" Oct 30 13:23:57.030051 containerd[1604]: 2025-10-30 13:23:56.956 [INFO][3908] ipam/ipam_plugin.go 275: Auto 
assigning IP ContainerID="a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6" HandleID="k8s-pod-network.a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6" Workload="localhost-k8s-whisker--784cfbd5cb--gkdhk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000467e50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-784cfbd5cb-gkdhk", "timestamp":"2025-10-30 13:23:56.956453047 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 13:23:57.030051 containerd[1604]: 2025-10-30 13:23:56.957 [INFO][3908] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 13:23:57.030051 containerd[1604]: 2025-10-30 13:23:56.957 [INFO][3908] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 13:23:57.030051 containerd[1604]: 2025-10-30 13:23:56.957 [INFO][3908] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 13:23:57.030051 containerd[1604]: 2025-10-30 13:23:56.965 [INFO][3908] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6" host="localhost" Oct 30 13:23:57.030051 containerd[1604]: 2025-10-30 13:23:56.971 [INFO][3908] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 13:23:57.030051 containerd[1604]: 2025-10-30 13:23:56.976 [INFO][3908] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 13:23:57.030051 containerd[1604]: 2025-10-30 13:23:56.978 [INFO][3908] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 13:23:57.030051 containerd[1604]: 2025-10-30 13:23:56.983 [INFO][3908] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Oct 30 13:23:57.030051 containerd[1604]: 2025-10-30 13:23:56.983 [INFO][3908] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6" host="localhost" Oct 30 13:23:57.030305 containerd[1604]: 2025-10-30 13:23:56.985 [INFO][3908] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6 Oct 30 13:23:57.030305 containerd[1604]: 2025-10-30 13:23:56.989 [INFO][3908] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6" host="localhost" Oct 30 13:23:57.030305 containerd[1604]: 2025-10-30 13:23:56.993 [INFO][3908] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6" host="localhost" Oct 30 13:23:57.030305 containerd[1604]: 2025-10-30 13:23:56.993 [INFO][3908] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6" host="localhost" Oct 30 13:23:57.030305 containerd[1604]: 2025-10-30 13:23:56.994 [INFO][3908] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 13:23:57.030305 containerd[1604]: 2025-10-30 13:23:56.994 [INFO][3908] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6" HandleID="k8s-pod-network.a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6" Workload="localhost-k8s-whisker--784cfbd5cb--gkdhk-eth0" Oct 30 13:23:57.030822 containerd[1604]: 2025-10-30 13:23:56.997 [INFO][3894] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6" Namespace="calico-system" Pod="whisker-784cfbd5cb-gkdhk" WorkloadEndpoint="localhost-k8s-whisker--784cfbd5cb--gkdhk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--784cfbd5cb--gkdhk-eth0", GenerateName:"whisker-784cfbd5cb-", Namespace:"calico-system", SelfLink:"", UID:"786a9200-a386-410f-ada7-d8428b9d68f8", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 23, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"784cfbd5cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-784cfbd5cb-gkdhk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5689a78a8d3", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:23:57.030822 containerd[1604]: 2025-10-30 13:23:56.997 [INFO][3894] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6" Namespace="calico-system" Pod="whisker-784cfbd5cb-gkdhk" WorkloadEndpoint="localhost-k8s-whisker--784cfbd5cb--gkdhk-eth0" Oct 30 13:23:57.031008 containerd[1604]: 2025-10-30 13:23:56.997 [INFO][3894] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5689a78a8d3 ContainerID="a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6" Namespace="calico-system" Pod="whisker-784cfbd5cb-gkdhk" WorkloadEndpoint="localhost-k8s-whisker--784cfbd5cb--gkdhk-eth0" Oct 30 13:23:57.031008 containerd[1604]: 2025-10-30 13:23:57.009 [INFO][3894] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6" Namespace="calico-system" Pod="whisker-784cfbd5cb-gkdhk" WorkloadEndpoint="localhost-k8s-whisker--784cfbd5cb--gkdhk-eth0" Oct 30 13:23:57.031077 containerd[1604]: 2025-10-30 13:23:57.010 [INFO][3894] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6" Namespace="calico-system" Pod="whisker-784cfbd5cb-gkdhk" WorkloadEndpoint="localhost-k8s-whisker--784cfbd5cb--gkdhk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--784cfbd5cb--gkdhk-eth0", GenerateName:"whisker-784cfbd5cb-", Namespace:"calico-system", SelfLink:"", UID:"786a9200-a386-410f-ada7-d8428b9d68f8", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 23, 56, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"784cfbd5cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6", Pod:"whisker-784cfbd5cb-gkdhk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5689a78a8d3", MAC:"8e:7a:65:6b:b1:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:23:57.031212 containerd[1604]: 2025-10-30 13:23:57.026 [INFO][3894] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6" Namespace="calico-system" Pod="whisker-784cfbd5cb-gkdhk" WorkloadEndpoint="localhost-k8s-whisker--784cfbd5cb--gkdhk-eth0" Oct 30 13:23:57.086890 sshd[3917]: Accepted publickey for core from 10.0.0.1 port 49510 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:23:57.086947 sshd-session[3917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:23:57.102179 systemd-logind[1582]: New session 8 of user core. Oct 30 13:23:57.107252 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 30 13:23:57.261079 kubelet[2746]: I1030 13:23:57.258656 2746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43bc02bf-5550-4f0b-901f-d8dafc5e7d95" path="/var/lib/kubelet/pods/43bc02bf-5550-4f0b-901f-d8dafc5e7d95/volumes" Oct 30 13:23:57.325528 containerd[1604]: time="2025-10-30T13:23:57.325437049Z" level=info msg="connecting to shim a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6" address="unix:///run/containerd/s/a20966a3d5308e78b8f8a31a7b70dae102d407fa81c1164092b0cad44ae3c39b" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:23:57.376270 sshd[3973]: Connection closed by 10.0.0.1 port 49510 Oct 30 13:23:57.377275 systemd[1]: Started cri-containerd-a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6.scope - libcontainer container a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6. Oct 30 13:23:57.378627 sshd-session[3917]: pam_unix(sshd:session): session closed for user core Oct 30 13:23:57.382669 systemd[1]: sshd@7-10.0.0.72:22-10.0.0.1:49510.service: Deactivated successfully. Oct 30 13:23:57.387165 systemd[1]: session-8.scope: Deactivated successfully. Oct 30 13:23:57.390524 systemd-logind[1582]: Session 8 logged out. Waiting for processes to exit. Oct 30 13:23:57.391692 systemd-logind[1582]: Removed session 8. 
Oct 30 13:23:57.397183 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:23:57.661735 containerd[1604]: time="2025-10-30T13:23:57.661575587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-784cfbd5cb-gkdhk,Uid:786a9200-a386-410f-ada7-d8428b9d68f8,Namespace:calico-system,Attempt:0,} returns sandbox id \"a73db3d108fbe33e8521919e82c00680104a31b7fd182008c806eb49f2e61ee6\"" Oct 30 13:23:57.663598 containerd[1604]: time="2025-10-30T13:23:57.663548307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 13:23:58.013200 containerd[1604]: time="2025-10-30T13:23:58.012973734Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:23:58.014343 containerd[1604]: time="2025-10-30T13:23:58.014271722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 13:23:58.019715 containerd[1604]: time="2025-10-30T13:23:58.019653562Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 13:23:58.020019 kubelet[2746]: E1030 13:23:58.019946 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 13:23:58.020109 kubelet[2746]: E1030 13:23:58.020026 2746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 13:23:58.025412 kubelet[2746]: E1030 13:23:58.025343 2746 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:12669cb952d94aca8038ce3b6ca7fece,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x2rdx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-784cfbd5cb-gkdhk_calico-system(786a9200-a386-410f-ada7-d8428b9d68f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 13:23:58.027884 containerd[1604]: time="2025-10-30T13:23:58.027562239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 13:23:58.045319 systemd-networkd[1505]: cali5689a78a8d3: Gained IPv6LL Oct 30 13:23:58.255949 containerd[1604]: time="2025-10-30T13:23:58.255873783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77b486d6f4-rp89s,Uid:6ec85b99-8a3f-41cb-bb15-d4714acb86dc,Namespace:calico-system,Attempt:0,}" Oct 30 13:23:58.375526 systemd-networkd[1505]: cali8aaf1768b69: Link UP Oct 30 13:23:58.377792 systemd-networkd[1505]: cali8aaf1768b69: Gained carrier Oct 30 13:23:58.402331 containerd[1604]: 2025-10-30 13:23:58.287 [INFO][4088] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 13:23:58.402331 containerd[1604]: 2025-10-30 13:23:58.298 [INFO][4088] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--77b486d6f4--rp89s-eth0 calico-kube-controllers-77b486d6f4- calico-system 6ec85b99-8a3f-41cb-bb15-d4714acb86dc 861 0 2025-10-30 13:23:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:77b486d6f4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-77b486d6f4-rp89s eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8aaf1768b69 [] [] }} ContainerID="f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc" Namespace="calico-system" Pod="calico-kube-controllers-77b486d6f4-rp89s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77b486d6f4--rp89s-" Oct 30 13:23:58.402331 containerd[1604]: 2025-10-30 
13:23:58.298 [INFO][4088] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc" Namespace="calico-system" Pod="calico-kube-controllers-77b486d6f4-rp89s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77b486d6f4--rp89s-eth0" Oct 30 13:23:58.402331 containerd[1604]: 2025-10-30 13:23:58.330 [INFO][4102] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc" HandleID="k8s-pod-network.f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc" Workload="localhost-k8s-calico--kube--controllers--77b486d6f4--rp89s-eth0" Oct 30 13:23:58.402588 containerd[1604]: 2025-10-30 13:23:58.330 [INFO][4102] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc" HandleID="k8s-pod-network.f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc" Workload="localhost-k8s-calico--kube--controllers--77b486d6f4--rp89s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-77b486d6f4-rp89s", "timestamp":"2025-10-30 13:23:58.330270588 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 13:23:58.402588 containerd[1604]: 2025-10-30 13:23:58.330 [INFO][4102] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 13:23:58.402588 containerd[1604]: 2025-10-30 13:23:58.330 [INFO][4102] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 13:23:58.402588 containerd[1604]: 2025-10-30 13:23:58.330 [INFO][4102] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 13:23:58.402588 containerd[1604]: 2025-10-30 13:23:58.337 [INFO][4102] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc" host="localhost" Oct 30 13:23:58.402588 containerd[1604]: 2025-10-30 13:23:58.341 [INFO][4102] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 13:23:58.402588 containerd[1604]: 2025-10-30 13:23:58.347 [INFO][4102] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 13:23:58.402588 containerd[1604]: 2025-10-30 13:23:58.349 [INFO][4102] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 13:23:58.402588 containerd[1604]: 2025-10-30 13:23:58.351 [INFO][4102] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 13:23:58.402588 containerd[1604]: 2025-10-30 13:23:58.351 [INFO][4102] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc" host="localhost" Oct 30 13:23:58.402874 containerd[1604]: 2025-10-30 13:23:58.353 [INFO][4102] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc Oct 30 13:23:58.402874 containerd[1604]: 2025-10-30 13:23:58.357 [INFO][4102] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc" host="localhost" Oct 30 13:23:58.402874 containerd[1604]: 2025-10-30 13:23:58.363 [INFO][4102] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc" host="localhost" Oct 30 13:23:58.402874 containerd[1604]: 2025-10-30 13:23:58.364 [INFO][4102] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc" host="localhost" Oct 30 13:23:58.402874 containerd[1604]: 2025-10-30 13:23:58.364 [INFO][4102] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 13:23:58.402874 containerd[1604]: 2025-10-30 13:23:58.364 [INFO][4102] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc" HandleID="k8s-pod-network.f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc" Workload="localhost-k8s-calico--kube--controllers--77b486d6f4--rp89s-eth0" Oct 30 13:23:58.402993 containerd[1604]: 2025-10-30 13:23:58.371 [INFO][4088] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc" Namespace="calico-system" Pod="calico-kube-controllers-77b486d6f4-rp89s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77b486d6f4--rp89s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77b486d6f4--rp89s-eth0", GenerateName:"calico-kube-controllers-77b486d6f4-", Namespace:"calico-system", SelfLink:"", UID:"6ec85b99-8a3f-41cb-bb15-d4714acb86dc", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77b486d6f4", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-77b486d6f4-rp89s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8aaf1768b69", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:23:58.403046 containerd[1604]: 2025-10-30 13:23:58.371 [INFO][4088] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc" Namespace="calico-system" Pod="calico-kube-controllers-77b486d6f4-rp89s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77b486d6f4--rp89s-eth0" Oct 30 13:23:58.403046 containerd[1604]: 2025-10-30 13:23:58.371 [INFO][4088] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8aaf1768b69 ContainerID="f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc" Namespace="calico-system" Pod="calico-kube-controllers-77b486d6f4-rp89s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77b486d6f4--rp89s-eth0" Oct 30 13:23:58.403046 containerd[1604]: 2025-10-30 13:23:58.377 [INFO][4088] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc" Namespace="calico-system" Pod="calico-kube-controllers-77b486d6f4-rp89s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77b486d6f4--rp89s-eth0" Oct 30 13:23:58.403143 containerd[1604]: 
2025-10-30 13:23:58.380 [INFO][4088] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc" Namespace="calico-system" Pod="calico-kube-controllers-77b486d6f4-rp89s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77b486d6f4--rp89s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77b486d6f4--rp89s-eth0", GenerateName:"calico-kube-controllers-77b486d6f4-", Namespace:"calico-system", SelfLink:"", UID:"6ec85b99-8a3f-41cb-bb15-d4714acb86dc", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77b486d6f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc", Pod:"calico-kube-controllers-77b486d6f4-rp89s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8aaf1768b69", MAC:"42:05:5d:ad:03:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:23:58.403198 containerd[1604]: 
2025-10-30 13:23:58.392 [INFO][4088] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc" Namespace="calico-system" Pod="calico-kube-controllers-77b486d6f4-rp89s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77b486d6f4--rp89s-eth0" Oct 30 13:23:58.446361 containerd[1604]: time="2025-10-30T13:23:58.446296537Z" level=info msg="connecting to shim f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc" address="unix:///run/containerd/s/a9e551c31b97f28e31c65a83060b7fc159389d751f90c707483f256f0d78e349" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:23:58.479360 systemd[1]: Started cri-containerd-f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc.scope - libcontainer container f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc. Oct 30 13:23:58.495027 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:23:58.518610 containerd[1604]: time="2025-10-30T13:23:58.518096355Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:23:58.521341 containerd[1604]: time="2025-10-30T13:23:58.520445595Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 13:23:58.521341 containerd[1604]: time="2025-10-30T13:23:58.520541016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 13:23:58.521526 kubelet[2746]: E1030 13:23:58.520746 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 13:23:58.521526 kubelet[2746]: E1030 13:23:58.520799 2746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 13:23:58.522261 kubelet[2746]: E1030 13:23:58.520911 2746 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2rdx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/
termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-784cfbd5cb-gkdhk_calico-system(786a9200-a386-410f-ada7-d8428b9d68f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 13:23:58.522261 kubelet[2746]: E1030 13:23:58.522232 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-784cfbd5cb-gkdhk" podUID="786a9200-a386-410f-ada7-d8428b9d68f8" Oct 30 13:23:58.531299 containerd[1604]: time="2025-10-30T13:23:58.531246221Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-77b486d6f4-rp89s,Uid:6ec85b99-8a3f-41cb-bb15-d4714acb86dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"f8a5ae46b661ee7a4010644b903efaebdb24994091e9d7dcae4dfcac87d480dc\"" Oct 30 13:23:58.532917 containerd[1604]: time="2025-10-30T13:23:58.532857086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 13:23:58.919853 containerd[1604]: time="2025-10-30T13:23:58.919761135Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:23:58.933757 containerd[1604]: time="2025-10-30T13:23:58.933706739Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 13:23:58.934054 kubelet[2746]: E1030 13:23:58.933973 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 13:23:58.934054 kubelet[2746]: E1030 13:23:58.934038 2746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 13:23:58.934304 kubelet[2746]: E1030 13:23:58.934209 2746 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4b6f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77b486d6f4-rp89s_calico-system(6ec85b99-8a3f-41cb-bb15-d4714acb86dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 13:23:58.935410 kubelet[2746]: E1030 13:23:58.935368 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77b486d6f4-rp89s" podUID="6ec85b99-8a3f-41cb-bb15-d4714acb86dc" Oct 30 13:23:58.945036 containerd[1604]: time="2025-10-30T13:23:58.933740506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes 
read=85" Oct 30 13:23:59.255843 containerd[1604]: time="2025-10-30T13:23:59.255632510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9656b5c49-956xq,Uid:e6887d28-f8bd-4d4f-b72b-6f4de4992ef6,Namespace:calico-apiserver,Attempt:0,}" Oct 30 13:23:59.377508 kubelet[2746]: E1030 13:23:59.377372 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77b486d6f4-rp89s" podUID="6ec85b99-8a3f-41cb-bb15-d4714acb86dc" Oct 30 13:23:59.380466 kubelet[2746]: E1030 13:23:59.379890 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-784cfbd5cb-gkdhk" podUID="786a9200-a386-410f-ada7-d8428b9d68f8" Oct 30 
13:23:59.901354 systemd-networkd[1505]: cali8aaf1768b69: Gained IPv6LL Oct 30 13:24:00.255575 containerd[1604]: time="2025-10-30T13:24:00.255412237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jhg5h,Uid:9d8a307a-f326-4a48-b1d2-dfbdf32a2608,Namespace:calico-system,Attempt:0,}" Oct 30 13:24:00.256692 containerd[1604]: time="2025-10-30T13:24:00.256667489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2t6tn,Uid:174a8f7f-c864-44be-b45c-d548b2df28c8,Namespace:calico-system,Attempt:0,}" Oct 30 13:24:00.309077 systemd-networkd[1505]: calif51d139d547: Link UP Oct 30 13:24:00.309991 systemd-networkd[1505]: calif51d139d547: Gained carrier Oct 30 13:24:00.335818 containerd[1604]: 2025-10-30 13:23:59.340 [INFO][4185] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 13:24:00.335818 containerd[1604]: 2025-10-30 13:23:59.351 [INFO][4185] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9656b5c49--956xq-eth0 calico-apiserver-9656b5c49- calico-apiserver e6887d28-f8bd-4d4f-b72b-6f4de4992ef6 863 0 2025-10-30 13:23:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9656b5c49 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9656b5c49-956xq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif51d139d547 [] [] }} ContainerID="85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e" Namespace="calico-apiserver" Pod="calico-apiserver-9656b5c49-956xq" WorkloadEndpoint="localhost-k8s-calico--apiserver--9656b5c49--956xq-" Oct 30 13:24:00.335818 containerd[1604]: 2025-10-30 13:23:59.351 [INFO][4185] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e" Namespace="calico-apiserver" Pod="calico-apiserver-9656b5c49-956xq" WorkloadEndpoint="localhost-k8s-calico--apiserver--9656b5c49--956xq-eth0" Oct 30 13:24:00.335818 containerd[1604]: 2025-10-30 13:23:59.383 [INFO][4200] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e" HandleID="k8s-pod-network.85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e" Workload="localhost-k8s-calico--apiserver--9656b5c49--956xq-eth0" Oct 30 13:24:00.336205 containerd[1604]: 2025-10-30 13:23:59.384 [INFO][4200] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e" HandleID="k8s-pod-network.85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e" Workload="localhost-k8s-calico--apiserver--9656b5c49--956xq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d57a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9656b5c49-956xq", "timestamp":"2025-10-30 13:23:59.383745325 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 13:24:00.336205 containerd[1604]: 2025-10-30 13:23:59.384 [INFO][4200] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 13:24:00.336205 containerd[1604]: 2025-10-30 13:23:59.384 [INFO][4200] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 13:24:00.336205 containerd[1604]: 2025-10-30 13:23:59.384 [INFO][4200] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 13:24:00.336205 containerd[1604]: 2025-10-30 13:23:59.718 [INFO][4200] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e" host="localhost" Oct 30 13:24:00.336205 containerd[1604]: 2025-10-30 13:24:00.255 [INFO][4200] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 13:24:00.336205 containerd[1604]: 2025-10-30 13:24:00.260 [INFO][4200] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 13:24:00.336205 containerd[1604]: 2025-10-30 13:24:00.266 [INFO][4200] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 13:24:00.336205 containerd[1604]: 2025-10-30 13:24:00.268 [INFO][4200] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 13:24:00.336205 containerd[1604]: 2025-10-30 13:24:00.268 [INFO][4200] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e" host="localhost" Oct 30 13:24:00.336606 containerd[1604]: 2025-10-30 13:24:00.270 [INFO][4200] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e Oct 30 13:24:00.336606 containerd[1604]: 2025-10-30 13:24:00.277 [INFO][4200] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e" host="localhost" Oct 30 13:24:00.336606 containerd[1604]: 2025-10-30 13:24:00.288 [INFO][4200] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e" host="localhost" Oct 30 13:24:00.336606 containerd[1604]: 2025-10-30 13:24:00.289 [INFO][4200] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e" host="localhost" Oct 30 13:24:00.336606 containerd[1604]: 2025-10-30 13:24:00.291 [INFO][4200] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 13:24:00.336606 containerd[1604]: 2025-10-30 13:24:00.292 [INFO][4200] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e" HandleID="k8s-pod-network.85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e" Workload="localhost-k8s-calico--apiserver--9656b5c49--956xq-eth0" Oct 30 13:24:00.336798 containerd[1604]: 2025-10-30 13:24:00.306 [INFO][4185] cni-plugin/k8s.go 418: Populated endpoint ContainerID="85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e" Namespace="calico-apiserver" Pod="calico-apiserver-9656b5c49-956xq" WorkloadEndpoint="localhost-k8s-calico--apiserver--9656b5c49--956xq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9656b5c49--956xq-eth0", GenerateName:"calico-apiserver-9656b5c49-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6887d28-f8bd-4d4f-b72b-6f4de4992ef6", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 23, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9656b5c49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9656b5c49-956xq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif51d139d547", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:24:00.336886 containerd[1604]: 2025-10-30 13:24:00.306 [INFO][4185] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e" Namespace="calico-apiserver" Pod="calico-apiserver-9656b5c49-956xq" WorkloadEndpoint="localhost-k8s-calico--apiserver--9656b5c49--956xq-eth0" Oct 30 13:24:00.336886 containerd[1604]: 2025-10-30 13:24:00.306 [INFO][4185] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif51d139d547 ContainerID="85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e" Namespace="calico-apiserver" Pod="calico-apiserver-9656b5c49-956xq" WorkloadEndpoint="localhost-k8s-calico--apiserver--9656b5c49--956xq-eth0" Oct 30 13:24:00.336886 containerd[1604]: 2025-10-30 13:24:00.310 [INFO][4185] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e" Namespace="calico-apiserver" Pod="calico-apiserver-9656b5c49-956xq" WorkloadEndpoint="localhost-k8s-calico--apiserver--9656b5c49--956xq-eth0" Oct 30 13:24:00.337015 containerd[1604]: 2025-10-30 13:24:00.311 [INFO][4185] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e" Namespace="calico-apiserver" Pod="calico-apiserver-9656b5c49-956xq" WorkloadEndpoint="localhost-k8s-calico--apiserver--9656b5c49--956xq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9656b5c49--956xq-eth0", GenerateName:"calico-apiserver-9656b5c49-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6887d28-f8bd-4d4f-b72b-6f4de4992ef6", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 23, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9656b5c49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e", Pod:"calico-apiserver-9656b5c49-956xq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif51d139d547", MAC:"3e:33:f2:de:38:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:24:00.337097 containerd[1604]: 2025-10-30 13:24:00.330 [INFO][4185] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e" Namespace="calico-apiserver" Pod="calico-apiserver-9656b5c49-956xq" WorkloadEndpoint="localhost-k8s-calico--apiserver--9656b5c49--956xq-eth0" Oct 30 13:24:00.380904 kubelet[2746]: E1030 13:24:00.380523 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77b486d6f4-rp89s" podUID="6ec85b99-8a3f-41cb-bb15-d4714acb86dc" Oct 30 13:24:00.383623 containerd[1604]: time="2025-10-30T13:24:00.383466825Z" level=info msg="connecting to shim 85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e" address="unix:///run/containerd/s/813075ec8d58284dbcbc8e2d227bc976c64744e5bfc5fe4279df09f26a0f8cb0" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:24:00.438352 systemd[1]: Started cri-containerd-85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e.scope - libcontainer container 85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e. 
Oct 30 13:24:00.454256 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:24:00.492823 containerd[1604]: time="2025-10-30T13:24:00.492754339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9656b5c49-956xq,Uid:e6887d28-f8bd-4d4f-b72b-6f4de4992ef6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"85d0c2ca1a4885895ae7e1a4a5d32991fba0ad8a982e0e86f3aba6bdecb1229e\"" Oct 30 13:24:00.495357 containerd[1604]: time="2025-10-30T13:24:00.495323352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 13:24:00.504867 systemd-networkd[1505]: caliefc1f894c9f: Link UP Oct 30 13:24:00.505143 systemd-networkd[1505]: caliefc1f894c9f: Gained carrier Oct 30 13:24:00.522181 containerd[1604]: 2025-10-30 13:24:00.321 [INFO][4235] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 13:24:00.522181 containerd[1604]: 2025-10-30 13:24:00.338 [INFO][4235] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--jhg5h-eth0 goldmane-666569f655- calico-system 9d8a307a-f326-4a48-b1d2-dfbdf32a2608 859 0 2025-10-30 13:23:35 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-jhg5h eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliefc1f894c9f [] [] }} ContainerID="6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b" Namespace="calico-system" Pod="goldmane-666569f655-jhg5h" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jhg5h-" Oct 30 13:24:00.522181 containerd[1604]: 2025-10-30 13:24:00.338 [INFO][4235] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b" Namespace="calico-system" Pod="goldmane-666569f655-jhg5h" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jhg5h-eth0" Oct 30 13:24:00.522181 containerd[1604]: 2025-10-30 13:24:00.396 [INFO][4270] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b" HandleID="k8s-pod-network.6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b" Workload="localhost-k8s-goldmane--666569f655--jhg5h-eth0" Oct 30 13:24:00.522458 containerd[1604]: 2025-10-30 13:24:00.397 [INFO][4270] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b" HandleID="k8s-pod-network.6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b" Workload="localhost-k8s-goldmane--666569f655--jhg5h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139670), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-jhg5h", "timestamp":"2025-10-30 13:24:00.396562759 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 13:24:00.522458 containerd[1604]: 2025-10-30 13:24:00.398 [INFO][4270] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 13:24:00.522458 containerd[1604]: 2025-10-30 13:24:00.398 [INFO][4270] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 13:24:00.522458 containerd[1604]: 2025-10-30 13:24:00.398 [INFO][4270] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 13:24:00.522458 containerd[1604]: 2025-10-30 13:24:00.415 [INFO][4270] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b" host="localhost" Oct 30 13:24:00.522458 containerd[1604]: 2025-10-30 13:24:00.470 [INFO][4270] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 13:24:00.522458 containerd[1604]: 2025-10-30 13:24:00.476 [INFO][4270] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 13:24:00.522458 containerd[1604]: 2025-10-30 13:24:00.479 [INFO][4270] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 13:24:00.522458 containerd[1604]: 2025-10-30 13:24:00.481 [INFO][4270] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 13:24:00.522458 containerd[1604]: 2025-10-30 13:24:00.481 [INFO][4270] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b" host="localhost" Oct 30 13:24:00.522676 containerd[1604]: 2025-10-30 13:24:00.483 [INFO][4270] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b Oct 30 13:24:00.522676 containerd[1604]: 2025-10-30 13:24:00.487 [INFO][4270] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b" host="localhost" Oct 30 13:24:00.522676 containerd[1604]: 2025-10-30 13:24:00.494 [INFO][4270] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b" host="localhost" Oct 30 13:24:00.522676 containerd[1604]: 2025-10-30 13:24:00.494 [INFO][4270] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b" host="localhost" Oct 30 13:24:00.522676 containerd[1604]: 2025-10-30 13:24:00.494 [INFO][4270] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 13:24:00.522676 containerd[1604]: 2025-10-30 13:24:00.494 [INFO][4270] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b" HandleID="k8s-pod-network.6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b" Workload="localhost-k8s-goldmane--666569f655--jhg5h-eth0" Oct 30 13:24:00.522814 containerd[1604]: 2025-10-30 13:24:00.501 [INFO][4235] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b" Namespace="calico-system" Pod="goldmane-666569f655-jhg5h" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jhg5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--jhg5h-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9d8a307a-f326-4a48-b1d2-dfbdf32a2608", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 23, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-jhg5h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliefc1f894c9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:24:00.522814 containerd[1604]: 2025-10-30 13:24:00.501 [INFO][4235] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b" Namespace="calico-system" Pod="goldmane-666569f655-jhg5h" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jhg5h-eth0" Oct 30 13:24:00.522888 containerd[1604]: 2025-10-30 13:24:00.501 [INFO][4235] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliefc1f894c9f ContainerID="6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b" Namespace="calico-system" Pod="goldmane-666569f655-jhg5h" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jhg5h-eth0" Oct 30 13:24:00.522888 containerd[1604]: 2025-10-30 13:24:00.505 [INFO][4235] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b" Namespace="calico-system" Pod="goldmane-666569f655-jhg5h" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jhg5h-eth0" Oct 30 13:24:00.522937 containerd[1604]: 2025-10-30 13:24:00.508 [INFO][4235] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b" Namespace="calico-system" Pod="goldmane-666569f655-jhg5h" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jhg5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--jhg5h-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9d8a307a-f326-4a48-b1d2-dfbdf32a2608", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 23, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b", Pod:"goldmane-666569f655-jhg5h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliefc1f894c9f", MAC:"ea:86:00:5e:15:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:24:00.523056 containerd[1604]: 2025-10-30 13:24:00.518 [INFO][4235] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b" Namespace="calico-system" Pod="goldmane-666569f655-jhg5h" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jhg5h-eth0" Oct 30 13:24:00.545098 containerd[1604]: time="2025-10-30T13:24:00.545048333Z" level=info msg="connecting to shim 
6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b" address="unix:///run/containerd/s/9a03a863fa62d3abf4416a778eb01cf04daf3b993fa2eaf4fe5b90b759e7dd23" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:24:00.574537 systemd[1]: Started cri-containerd-6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b.scope - libcontainer container 6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b. Oct 30 13:24:00.592020 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:24:00.601206 systemd-networkd[1505]: cali6f89ba8dc47: Link UP Oct 30 13:24:00.601466 systemd-networkd[1505]: cali6f89ba8dc47: Gained carrier Oct 30 13:24:00.614293 containerd[1604]: 2025-10-30 13:24:00.342 [INFO][4247] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 13:24:00.614293 containerd[1604]: 2025-10-30 13:24:00.361 [INFO][4247] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--2t6tn-eth0 csi-node-driver- calico-system 174a8f7f-c864-44be-b45c-d548b2df28c8 751 0 2025-10-30 13:23:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-2t6tn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6f89ba8dc47 [] [] }} ContainerID="f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e" Namespace="calico-system" Pod="csi-node-driver-2t6tn" WorkloadEndpoint="localhost-k8s-csi--node--driver--2t6tn-" Oct 30 13:24:00.614293 containerd[1604]: 2025-10-30 13:24:00.361 [INFO][4247] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e" Namespace="calico-system" Pod="csi-node-driver-2t6tn" WorkloadEndpoint="localhost-k8s-csi--node--driver--2t6tn-eth0" Oct 30 13:24:00.614293 containerd[1604]: 2025-10-30 13:24:00.427 [INFO][4288] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e" HandleID="k8s-pod-network.f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e" Workload="localhost-k8s-csi--node--driver--2t6tn-eth0" Oct 30 13:24:00.614563 containerd[1604]: 2025-10-30 13:24:00.427 [INFO][4288] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e" HandleID="k8s-pod-network.f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e" Workload="localhost-k8s-csi--node--driver--2t6tn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000520390), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-2t6tn", "timestamp":"2025-10-30 13:24:00.427057578 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 13:24:00.614563 containerd[1604]: 2025-10-30 13:24:00.427 [INFO][4288] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 13:24:00.614563 containerd[1604]: 2025-10-30 13:24:00.494 [INFO][4288] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 13:24:00.614563 containerd[1604]: 2025-10-30 13:24:00.494 [INFO][4288] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 13:24:00.614563 containerd[1604]: 2025-10-30 13:24:00.516 [INFO][4288] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e" host="localhost" Oct 30 13:24:00.614563 containerd[1604]: 2025-10-30 13:24:00.570 [INFO][4288] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 13:24:00.614563 containerd[1604]: 2025-10-30 13:24:00.574 [INFO][4288] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 13:24:00.614563 containerd[1604]: 2025-10-30 13:24:00.578 [INFO][4288] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 13:24:00.614563 containerd[1604]: 2025-10-30 13:24:00.580 [INFO][4288] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 13:24:00.614563 containerd[1604]: 2025-10-30 13:24:00.580 [INFO][4288] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e" host="localhost" Oct 30 13:24:00.614786 containerd[1604]: 2025-10-30 13:24:00.582 [INFO][4288] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e Oct 30 13:24:00.614786 containerd[1604]: 2025-10-30 13:24:00.586 [INFO][4288] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e" host="localhost" Oct 30 13:24:00.614786 containerd[1604]: 2025-10-30 13:24:00.591 [INFO][4288] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e" host="localhost" Oct 30 13:24:00.614786 containerd[1604]: 2025-10-30 13:24:00.591 [INFO][4288] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e" host="localhost" Oct 30 13:24:00.614786 containerd[1604]: 2025-10-30 13:24:00.591 [INFO][4288] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 13:24:00.614786 containerd[1604]: 2025-10-30 13:24:00.591 [INFO][4288] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e" HandleID="k8s-pod-network.f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e" Workload="localhost-k8s-csi--node--driver--2t6tn-eth0" Oct 30 13:24:00.615101 containerd[1604]: 2025-10-30 13:24:00.595 [INFO][4247] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e" Namespace="calico-system" Pod="csi-node-driver-2t6tn" WorkloadEndpoint="localhost-k8s-csi--node--driver--2t6tn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2t6tn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"174a8f7f-c864-44be-b45c-d548b2df28c8", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-2t6tn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6f89ba8dc47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:24:00.615218 containerd[1604]: 2025-10-30 13:24:00.595 [INFO][4247] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e" Namespace="calico-system" Pod="csi-node-driver-2t6tn" WorkloadEndpoint="localhost-k8s-csi--node--driver--2t6tn-eth0" Oct 30 13:24:00.615218 containerd[1604]: 2025-10-30 13:24:00.595 [INFO][4247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f89ba8dc47 ContainerID="f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e" Namespace="calico-system" Pod="csi-node-driver-2t6tn" WorkloadEndpoint="localhost-k8s-csi--node--driver--2t6tn-eth0" Oct 30 13:24:00.615218 containerd[1604]: 2025-10-30 13:24:00.599 [INFO][4247] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e" Namespace="calico-system" Pod="csi-node-driver-2t6tn" WorkloadEndpoint="localhost-k8s-csi--node--driver--2t6tn-eth0" Oct 30 13:24:00.615330 containerd[1604]: 2025-10-30 13:24:00.600 [INFO][4247] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e" 
Namespace="calico-system" Pod="csi-node-driver-2t6tn" WorkloadEndpoint="localhost-k8s-csi--node--driver--2t6tn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2t6tn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"174a8f7f-c864-44be-b45c-d548b2df28c8", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e", Pod:"csi-node-driver-2t6tn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6f89ba8dc47", MAC:"b6:f8:8f:54:d3:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:24:00.615411 containerd[1604]: 2025-10-30 13:24:00.609 [INFO][4247] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e" Namespace="calico-system" Pod="csi-node-driver-2t6tn" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--2t6tn-eth0" Oct 30 13:24:00.630885 containerd[1604]: time="2025-10-30T13:24:00.630847161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jhg5h,Uid:9d8a307a-f326-4a48-b1d2-dfbdf32a2608,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c4e457784dedee5a364c8a4e2ccd0152ec31128eb1c2a0741428e7b19b0500b\"" Oct 30 13:24:00.643960 containerd[1604]: time="2025-10-30T13:24:00.643913255Z" level=info msg="connecting to shim f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e" address="unix:///run/containerd/s/c17d5e38cf3303c3a447df0a4e89a72b844f0c171b48f594e9b48484a71e964e" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:24:00.677282 systemd[1]: Started cri-containerd-f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e.scope - libcontainer container f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e. Oct 30 13:24:00.690731 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:24:00.705142 containerd[1604]: time="2025-10-30T13:24:00.705073564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2t6tn,Uid:174a8f7f-c864-44be-b45c-d548b2df28c8,Namespace:calico-system,Attempt:0,} returns sandbox id \"f8e0e78624af5b83bd0c32ad8f883e4f027c5e12a1aececb3f7f427fa073703e\"" Oct 30 13:24:00.887874 containerd[1604]: time="2025-10-30T13:24:00.887801890Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:24:00.893805 containerd[1604]: time="2025-10-30T13:24:00.893742501Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 13:24:00.893805 
containerd[1604]: time="2025-10-30T13:24:00.893806509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 13:24:00.894112 kubelet[2746]: E1030 13:24:00.894036 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:24:00.894235 kubelet[2746]: E1030 13:24:00.894142 2746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:24:00.894512 kubelet[2746]: E1030 13:24:00.894460 2746 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rkpl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9656b5c49-956xq_calico-apiserver(e6887d28-f8bd-4d4f-b72b-6f4de4992ef6): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 13:24:00.894624 containerd[1604]: time="2025-10-30T13:24:00.894570779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 13:24:00.895859 kubelet[2746]: E1030 13:24:00.895773 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9656b5c49-956xq" podUID="e6887d28-f8bd-4d4f-b72b-6f4de4992ef6" Oct 30 13:24:01.255673 kubelet[2746]: E1030 13:24:01.254973 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:24:01.255808 containerd[1604]: time="2025-10-30T13:24:01.255419315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-frj2l,Uid:b3ef1af1-9e52-4beb-a533-c27d9b225aed,Namespace:kube-system,Attempt:0,}" Oct 30 13:24:01.255972 containerd[1604]: time="2025-10-30T13:24:01.255941569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9656b5c49-8rh5p,Uid:789594bd-b894-4820-937c-e5586bffb18c,Namespace:calico-apiserver,Attempt:0,}" Oct 30 13:24:01.375219 systemd-networkd[1505]: calic753128fc90: Link UP Oct 30 13:24:01.375899 systemd-networkd[1505]: calic753128fc90: Gained carrier Oct 30 13:24:01.387459 containerd[1604]: 2025-10-30 13:24:01.284 [INFO][4459] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 13:24:01.387459 containerd[1604]: 2025-10-30 
13:24:01.295 [INFO][4459] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--frj2l-eth0 coredns-668d6bf9bc- kube-system b3ef1af1-9e52-4beb-a533-c27d9b225aed 865 0 2025-10-30 13:23:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-frj2l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic753128fc90 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236" Namespace="kube-system" Pod="coredns-668d6bf9bc-frj2l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--frj2l-" Oct 30 13:24:01.387459 containerd[1604]: 2025-10-30 13:24:01.296 [INFO][4459] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236" Namespace="kube-system" Pod="coredns-668d6bf9bc-frj2l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--frj2l-eth0" Oct 30 13:24:01.387459 containerd[1604]: 2025-10-30 13:24:01.328 [INFO][4487] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236" HandleID="k8s-pod-network.2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236" Workload="localhost-k8s-coredns--668d6bf9bc--frj2l-eth0" Oct 30 13:24:01.388106 containerd[1604]: 2025-10-30 13:24:01.328 [INFO][4487] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236" HandleID="k8s-pod-network.2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236" Workload="localhost-k8s-coredns--668d6bf9bc--frj2l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325390), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-frj2l", "timestamp":"2025-10-30 13:24:01.328391601 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 13:24:01.388106 containerd[1604]: 2025-10-30 13:24:01.328 [INFO][4487] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 13:24:01.388106 containerd[1604]: 2025-10-30 13:24:01.328 [INFO][4487] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 13:24:01.388106 containerd[1604]: 2025-10-30 13:24:01.328 [INFO][4487] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 13:24:01.388106 containerd[1604]: 2025-10-30 13:24:01.336 [INFO][4487] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236" host="localhost" Oct 30 13:24:01.388106 containerd[1604]: 2025-10-30 13:24:01.343 [INFO][4487] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 13:24:01.388106 containerd[1604]: 2025-10-30 13:24:01.351 [INFO][4487] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 13:24:01.388106 containerd[1604]: 2025-10-30 13:24:01.352 [INFO][4487] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 13:24:01.388106 containerd[1604]: 2025-10-30 13:24:01.354 [INFO][4487] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 13:24:01.388106 containerd[1604]: 2025-10-30 13:24:01.354 [INFO][4487] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236" host="localhost" Oct 30 13:24:01.388955 
containerd[1604]: 2025-10-30 13:24:01.356 [INFO][4487] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236 Oct 30 13:24:01.388955 containerd[1604]: 2025-10-30 13:24:01.360 [INFO][4487] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236" host="localhost" Oct 30 13:24:01.388955 containerd[1604]: 2025-10-30 13:24:01.366 [INFO][4487] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236" host="localhost" Oct 30 13:24:01.388955 containerd[1604]: 2025-10-30 13:24:01.366 [INFO][4487] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236" host="localhost" Oct 30 13:24:01.388955 containerd[1604]: 2025-10-30 13:24:01.366 [INFO][4487] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 13:24:01.388955 containerd[1604]: 2025-10-30 13:24:01.366 [INFO][4487] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236" HandleID="k8s-pod-network.2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236" Workload="localhost-k8s-coredns--668d6bf9bc--frj2l-eth0" Oct 30 13:24:01.389198 containerd[1604]: 2025-10-30 13:24:01.370 [INFO][4459] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236" Namespace="kube-system" Pod="coredns-668d6bf9bc-frj2l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--frj2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--frj2l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b3ef1af1-9e52-4beb-a533-c27d9b225aed", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 23, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-frj2l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic753128fc90", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:24:01.389289 containerd[1604]: 2025-10-30 13:24:01.370 [INFO][4459] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236" Namespace="kube-system" Pod="coredns-668d6bf9bc-frj2l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--frj2l-eth0" Oct 30 13:24:01.389289 containerd[1604]: 2025-10-30 13:24:01.370 [INFO][4459] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic753128fc90 ContainerID="2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236" Namespace="kube-system" Pod="coredns-668d6bf9bc-frj2l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--frj2l-eth0" Oct 30 13:24:01.389289 containerd[1604]: 2025-10-30 13:24:01.375 [INFO][4459] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236" Namespace="kube-system" Pod="coredns-668d6bf9bc-frj2l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--frj2l-eth0" Oct 30 13:24:01.389401 containerd[1604]: 2025-10-30 13:24:01.376 [INFO][4459] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236" Namespace="kube-system" Pod="coredns-668d6bf9bc-frj2l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--frj2l-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--frj2l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b3ef1af1-9e52-4beb-a533-c27d9b225aed", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 23, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236", Pod:"coredns-668d6bf9bc-frj2l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic753128fc90", MAC:"2e:fd:25:d3:d2:f2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:24:01.389401 containerd[1604]: 2025-10-30 13:24:01.384 [INFO][4459] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236" Namespace="kube-system" Pod="coredns-668d6bf9bc-frj2l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--frj2l-eth0" Oct 30 13:24:01.393878 kubelet[2746]: E1030 13:24:01.393826 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9656b5c49-956xq" podUID="e6887d28-f8bd-4d4f-b72b-6f4de4992ef6" Oct 30 13:24:01.420098 containerd[1604]: time="2025-10-30T13:24:01.420031650Z" level=info msg="connecting to shim 2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236" address="unix:///run/containerd/s/fbdce538396949e4eee0b66a27318a83bcb9a4d957f098e498e2532d1ce883c4" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:24:01.457574 systemd[1]: Started cri-containerd-2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236.scope - libcontainer container 2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236. 
Oct 30 13:24:01.473824 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:24:01.484025 systemd-networkd[1505]: cali9b50334e472: Link UP Oct 30 13:24:01.484990 systemd-networkd[1505]: cali9b50334e472: Gained carrier Oct 30 13:24:01.558803 containerd[1604]: time="2025-10-30T13:24:01.558190835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-frj2l,Uid:b3ef1af1-9e52-4beb-a533-c27d9b225aed,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236\"" Oct 30 13:24:01.559887 kubelet[2746]: E1030 13:24:01.559816 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:24:01.564484 containerd[1604]: time="2025-10-30T13:24:01.564440783Z" level=info msg="CreateContainer within sandbox \"2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 13:24:01.566158 containerd[1604]: 2025-10-30 13:24:01.287 [INFO][4465] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 13:24:01.566158 containerd[1604]: 2025-10-30 13:24:01.304 [INFO][4465] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9656b5c49--8rh5p-eth0 calico-apiserver-9656b5c49- calico-apiserver 789594bd-b894-4820-937c-e5586bffb18c 864 0 2025-10-30 13:23:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9656b5c49 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9656b5c49-8rh5p eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali9b50334e472 [] [] }} ContainerID="1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830" Namespace="calico-apiserver" Pod="calico-apiserver-9656b5c49-8rh5p" WorkloadEndpoint="localhost-k8s-calico--apiserver--9656b5c49--8rh5p-" Oct 30 13:24:01.566158 containerd[1604]: 2025-10-30 13:24:01.304 [INFO][4465] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830" Namespace="calico-apiserver" Pod="calico-apiserver-9656b5c49-8rh5p" WorkloadEndpoint="localhost-k8s-calico--apiserver--9656b5c49--8rh5p-eth0" Oct 30 13:24:01.566158 containerd[1604]: 2025-10-30 13:24:01.348 [INFO][4493] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830" HandleID="k8s-pod-network.1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830" Workload="localhost-k8s-calico--apiserver--9656b5c49--8rh5p-eth0" Oct 30 13:24:01.566158 containerd[1604]: 2025-10-30 13:24:01.348 [INFO][4493] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830" HandleID="k8s-pod-network.1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830" Workload="localhost-k8s-calico--apiserver--9656b5c49--8rh5p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9656b5c49-8rh5p", "timestamp":"2025-10-30 13:24:01.348202198 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 13:24:01.566158 containerd[1604]: 2025-10-30 13:24:01.348 [INFO][4493] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 30 13:24:01.566158 containerd[1604]: 2025-10-30 13:24:01.366 [INFO][4493] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 13:24:01.566158 containerd[1604]: 2025-10-30 13:24:01.366 [INFO][4493] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 13:24:01.566158 containerd[1604]: 2025-10-30 13:24:01.438 [INFO][4493] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830" host="localhost" Oct 30 13:24:01.566158 containerd[1604]: 2025-10-30 13:24:01.447 [INFO][4493] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 13:24:01.566158 containerd[1604]: 2025-10-30 13:24:01.452 [INFO][4493] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 13:24:01.566158 containerd[1604]: 2025-10-30 13:24:01.454 [INFO][4493] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 13:24:01.566158 containerd[1604]: 2025-10-30 13:24:01.456 [INFO][4493] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 13:24:01.566158 containerd[1604]: 2025-10-30 13:24:01.456 [INFO][4493] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830" host="localhost" Oct 30 13:24:01.566158 containerd[1604]: 2025-10-30 13:24:01.458 [INFO][4493] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830 Oct 30 13:24:01.566158 containerd[1604]: 2025-10-30 13:24:01.463 [INFO][4493] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830" host="localhost" Oct 30 13:24:01.566158 containerd[1604]: 2025-10-30 13:24:01.474 [INFO][4493] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830" host="localhost" Oct 30 13:24:01.566158 containerd[1604]: 2025-10-30 13:24:01.474 [INFO][4493] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830" host="localhost" Oct 30 13:24:01.566158 containerd[1604]: 2025-10-30 13:24:01.475 [INFO][4493] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 13:24:01.566158 containerd[1604]: 2025-10-30 13:24:01.475 [INFO][4493] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830" HandleID="k8s-pod-network.1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830" Workload="localhost-k8s-calico--apiserver--9656b5c49--8rh5p-eth0" Oct 30 13:24:01.567298 containerd[1604]: 2025-10-30 13:24:01.479 [INFO][4465] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830" Namespace="calico-apiserver" Pod="calico-apiserver-9656b5c49-8rh5p" WorkloadEndpoint="localhost-k8s-calico--apiserver--9656b5c49--8rh5p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9656b5c49--8rh5p-eth0", GenerateName:"calico-apiserver-9656b5c49-", Namespace:"calico-apiserver", SelfLink:"", UID:"789594bd-b894-4820-937c-e5586bffb18c", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 23, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9656b5c49", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9656b5c49-8rh5p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b50334e472", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:24:01.567298 containerd[1604]: 2025-10-30 13:24:01.479 [INFO][4465] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830" Namespace="calico-apiserver" Pod="calico-apiserver-9656b5c49-8rh5p" WorkloadEndpoint="localhost-k8s-calico--apiserver--9656b5c49--8rh5p-eth0" Oct 30 13:24:01.567298 containerd[1604]: 2025-10-30 13:24:01.479 [INFO][4465] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b50334e472 ContainerID="1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830" Namespace="calico-apiserver" Pod="calico-apiserver-9656b5c49-8rh5p" WorkloadEndpoint="localhost-k8s-calico--apiserver--9656b5c49--8rh5p-eth0" Oct 30 13:24:01.567298 containerd[1604]: 2025-10-30 13:24:01.487 [INFO][4465] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830" Namespace="calico-apiserver" Pod="calico-apiserver-9656b5c49-8rh5p" WorkloadEndpoint="localhost-k8s-calico--apiserver--9656b5c49--8rh5p-eth0" Oct 30 13:24:01.567298 containerd[1604]: 2025-10-30 13:24:01.487 
[INFO][4465] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830" Namespace="calico-apiserver" Pod="calico-apiserver-9656b5c49-8rh5p" WorkloadEndpoint="localhost-k8s-calico--apiserver--9656b5c49--8rh5p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9656b5c49--8rh5p-eth0", GenerateName:"calico-apiserver-9656b5c49-", Namespace:"calico-apiserver", SelfLink:"", UID:"789594bd-b894-4820-937c-e5586bffb18c", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 23, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9656b5c49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830", Pod:"calico-apiserver-9656b5c49-8rh5p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b50334e472", MAC:"d6:84:d0:53:1c:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:24:01.567298 containerd[1604]: 2025-10-30 13:24:01.559 [INFO][4465] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830" Namespace="calico-apiserver" Pod="calico-apiserver-9656b5c49-8rh5p" WorkloadEndpoint="localhost-k8s-calico--apiserver--9656b5c49--8rh5p-eth0" Oct 30 13:24:01.584314 containerd[1604]: time="2025-10-30T13:24:01.584266811Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:24:01.586345 containerd[1604]: time="2025-10-30T13:24:01.586303160Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 13:24:01.586413 containerd[1604]: time="2025-10-30T13:24:01.586401627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 13:24:01.586686 kubelet[2746]: E1030 13:24:01.586617 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 13:24:01.586686 kubelet[2746]: E1030 13:24:01.586684 2746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 13:24:01.587014 containerd[1604]: time="2025-10-30T13:24:01.586962257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 13:24:01.587331 kubelet[2746]: E1030 
13:24:01.587280 2746 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nwng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jhg5h_calico-system(9d8a307a-f326-4a48-b1d2-dfbdf32a2608): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 13:24:01.588426 containerd[1604]: time="2025-10-30T13:24:01.588388396Z" level=info msg="Container 546a037727828afcfb1cd64e368a58560ff879b7eabf87f571c74f6a8e8db066: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:24:01.588637 kubelet[2746]: E1030 13:24:01.588602 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jhg5h" podUID="9d8a307a-f326-4a48-b1d2-dfbdf32a2608" Oct 30 13:24:01.590078 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3205350070.mount: Deactivated successfully. Oct 30 13:24:01.599145 containerd[1604]: time="2025-10-30T13:24:01.599089680Z" level=info msg="CreateContainer within sandbox \"2bdac49c3d193cea6abd0dbc730d240b59e8c7d3c6ae6fffd4c1d2bb4a077236\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"546a037727828afcfb1cd64e368a58560ff879b7eabf87f571c74f6a8e8db066\"" Oct 30 13:24:01.602745 containerd[1604]: time="2025-10-30T13:24:01.602679794Z" level=info msg="connecting to shim 1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830" address="unix:///run/containerd/s/f889810430537cba763c79579d9572dba0e64210ec0d8d38a2a46a8add74aa6d" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:24:01.607139 containerd[1604]: time="2025-10-30T13:24:01.606048896Z" level=info msg="StartContainer for \"546a037727828afcfb1cd64e368a58560ff879b7eabf87f571c74f6a8e8db066\"" Oct 30 13:24:01.607139 containerd[1604]: time="2025-10-30T13:24:01.606954586Z" level=info msg="connecting to shim 546a037727828afcfb1cd64e368a58560ff879b7eabf87f571c74f6a8e8db066" address="unix:///run/containerd/s/fbdce538396949e4eee0b66a27318a83bcb9a4d957f098e498e2532d1ce883c4" protocol=ttrpc version=3 Oct 30 13:24:01.634269 systemd[1]: Started cri-containerd-546a037727828afcfb1cd64e368a58560ff879b7eabf87f571c74f6a8e8db066.scope - libcontainer container 546a037727828afcfb1cd64e368a58560ff879b7eabf87f571c74f6a8e8db066. Oct 30 13:24:01.638353 systemd[1]: Started cri-containerd-1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830.scope - libcontainer container 1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830. 
Oct 30 13:24:01.655028 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:24:01.680026 containerd[1604]: time="2025-10-30T13:24:01.679975099Z" level=info msg="StartContainer for \"546a037727828afcfb1cd64e368a58560ff879b7eabf87f571c74f6a8e8db066\" returns successfully" Oct 30 13:24:01.699609 containerd[1604]: time="2025-10-30T13:24:01.699560615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9656b5c49-8rh5p,Uid:789594bd-b894-4820-937c-e5586bffb18c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1203be51e7ea46d5a05edec45f7075f27aefebe3d9a0396506d9662bde73a830\"" Oct 30 13:24:01.757317 systemd-networkd[1505]: caliefc1f894c9f: Gained IPv6LL Oct 30 13:24:02.206288 systemd-networkd[1505]: cali6f89ba8dc47: Gained IPv6LL Oct 30 13:24:02.255641 kubelet[2746]: E1030 13:24:02.255582 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:24:02.256157 containerd[1604]: time="2025-10-30T13:24:02.256088362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m4gfm,Uid:413d6b3f-f010-4e44-b7d6-60a3ec02eda4,Namespace:kube-system,Attempt:0,}" Oct 30 13:24:02.270875 systemd-networkd[1505]: calif51d139d547: Gained IPv6LL Oct 30 13:24:02.372827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3518405629.mount: Deactivated successfully. 
Oct 30 13:24:02.380292 systemd-networkd[1505]: cali619cdddc0cc: Link UP Oct 30 13:24:02.380614 systemd-networkd[1505]: cali619cdddc0cc: Gained carrier Oct 30 13:24:02.396618 containerd[1604]: 2025-10-30 13:24:02.285 [INFO][4669] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 13:24:02.396618 containerd[1604]: 2025-10-30 13:24:02.296 [INFO][4669] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--m4gfm-eth0 coredns-668d6bf9bc- kube-system 413d6b3f-f010-4e44-b7d6-60a3ec02eda4 855 0 2025-10-30 13:23:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-m4gfm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali619cdddc0cc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-m4gfm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m4gfm-" Oct 30 13:24:02.396618 containerd[1604]: 2025-10-30 13:24:02.296 [INFO][4669] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-m4gfm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m4gfm-eth0" Oct 30 13:24:02.396618 containerd[1604]: 2025-10-30 13:24:02.324 [INFO][4685] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9" HandleID="k8s-pod-network.509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9" Workload="localhost-k8s-coredns--668d6bf9bc--m4gfm-eth0" Oct 30 13:24:02.396618 containerd[1604]: 2025-10-30 13:24:02.324 [INFO][4685] ipam/ipam_plugin.go 
275: Auto assigning IP ContainerID="509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9" HandleID="k8s-pod-network.509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9" Workload="localhost-k8s-coredns--668d6bf9bc--m4gfm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e7d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-m4gfm", "timestamp":"2025-10-30 13:24:02.324289279 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 13:24:02.396618 containerd[1604]: 2025-10-30 13:24:02.324 [INFO][4685] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 13:24:02.396618 containerd[1604]: 2025-10-30 13:24:02.324 [INFO][4685] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 13:24:02.396618 containerd[1604]: 2025-10-30 13:24:02.324 [INFO][4685] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 13:24:02.396618 containerd[1604]: 2025-10-30 13:24:02.331 [INFO][4685] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9" host="localhost" Oct 30 13:24:02.396618 containerd[1604]: 2025-10-30 13:24:02.336 [INFO][4685] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 13:24:02.396618 containerd[1604]: 2025-10-30 13:24:02.345 [INFO][4685] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 13:24:02.396618 containerd[1604]: 2025-10-30 13:24:02.349 [INFO][4685] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 13:24:02.396618 containerd[1604]: 2025-10-30 13:24:02.352 [INFO][4685] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Oct 30 13:24:02.396618 containerd[1604]: 2025-10-30 13:24:02.352 [INFO][4685] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9" host="localhost" Oct 30 13:24:02.396618 containerd[1604]: 2025-10-30 13:24:02.354 [INFO][4685] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9 Oct 30 13:24:02.396618 containerd[1604]: 2025-10-30 13:24:02.359 [INFO][4685] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9" host="localhost" Oct 30 13:24:02.396618 containerd[1604]: 2025-10-30 13:24:02.371 [INFO][4685] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9" host="localhost" Oct 30 13:24:02.396618 containerd[1604]: 2025-10-30 13:24:02.371 [INFO][4685] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9" host="localhost" Oct 30 13:24:02.396618 containerd[1604]: 2025-10-30 13:24:02.371 [INFO][4685] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 13:24:02.396618 containerd[1604]: 2025-10-30 13:24:02.371 [INFO][4685] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9" HandleID="k8s-pod-network.509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9" Workload="localhost-k8s-coredns--668d6bf9bc--m4gfm-eth0" Oct 30 13:24:02.397912 containerd[1604]: 2025-10-30 13:24:02.377 [INFO][4669] cni-plugin/k8s.go 418: Populated endpoint ContainerID="509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-m4gfm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m4gfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m4gfm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"413d6b3f-f010-4e44-b7d6-60a3ec02eda4", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 23, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-m4gfm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali619cdddc0cc", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:24:02.397912 containerd[1604]: 2025-10-30 13:24:02.377 [INFO][4669] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-m4gfm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m4gfm-eth0" Oct 30 13:24:02.397912 containerd[1604]: 2025-10-30 13:24:02.377 [INFO][4669] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali619cdddc0cc ContainerID="509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-m4gfm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m4gfm-eth0" Oct 30 13:24:02.397912 containerd[1604]: 2025-10-30 13:24:02.380 [INFO][4669] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-m4gfm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m4gfm-eth0" Oct 30 13:24:02.397912 containerd[1604]: 2025-10-30 13:24:02.381 [INFO][4669] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-m4gfm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m4gfm-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m4gfm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"413d6b3f-f010-4e44-b7d6-60a3ec02eda4", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 23, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9", Pod:"coredns-668d6bf9bc-m4gfm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali619cdddc0cc", MAC:"ca:8e:44:d0:ea:68", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:24:02.397912 containerd[1604]: 2025-10-30 13:24:02.391 [INFO][4669] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-m4gfm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m4gfm-eth0" Oct 30 13:24:02.397723 systemd[1]: Started sshd@8-10.0.0.72:22-10.0.0.1:49514.service - OpenSSH per-connection server daemon (10.0.0.1:49514). Oct 30 13:24:02.405794 kubelet[2746]: E1030 13:24:02.405738 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:24:02.407362 kubelet[2746]: E1030 13:24:02.407306 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9656b5c49-956xq" podUID="e6887d28-f8bd-4d4f-b72b-6f4de4992ef6" Oct 30 13:24:02.407600 kubelet[2746]: E1030 13:24:02.407373 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jhg5h" podUID="9d8a307a-f326-4a48-b1d2-dfbdf32a2608" Oct 30 13:24:02.442484 containerd[1604]: time="2025-10-30T13:24:02.442404954Z" level=info msg="connecting to shim 
509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9" address="unix:///run/containerd/s/de3618557ca942c4fa22d4cd103d1d2e7643807e2d475441a3f9a9580ea4ae60" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:24:02.454944 kubelet[2746]: I1030 13:24:02.454874 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-frj2l" podStartSLOduration=38.454853417 podStartE2EDuration="38.454853417s" podCreationTimestamp="2025-10-30 13:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:24:02.44117008 +0000 UTC m=+43.275614355" watchObservedRunningTime="2025-10-30 13:24:02.454853417 +0000 UTC m=+43.289297672" Oct 30 13:24:02.493375 systemd[1]: Started cri-containerd-509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9.scope - libcontainer container 509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9. Oct 30 13:24:02.495255 sshd[4695]: Accepted publickey for core from 10.0.0.1 port 49514 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:24:02.497016 sshd-session[4695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:24:02.502581 systemd-logind[1582]: New session 9 of user core. Oct 30 13:24:02.508279 systemd[1]: Started session-9.scope - Session 9 of User core. 
Oct 30 13:24:02.514198 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:24:02.519281 containerd[1604]: time="2025-10-30T13:24:02.519226578Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:24:02.520400 containerd[1604]: time="2025-10-30T13:24:02.520335501Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 13:24:02.522196 containerd[1604]: time="2025-10-30T13:24:02.520639356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 13:24:02.522311 kubelet[2746]: E1030 13:24:02.522261 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 13:24:02.522384 kubelet[2746]: E1030 13:24:02.522323 2746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 13:24:02.522639 kubelet[2746]: E1030 13:24:02.522556 2746 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xptf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2t6tn_calico-system(174a8f7f-c864-44be-b45c-d548b2df28c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 13:24:02.523043 containerd[1604]: time="2025-10-30T13:24:02.522984144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 13:24:02.556064 containerd[1604]: time="2025-10-30T13:24:02.556013335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m4gfm,Uid:413d6b3f-f010-4e44-b7d6-60a3ec02eda4,Namespace:kube-system,Attempt:0,} returns sandbox id \"509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9\"" Oct 30 13:24:02.557264 kubelet[2746]: E1030 13:24:02.557225 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:24:02.560795 containerd[1604]: time="2025-10-30T13:24:02.560743272Z" level=info msg="CreateContainer within sandbox \"509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 13:24:02.577159 containerd[1604]: time="2025-10-30T13:24:02.576408563Z" level=info msg="Container 9f6a239d6c19623f54557f005fac9e8a0fd45302417e8b07f6dac433a4e18192: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:24:02.584982 containerd[1604]: time="2025-10-30T13:24:02.584924351Z" level=info msg="CreateContainer within sandbox \"509d9715662ad9a9256aa5731815365be6e4fa9af1cd63d7dc1129e38c53c0d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9f6a239d6c19623f54557f005fac9e8a0fd45302417e8b07f6dac433a4e18192\"" Oct 30 13:24:02.587084 containerd[1604]: time="2025-10-30T13:24:02.585898044Z" level=info msg="StartContainer for \"9f6a239d6c19623f54557f005fac9e8a0fd45302417e8b07f6dac433a4e18192\"" Oct 30 13:24:02.587084 containerd[1604]: time="2025-10-30T13:24:02.587001646Z" level=info msg="connecting to shim 9f6a239d6c19623f54557f005fac9e8a0fd45302417e8b07f6dac433a4e18192" 
address="unix:///run/containerd/s/de3618557ca942c4fa22d4cd103d1d2e7643807e2d475441a3f9a9580ea4ae60" protocol=ttrpc version=3 Oct 30 13:24:02.619328 systemd[1]: Started cri-containerd-9f6a239d6c19623f54557f005fac9e8a0fd45302417e8b07f6dac433a4e18192.scope - libcontainer container 9f6a239d6c19623f54557f005fac9e8a0fd45302417e8b07f6dac433a4e18192. Oct 30 13:24:02.631204 sshd[4748]: Connection closed by 10.0.0.1 port 49514 Oct 30 13:24:02.631544 sshd-session[4695]: pam_unix(sshd:session): session closed for user core Oct 30 13:24:02.638261 systemd[1]: sshd@8-10.0.0.72:22-10.0.0.1:49514.service: Deactivated successfully. Oct 30 13:24:02.641360 systemd[1]: session-9.scope: Deactivated successfully. Oct 30 13:24:02.642632 systemd-logind[1582]: Session 9 logged out. Waiting for processes to exit. Oct 30 13:24:02.645689 systemd-logind[1582]: Removed session 9. Oct 30 13:24:02.660087 containerd[1604]: time="2025-10-30T13:24:02.660036076Z" level=info msg="StartContainer for \"9f6a239d6c19623f54557f005fac9e8a0fd45302417e8b07f6dac433a4e18192\" returns successfully" Oct 30 13:24:02.717288 systemd-networkd[1505]: cali9b50334e472: Gained IPv6LL Oct 30 13:24:02.717673 systemd-networkd[1505]: calic753128fc90: Gained IPv6LL Oct 30 13:24:03.034807 containerd[1604]: time="2025-10-30T13:24:03.034743292Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:24:03.042400 containerd[1604]: time="2025-10-30T13:24:03.042318490Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 13:24:03.042400 containerd[1604]: time="2025-10-30T13:24:03.042366856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 13:24:03.042553 
kubelet[2746]: E1030 13:24:03.042499 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:24:03.042655 kubelet[2746]: E1030 13:24:03.042560 2746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:24:03.043056 containerd[1604]: time="2025-10-30T13:24:03.042859347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 13:24:03.043250 kubelet[2746]: E1030 13:24:03.042877 2746 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hptj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9656b5c49-8rh5p_calico-apiserver(789594bd-b894-4820-937c-e5586bffb18c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 13:24:03.044149 kubelet[2746]: E1030 13:24:03.044089 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9656b5c49-8rh5p" podUID="789594bd-b894-4820-937c-e5586bffb18c" Oct 30 13:24:03.368711 containerd[1604]: time="2025-10-30T13:24:03.368521888Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:24:03.409564 kubelet[2746]: E1030 13:24:03.408248 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:24:03.409564 kubelet[2746]: E1030 13:24:03.408980 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:24:03.409564 kubelet[2746]: E1030 13:24:03.409445 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9656b5c49-8rh5p" 
podUID="789594bd-b894-4820-937c-e5586bffb18c" Oct 30 13:24:03.436941 containerd[1604]: time="2025-10-30T13:24:03.436811553Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 13:24:03.436941 containerd[1604]: time="2025-10-30T13:24:03.436871332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 13:24:03.437527 kubelet[2746]: E1030 13:24:03.437012 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 13:24:03.437527 kubelet[2746]: E1030 13:24:03.437094 2746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 13:24:03.437527 kubelet[2746]: E1030 13:24:03.437292 2746 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xptf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2t6tn_calico-system(174a8f7f-c864-44be-b45c-d548b2df28c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 13:24:03.438855 kubelet[2746]: E1030 13:24:03.438798 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2t6tn" podUID="174a8f7f-c864-44be-b45c-d548b2df28c8" Oct 30 13:24:03.465766 kubelet[2746]: I1030 13:24:03.465663 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-m4gfm" podStartSLOduration=39.465647297 podStartE2EDuration="39.465647297s" podCreationTimestamp="2025-10-30 13:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:24:03.465289073 +0000 UTC m=+44.299733328" watchObservedRunningTime="2025-10-30 13:24:03.465647297 +0000 UTC m=+44.300091552" Oct 30 13:24:03.832513 kubelet[2746]: I1030 13:24:03.832452 2746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 30 13:24:03.832973 kubelet[2746]: E1030 13:24:03.832920 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:24:03.835338 
kubelet[2746]: I1030 13:24:03.835271 2746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 30 13:24:03.835996 kubelet[2746]: E1030 13:24:03.835949 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:24:04.007589 containerd[1604]: time="2025-10-30T13:24:04.007530977Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30c419851e2ccd4a9edb2e6339098931e8d8a03d776dffd6bc21cf0e0f3d50cb\" id:\"2905cb56b318e5f255b12c80dd794452b442ebbbd6c703502993e75ec234a667\" pid:4843 exit_status:1 exited_at:{seconds:1761830644 nanos:6764012}" Oct 30 13:24:04.093284 containerd[1604]: time="2025-10-30T13:24:04.093142790Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30c419851e2ccd4a9edb2e6339098931e8d8a03d776dffd6bc21cf0e0f3d50cb\" id:\"484bccc4515e37d38aaec9205fcee3def38a79fbba2e897dabba38639d08c828\" pid:4885 exit_status:1 exited_at:{seconds:1761830644 nanos:92789889}" Oct 30 13:24:04.381307 systemd-networkd[1505]: cali619cdddc0cc: Gained IPv6LL Oct 30 13:24:04.410589 kubelet[2746]: E1030 13:24:04.410197 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:24:04.410589 kubelet[2746]: E1030 13:24:04.410406 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:24:04.411061 kubelet[2746]: E1030 13:24:04.410716 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:24:04.415949 kubelet[2746]: E1030 13:24:04.415885 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2t6tn" podUID="174a8f7f-c864-44be-b45c-d548b2df28c8" Oct 30 13:24:04.916656 systemd-networkd[1505]: vxlan.calico: Link UP Oct 30 13:24:04.916667 systemd-networkd[1505]: vxlan.calico: Gained carrier Oct 30 13:24:05.413508 kubelet[2746]: E1030 13:24:05.413094 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:24:05.413508 kubelet[2746]: E1030 13:24:05.413281 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:24:06.366356 systemd-networkd[1505]: vxlan.calico: Gained IPv6LL Oct 30 13:24:06.414585 kubelet[2746]: E1030 13:24:06.414549 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:24:07.647903 systemd[1]: Started sshd@9-10.0.0.72:22-10.0.0.1:55910.service - OpenSSH per-connection server daemon 
(10.0.0.1:55910). Oct 30 13:24:07.738463 sshd[5047]: Accepted publickey for core from 10.0.0.1 port 55910 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:24:07.780296 sshd-session[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:24:07.784799 systemd-logind[1582]: New session 10 of user core. Oct 30 13:24:07.795261 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 30 13:24:07.890166 sshd[5050]: Connection closed by 10.0.0.1 port 55910 Oct 30 13:24:07.890442 sshd-session[5047]: pam_unix(sshd:session): session closed for user core Oct 30 13:24:07.895476 systemd[1]: sshd@9-10.0.0.72:22-10.0.0.1:55910.service: Deactivated successfully. Oct 30 13:24:07.897704 systemd[1]: session-10.scope: Deactivated successfully. Oct 30 13:24:07.898587 systemd-logind[1582]: Session 10 logged out. Waiting for processes to exit. Oct 30 13:24:07.899843 systemd-logind[1582]: Removed session 10. Oct 30 13:24:12.256053 containerd[1604]: time="2025-10-30T13:24:12.255988884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 13:24:12.636814 containerd[1604]: time="2025-10-30T13:24:12.636750342Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:24:12.637870 containerd[1604]: time="2025-10-30T13:24:12.637834013Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 13:24:12.637952 containerd[1604]: time="2025-10-30T13:24:12.637873231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 13:24:12.638113 kubelet[2746]: E1030 13:24:12.638064 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 13:24:12.638560 kubelet[2746]: E1030 13:24:12.638142 2746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 13:24:12.638560 kubelet[2746]: E1030 13:24:12.638412 2746 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:12669cb952d94aca8038ce3b6ca7fece,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x2rdx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfi
le:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-784cfbd5cb-gkdhk_calico-system(786a9200-a386-410f-ada7-d8428b9d68f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 13:24:12.638730 containerd[1604]: time="2025-10-30T13:24:12.638627304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 13:24:12.908083 systemd[1]: Started sshd@10-10.0.0.72:22-10.0.0.1:55918.service - OpenSSH per-connection server daemon (10.0.0.1:55918). Oct 30 13:24:12.952066 containerd[1604]: time="2025-10-30T13:24:12.952016158Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:24:12.962926 sshd[5075]: Accepted publickey for core from 10.0.0.1 port 55918 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:24:12.964609 sshd-session[5075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:24:12.968926 systemd-logind[1582]: New session 11 of user core. Oct 30 13:24:12.980252 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 30 13:24:12.999176 containerd[1604]: time="2025-10-30T13:24:12.999062426Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 13:24:12.999329 containerd[1604]: time="2025-10-30T13:24:12.999079229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 13:24:12.999902 kubelet[2746]: E1030 13:24:12.999657 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 13:24:12.999902 kubelet[2746]: E1030 13:24:12.999733 2746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 13:24:12.999902 kubelet[2746]: E1030 13:24:12.999966 2746 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4b6f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77b486d6f4-rp89s_calico-system(6ec85b99-8a3f-41cb-bb15-d4714acb86dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 13:24:13.000343 containerd[1604]: time="2025-10-30T13:24:13.000321122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 13:24:13.001103 kubelet[2746]: E1030 13:24:13.001065 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77b486d6f4-rp89s" podUID="6ec85b99-8a3f-41cb-bb15-d4714acb86dc" Oct 30 13:24:13.054596 sshd[5078]: 
Connection closed by 10.0.0.1 port 55918 Oct 30 13:24:13.054905 sshd-session[5075]: pam_unix(sshd:session): session closed for user core Oct 30 13:24:13.066458 systemd[1]: sshd@10-10.0.0.72:22-10.0.0.1:55918.service: Deactivated successfully. Oct 30 13:24:13.068372 systemd[1]: session-11.scope: Deactivated successfully. Oct 30 13:24:13.069260 systemd-logind[1582]: Session 11 logged out. Waiting for processes to exit. Oct 30 13:24:13.071832 systemd[1]: Started sshd@11-10.0.0.72:22-10.0.0.1:55922.service - OpenSSH per-connection server daemon (10.0.0.1:55922). Oct 30 13:24:13.073184 systemd-logind[1582]: Removed session 11. Oct 30 13:24:13.135031 sshd[5092]: Accepted publickey for core from 10.0.0.1 port 55922 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:24:13.136698 sshd-session[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:24:13.141013 systemd-logind[1582]: New session 12 of user core. Oct 30 13:24:13.155239 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 30 13:24:13.266267 sshd[5096]: Connection closed by 10.0.0.1 port 55922 Oct 30 13:24:13.268514 sshd-session[5092]: pam_unix(sshd:session): session closed for user core Oct 30 13:24:13.281223 systemd[1]: sshd@11-10.0.0.72:22-10.0.0.1:55922.service: Deactivated successfully. Oct 30 13:24:13.287851 systemd[1]: session-12.scope: Deactivated successfully. Oct 30 13:24:13.290176 systemd-logind[1582]: Session 12 logged out. Waiting for processes to exit. Oct 30 13:24:13.297220 systemd[1]: Started sshd@12-10.0.0.72:22-10.0.0.1:55930.service - OpenSSH per-connection server daemon (10.0.0.1:55930). Oct 30 13:24:13.298190 systemd-logind[1582]: Removed session 12. 
Oct 30 13:24:13.353526 containerd[1604]: time="2025-10-30T13:24:13.353460414Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:24:13.368275 sshd[5108]: Accepted publickey for core from 10.0.0.1 port 55930 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:24:13.370164 sshd-session[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:24:13.375177 systemd-logind[1582]: New session 13 of user core. Oct 30 13:24:13.385242 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 30 13:24:13.517065 containerd[1604]: time="2025-10-30T13:24:13.516879541Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 13:24:13.517065 containerd[1604]: time="2025-10-30T13:24:13.516939168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 13:24:13.517520 kubelet[2746]: E1030 13:24:13.517242 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 13:24:13.517520 kubelet[2746]: E1030 13:24:13.517312 2746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 13:24:13.517796 containerd[1604]: time="2025-10-30T13:24:13.517768467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 13:24:13.517839 kubelet[2746]: E1030 13:24:13.517759 2746 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2rdx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFrom
Source{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-784cfbd5cb-gkdhk_calico-system(786a9200-a386-410f-ada7-d8428b9d68f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 13:24:13.519041 kubelet[2746]: E1030 13:24:13.518972 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-784cfbd5cb-gkdhk" podUID="786a9200-a386-410f-ada7-d8428b9d68f8" Oct 30 13:24:13.894576 sshd[5111]: Connection closed by 10.0.0.1 port 55930 Oct 30 13:24:13.895309 sshd-session[5108]: pam_unix(sshd:session): session closed for user core Oct 30 13:24:13.901516 systemd[1]: sshd@12-10.0.0.72:22-10.0.0.1:55930.service: Deactivated successfully. Oct 30 13:24:13.904723 systemd[1]: session-13.scope: Deactivated successfully. Oct 30 13:24:13.906365 systemd-logind[1582]: Session 13 logged out. Waiting for processes to exit. Oct 30 13:24:13.907905 systemd-logind[1582]: Removed session 13. 
Oct 30 13:24:14.042114 containerd[1604]: time="2025-10-30T13:24:14.042044234Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:24:14.132870 containerd[1604]: time="2025-10-30T13:24:14.132772950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 13:24:14.133087 containerd[1604]: time="2025-10-30T13:24:14.132858557Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 13:24:14.133263 kubelet[2746]: E1030 13:24:14.133197 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:24:14.133263 kubelet[2746]: E1030 13:24:14.133264 2746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:24:14.133745 kubelet[2746]: E1030 13:24:14.133416 2746 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rkpl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9656b5c49-956xq_calico-apiserver(e6887d28-f8bd-4d4f-b72b-6f4de4992ef6): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 13:24:14.134687 kubelet[2746]: E1030 13:24:14.134622 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9656b5c49-956xq" podUID="e6887d28-f8bd-4d4f-b72b-6f4de4992ef6" Oct 30 13:24:14.257002 containerd[1604]: time="2025-10-30T13:24:14.256858451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 13:24:14.702302 containerd[1604]: time="2025-10-30T13:24:14.702229101Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:24:14.732729 containerd[1604]: time="2025-10-30T13:24:14.732650457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 13:24:14.732930 containerd[1604]: time="2025-10-30T13:24:14.732742127Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 13:24:14.733160 kubelet[2746]: E1030 13:24:14.733078 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 13:24:14.733257 kubelet[2746]: E1030 13:24:14.733171 2746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 13:24:14.733982 kubelet[2746]: E1030 13:24:14.733915 2746 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nwng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,Su
bPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jhg5h_calico-system(9d8a307a-f326-4a48-b1d2-dfbdf32a2608): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 13:24:14.735219 kubelet[2746]: E1030 13:24:14.735145 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jhg5h" podUID="9d8a307a-f326-4a48-b1d2-dfbdf32a2608" Oct 30 13:24:15.256277 containerd[1604]: time="2025-10-30T13:24:15.256217028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 13:24:15.677155 containerd[1604]: time="2025-10-30T13:24:15.677052859Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:24:15.683381 containerd[1604]: time="2025-10-30T13:24:15.683346518Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 13:24:15.683582 containerd[1604]: time="2025-10-30T13:24:15.683452055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 13:24:15.683655 kubelet[2746]: E1030 13:24:15.683578 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:24:15.683655 kubelet[2746]: E1030 13:24:15.683633 2746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:24:15.684039 kubelet[2746]: E1030 13:24:15.683776 2746 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hptj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9656b5c49-8rh5p_calico-apiserver(789594bd-b894-4820-937c-e5586bffb18c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 13:24:15.684953 kubelet[2746]: E1030 13:24:15.684920 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9656b5c49-8rh5p" podUID="789594bd-b894-4820-937c-e5586bffb18c" Oct 30 13:24:18.256801 containerd[1604]: time="2025-10-30T13:24:18.256681346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 13:24:18.771708 containerd[1604]: 
time="2025-10-30T13:24:18.771650028Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:24:18.773143 containerd[1604]: time="2025-10-30T13:24:18.773074792Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 13:24:18.773213 containerd[1604]: time="2025-10-30T13:24:18.773191751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 13:24:18.773444 kubelet[2746]: E1030 13:24:18.773393 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 13:24:18.774347 kubelet[2746]: E1030 13:24:18.773462 2746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 13:24:18.774347 kubelet[2746]: E1030 13:24:18.773669 2746 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xptf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2t6tn_calico-system(174a8f7f-c864-44be-b45c-d548b2df28c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 13:24:18.776158 containerd[1604]: time="2025-10-30T13:24:18.776069024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 13:24:18.914629 systemd[1]: Started sshd@13-10.0.0.72:22-10.0.0.1:33178.service - OpenSSH per-connection server daemon (10.0.0.1:33178). Oct 30 13:24:18.971719 sshd[5136]: Accepted publickey for core from 10.0.0.1 port 33178 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:24:18.973474 sshd-session[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:24:18.978083 systemd-logind[1582]: New session 14 of user core. Oct 30 13:24:18.987264 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 30 13:24:19.058168 sshd[5139]: Connection closed by 10.0.0.1 port 33178 Oct 30 13:24:19.058501 sshd-session[5136]: pam_unix(sshd:session): session closed for user core Oct 30 13:24:19.062658 systemd[1]: sshd@13-10.0.0.72:22-10.0.0.1:33178.service: Deactivated successfully. Oct 30 13:24:19.064945 systemd[1]: session-14.scope: Deactivated successfully. Oct 30 13:24:19.066808 systemd-logind[1582]: Session 14 logged out. Waiting for processes to exit. Oct 30 13:24:19.068067 systemd-logind[1582]: Removed session 14. 
Oct 30 13:24:19.159991 containerd[1604]: time="2025-10-30T13:24:19.159936730Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:24:19.161216 containerd[1604]: time="2025-10-30T13:24:19.161162564Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 13:24:19.161311 containerd[1604]: time="2025-10-30T13:24:19.161261106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 13:24:19.161428 kubelet[2746]: E1030 13:24:19.161375 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 13:24:19.161496 kubelet[2746]: E1030 13:24:19.161432 2746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 13:24:19.161611 kubelet[2746]: E1030 13:24:19.161552 2746 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xptf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2t6tn_calico-system(174a8f7f-c864-44be-b45c-d548b2df28c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 13:24:19.162821 kubelet[2746]: E1030 13:24:19.162763 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2t6tn" podUID="174a8f7f-c864-44be-b45c-d548b2df28c8" Oct 30 13:24:23.256508 kubelet[2746]: E1030 13:24:23.256401 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77b486d6f4-rp89s" podUID="6ec85b99-8a3f-41cb-bb15-d4714acb86dc" Oct 30 13:24:24.075252 systemd[1]: Started sshd@14-10.0.0.72:22-10.0.0.1:33188.service - OpenSSH per-connection server daemon (10.0.0.1:33188). 
Oct 30 13:24:24.131133 sshd[5154]: Accepted publickey for core from 10.0.0.1 port 33188 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:24:24.132842 sshd-session[5154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:24:24.137797 systemd-logind[1582]: New session 15 of user core. Oct 30 13:24:24.154287 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 30 13:24:24.224463 sshd[5157]: Connection closed by 10.0.0.1 port 33188 Oct 30 13:24:24.224802 sshd-session[5154]: pam_unix(sshd:session): session closed for user core Oct 30 13:24:24.229863 systemd[1]: sshd@14-10.0.0.72:22-10.0.0.1:33188.service: Deactivated successfully. Oct 30 13:24:24.231982 systemd[1]: session-15.scope: Deactivated successfully. Oct 30 13:24:24.233005 systemd-logind[1582]: Session 15 logged out. Waiting for processes to exit. Oct 30 13:24:24.234400 systemd-logind[1582]: Removed session 15. Oct 30 13:24:26.256711 kubelet[2746]: E1030 13:24:26.256642 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-784cfbd5cb-gkdhk" 
podUID="786a9200-a386-410f-ada7-d8428b9d68f8" Oct 30 13:24:28.256610 kubelet[2746]: E1030 13:24:28.256543 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9656b5c49-956xq" podUID="e6887d28-f8bd-4d4f-b72b-6f4de4992ef6" Oct 30 13:24:28.256610 kubelet[2746]: E1030 13:24:28.256563 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jhg5h" podUID="9d8a307a-f326-4a48-b1d2-dfbdf32a2608" Oct 30 13:24:29.242538 systemd[1]: Started sshd@15-10.0.0.72:22-10.0.0.1:35284.service - OpenSSH per-connection server daemon (10.0.0.1:35284). 
Oct 30 13:24:29.256945 kubelet[2746]: E1030 13:24:29.256240 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9656b5c49-8rh5p" podUID="789594bd-b894-4820-937c-e5586bffb18c" Oct 30 13:24:29.305664 sshd[5181]: Accepted publickey for core from 10.0.0.1 port 35284 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:24:29.307642 sshd-session[5181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:24:29.312649 systemd-logind[1582]: New session 16 of user core. Oct 30 13:24:29.319286 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 30 13:24:29.393562 sshd[5184]: Connection closed by 10.0.0.1 port 35284 Oct 30 13:24:29.393896 sshd-session[5181]: pam_unix(sshd:session): session closed for user core Oct 30 13:24:29.399043 systemd[1]: sshd@15-10.0.0.72:22-10.0.0.1:35284.service: Deactivated successfully. Oct 30 13:24:29.401796 systemd[1]: session-16.scope: Deactivated successfully. Oct 30 13:24:29.402778 systemd-logind[1582]: Session 16 logged out. Waiting for processes to exit. Oct 30 13:24:29.404079 systemd-logind[1582]: Removed session 16. 
Oct 30 13:24:33.257502 kubelet[2746]: E1030 13:24:33.257345 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2t6tn" podUID="174a8f7f-c864-44be-b45c-d548b2df28c8"
Oct 30 13:24:34.086262 containerd[1604]: time="2025-10-30T13:24:34.086202343Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30c419851e2ccd4a9edb2e6339098931e8d8a03d776dffd6bc21cf0e0f3d50cb\" id:\"d932486e752bcf75d6400b4aa27bdd218e33df5d2006dab683792f2ec60c0bef\" pid:5212 exited_at:{seconds:1761830674 nanos:85795548}"
Oct 30 13:24:34.090521 kubelet[2746]: E1030 13:24:34.090492 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:24:34.412033 systemd[1]: Started sshd@16-10.0.0.72:22-10.0.0.1:35296.service - OpenSSH per-connection server daemon (10.0.0.1:35296).
Oct 30 13:24:34.467764 sshd[5225]: Accepted publickey for core from 10.0.0.1 port 35296 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo
Oct 30 13:24:34.469255 sshd-session[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 13:24:34.473518 systemd-logind[1582]: New session 17 of user core.
Oct 30 13:24:34.484245 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 30 13:24:34.565682 sshd[5228]: Connection closed by 10.0.0.1 port 35296
Oct 30 13:24:34.566232 sshd-session[5225]: pam_unix(sshd:session): session closed for user core
Oct 30 13:24:34.578012 systemd[1]: sshd@16-10.0.0.72:22-10.0.0.1:35296.service: Deactivated successfully.
Oct 30 13:24:34.580227 systemd[1]: session-17.scope: Deactivated successfully.
Oct 30 13:24:34.580998 systemd-logind[1582]: Session 17 logged out. Waiting for processes to exit.
Oct 30 13:24:34.584252 systemd[1]: Started sshd@17-10.0.0.72:22-10.0.0.1:35310.service - OpenSSH per-connection server daemon (10.0.0.1:35310).
Oct 30 13:24:34.584921 systemd-logind[1582]: Removed session 17.
Oct 30 13:24:34.647782 sshd[5241]: Accepted publickey for core from 10.0.0.1 port 35310 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo
Oct 30 13:24:34.649422 sshd-session[5241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 13:24:34.654367 systemd-logind[1582]: New session 18 of user core.
Oct 30 13:24:34.664273 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 30 13:24:34.929667 sshd[5244]: Connection closed by 10.0.0.1 port 35310
Oct 30 13:24:34.930918 sshd-session[5241]: pam_unix(sshd:session): session closed for user core
Oct 30 13:24:34.943879 systemd[1]: sshd@17-10.0.0.72:22-10.0.0.1:35310.service: Deactivated successfully.
Oct 30 13:24:34.945798 systemd[1]: session-18.scope: Deactivated successfully.
Oct 30 13:24:34.946596 systemd-logind[1582]: Session 18 logged out. Waiting for processes to exit.
Oct 30 13:24:34.950226 systemd[1]: Started sshd@18-10.0.0.72:22-10.0.0.1:35320.service - OpenSSH per-connection server daemon (10.0.0.1:35320).
Oct 30 13:24:34.954243 systemd-logind[1582]: Removed session 18.
Oct 30 13:24:35.022966 sshd[5256]: Accepted publickey for core from 10.0.0.1 port 35320 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo
Oct 30 13:24:35.024520 sshd-session[5256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 13:24:35.029743 systemd-logind[1582]: New session 19 of user core.
Oct 30 13:24:35.037268 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 30 13:24:35.658782 sshd[5259]: Connection closed by 10.0.0.1 port 35320
Oct 30 13:24:35.659161 sshd-session[5256]: pam_unix(sshd:session): session closed for user core
Oct 30 13:24:35.672794 systemd[1]: sshd@18-10.0.0.72:22-10.0.0.1:35320.service: Deactivated successfully.
Oct 30 13:24:35.678773 systemd[1]: session-19.scope: Deactivated successfully.
Oct 30 13:24:35.682205 systemd-logind[1582]: Session 19 logged out. Waiting for processes to exit.
Oct 30 13:24:35.686095 systemd[1]: Started sshd@19-10.0.0.72:22-10.0.0.1:35336.service - OpenSSH per-connection server daemon (10.0.0.1:35336).
Oct 30 13:24:35.688492 systemd-logind[1582]: Removed session 19.
Oct 30 13:24:35.744549 sshd[5281]: Accepted publickey for core from 10.0.0.1 port 35336 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo
Oct 30 13:24:35.745947 sshd-session[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 13:24:35.750669 systemd-logind[1582]: New session 20 of user core.
Oct 30 13:24:35.765268 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 30 13:24:35.935459 sshd[5284]: Connection closed by 10.0.0.1 port 35336
Oct 30 13:24:35.937358 sshd-session[5281]: pam_unix(sshd:session): session closed for user core
Oct 30 13:24:35.949688 systemd[1]: sshd@19-10.0.0.72:22-10.0.0.1:35336.service: Deactivated successfully.
Oct 30 13:24:35.952041 systemd[1]: session-20.scope: Deactivated successfully.
Oct 30 13:24:35.952831 systemd-logind[1582]: Session 20 logged out. Waiting for processes to exit.
Oct 30 13:24:35.956587 systemd[1]: Started sshd@20-10.0.0.72:22-10.0.0.1:35338.service - OpenSSH per-connection server daemon (10.0.0.1:35338).
Oct 30 13:24:35.957285 systemd-logind[1582]: Removed session 20.
Oct 30 13:24:36.011780 sshd[5295]: Accepted publickey for core from 10.0.0.1 port 35338 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo
Oct 30 13:24:36.013208 sshd-session[5295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 13:24:36.018235 systemd-logind[1582]: New session 21 of user core.
Oct 30 13:24:36.031287 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 30 13:24:36.114361 sshd[5298]: Connection closed by 10.0.0.1 port 35338
Oct 30 13:24:36.114679 sshd-session[5295]: pam_unix(sshd:session): session closed for user core
Oct 30 13:24:36.119552 systemd[1]: sshd@20-10.0.0.72:22-10.0.0.1:35338.service: Deactivated successfully.
Oct 30 13:24:36.121700 systemd[1]: session-21.scope: Deactivated successfully.
Oct 30 13:24:36.124950 systemd-logind[1582]: Session 21 logged out. Waiting for processes to exit.
Oct 30 13:24:36.126050 systemd-logind[1582]: Removed session 21.
Oct 30 13:24:36.255838 kubelet[2746]: E1030 13:24:36.255696 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:24:36.256414 kubelet[2746]: E1030 13:24:36.256331 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:24:37.257776 containerd[1604]: time="2025-10-30T13:24:37.257714906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Oct 30 13:24:37.654884 containerd[1604]: time="2025-10-30T13:24:37.654805023Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 13:24:37.673301 containerd[1604]: time="2025-10-30T13:24:37.673239578Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Oct 30 13:24:37.673444 containerd[1604]: time="2025-10-30T13:24:37.673320674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Oct 30 13:24:37.673533 kubelet[2746]: E1030 13:24:37.673478 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 30 13:24:37.673913 kubelet[2746]: E1030 13:24:37.673537 2746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc =
failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 13:24:37.673913 kubelet[2746]: E1030 13:24:37.673668 2746 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4b6f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77b486d6f4-rp89s_calico-system(6ec85b99-8a3f-41cb-bb15-d4714acb86dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 13:24:37.674894 kubelet[2746]: E1030 13:24:37.674841 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-77b486d6f4-rp89s" podUID="6ec85b99-8a3f-41cb-bb15-d4714acb86dc" Oct 30 13:24:38.257330 containerd[1604]: time="2025-10-30T13:24:38.256988399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 13:24:38.594955 containerd[1604]: time="2025-10-30T13:24:38.594893833Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:24:38.596297 containerd[1604]: time="2025-10-30T13:24:38.596241137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 13:24:38.596355 containerd[1604]: time="2025-10-30T13:24:38.596331050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 13:24:38.596935 kubelet[2746]: E1030 13:24:38.596587 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 13:24:38.596935 kubelet[2746]: E1030 13:24:38.596644 2746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 13:24:38.596935 kubelet[2746]: E1030 13:24:38.596782 2746 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:12669cb952d94aca8038ce3b6ca7fece,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x2rdx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-784cfbd5cb-gkdhk_calico-system(786a9200-a386-410f-ada7-d8428b9d68f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 13:24:38.599171 containerd[1604]: time="2025-10-30T13:24:38.599100082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 
13:24:39.171880 containerd[1604]: time="2025-10-30T13:24:39.171816354Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:24:39.175275 containerd[1604]: time="2025-10-30T13:24:39.175216928Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 13:24:39.175338 containerd[1604]: time="2025-10-30T13:24:39.175312963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 13:24:39.175551 kubelet[2746]: E1030 13:24:39.175487 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 13:24:39.175968 kubelet[2746]: E1030 13:24:39.175556 2746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 13:24:39.175968 kubelet[2746]: E1030 13:24:39.175708 2746 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2rdx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-784cfbd5cb-gkdhk_calico-system(786a9200-a386-410f-ada7-d8428b9d68f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 13:24:39.176980 kubelet[2746]: E1030 13:24:39.176885 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-784cfbd5cb-gkdhk" podUID="786a9200-a386-410f-ada7-d8428b9d68f8" Oct 30 13:24:39.256725 containerd[1604]: time="2025-10-30T13:24:39.256653546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 13:24:39.600809 containerd[1604]: time="2025-10-30T13:24:39.600733087Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:24:39.716784 containerd[1604]: time="2025-10-30T13:24:39.716699063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 13:24:39.716987 containerd[1604]: time="2025-10-30T13:24:39.716765421Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 13:24:39.717201 
kubelet[2746]: E1030 13:24:39.717092 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 13:24:39.717276 kubelet[2746]: E1030 13:24:39.717199 2746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 13:24:39.718400 kubelet[2746]: E1030 13:24:39.718328 2746 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,
RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nwng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jhg5h_calico-system(9d8a307a-f326-4a48-b1d2-dfbdf32a2608): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 13:24:39.719579 kubelet[2746]: E1030 13:24:39.719518 2746 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jhg5h" podUID="9d8a307a-f326-4a48-b1d2-dfbdf32a2608" Oct 30 13:24:40.256647 containerd[1604]: time="2025-10-30T13:24:40.256581940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 13:24:40.583854 containerd[1604]: time="2025-10-30T13:24:40.583780594Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:24:40.585145 containerd[1604]: time="2025-10-30T13:24:40.585053542Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 13:24:40.585224 containerd[1604]: time="2025-10-30T13:24:40.585090343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 13:24:40.585443 kubelet[2746]: E1030 13:24:40.585378 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:24:40.585443 kubelet[2746]: E1030 13:24:40.585439 2746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:24:40.585901 kubelet[2746]: E1030 13:24:40.585578 2746 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rkpl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9656b5c49-956xq_calico-apiserver(e6887d28-f8bd-4d4f-b72b-6f4de4992ef6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 13:24:40.586765 kubelet[2746]: E1030 13:24:40.586734 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9656b5c49-956xq" podUID="e6887d28-f8bd-4d4f-b72b-6f4de4992ef6" Oct 30 13:24:41.128270 systemd[1]: Started sshd@21-10.0.0.72:22-10.0.0.1:60094.service - OpenSSH per-connection server daemon (10.0.0.1:60094). 
Oct 30 13:24:41.204998 sshd[5313]: Accepted publickey for core from 10.0.0.1 port 60094 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo
Oct 30 13:24:41.206412 sshd-session[5313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 13:24:41.211243 systemd-logind[1582]: New session 22 of user core.
Oct 30 13:24:41.219290 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 30 13:24:41.309961 sshd[5316]: Connection closed by 10.0.0.1 port 60094
Oct 30 13:24:41.310313 sshd-session[5313]: pam_unix(sshd:session): session closed for user core
Oct 30 13:24:41.315437 systemd[1]: sshd@21-10.0.0.72:22-10.0.0.1:60094.service: Deactivated successfully.
Oct 30 13:24:41.317571 systemd[1]: session-22.scope: Deactivated successfully.
Oct 30 13:24:41.318517 systemd-logind[1582]: Session 22 logged out. Waiting for processes to exit.
Oct 30 13:24:41.319919 systemd-logind[1582]: Removed session 22.
Oct 30 13:24:43.257324 containerd[1604]: time="2025-10-30T13:24:43.257264084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 30 13:24:43.581984 containerd[1604]: time="2025-10-30T13:24:43.581927099Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 13:24:43.582956 containerd[1604]: time="2025-10-30T13:24:43.582899827Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 30 13:24:43.583167 containerd[1604]: time="2025-10-30T13:24:43.582954131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 30 13:24:43.583205 kubelet[2746]: E1030 13:24:43.583040 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc =
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:24:43.583205 kubelet[2746]: E1030 13:24:43.583077 2746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:24:43.583604 kubelet[2746]: E1030 13:24:43.583216 2746 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hptj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9656b5c49-8rh5p_calico-apiserver(789594bd-b894-4820-937c-e5586bffb18c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 13:24:43.584609 kubelet[2746]: E1030 13:24:43.584566 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9656b5c49-8rh5p" podUID="789594bd-b894-4820-937c-e5586bffb18c" Oct 30 13:24:46.255249 kubelet[2746]: E1030 13:24:46.255177 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:24:46.256539 containerd[1604]: time="2025-10-30T13:24:46.256092794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 13:24:46.324665 systemd[1]: Started sshd@22-10.0.0.72:22-10.0.0.1:52190.service - OpenSSH per-connection server daemon (10.0.0.1:52190). Oct 30 13:24:46.385789 sshd[5337]: Accepted publickey for core from 10.0.0.1 port 52190 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:24:46.387183 sshd-session[5337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:24:46.392386 systemd-logind[1582]: New session 23 of user core. Oct 30 13:24:46.402264 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 30 13:24:46.471343 sshd[5340]: Connection closed by 10.0.0.1 port 52190 Oct 30 13:24:46.471655 sshd-session[5337]: pam_unix(sshd:session): session closed for user core Oct 30 13:24:46.475929 systemd[1]: sshd@22-10.0.0.72:22-10.0.0.1:52190.service: Deactivated successfully. Oct 30 13:24:46.478228 systemd[1]: session-23.scope: Deactivated successfully. Oct 30 13:24:46.479377 systemd-logind[1582]: Session 23 logged out. Waiting for processes to exit. Oct 30 13:24:46.480589 systemd-logind[1582]: Removed session 23. 
Oct 30 13:24:47.254927 kubelet[2746]: E1030 13:24:47.254874 2746 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:24:47.619925 containerd[1604]: time="2025-10-30T13:24:47.619850572Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 13:24:47.621033 containerd[1604]: time="2025-10-30T13:24:47.620984515Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Oct 30 13:24:47.621089 containerd[1604]: time="2025-10-30T13:24:47.621030253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Oct 30 13:24:47.621346 kubelet[2746]: E1030 13:24:47.621278 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 30 13:24:47.621615 kubelet[2746]: E1030 13:24:47.621357 2746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 30 13:24:47.621615 kubelet[2746]: E1030 13:24:47.621512 2746 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xptf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2t6tn_calico-system(174a8f7f-c864-44be-b45c-d548b2df28c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Oct 30 13:24:47.623390 containerd[1604]: time="2025-10-30T13:24:47.623348969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Oct 30 13:24:47.982897 containerd[1604]: time="2025-10-30T13:24:47.982756125Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 13:24:47.984283 containerd[1604]: time="2025-10-30T13:24:47.984216495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Oct 30 13:24:47.984365 containerd[1604]: time="2025-10-30T13:24:47.984269818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Oct 30 13:24:47.984484 kubelet[2746]: E1030 13:24:47.984427 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 30 13:24:47.984565 kubelet[2746]: E1030 13:24:47.984487 2746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 30 13:24:47.984670 kubelet[2746]: E1030 13:24:47.984620 2746 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xptf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2t6tn_calico-system(174a8f7f-c864-44be-b45c-d548b2df28c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Oct 30 13:24:47.985865 kubelet[2746]: E1030 13:24:47.985803 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2t6tn" podUID="174a8f7f-c864-44be-b45c-d548b2df28c8"
Oct 30 13:24:49.256916 kubelet[2746]: E1030 13:24:49.256853 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77b486d6f4-rp89s" podUID="6ec85b99-8a3f-41cb-bb15-d4714acb86dc"
Oct 30 13:24:51.489390 systemd[1]: Started sshd@23-10.0.0.72:22-10.0.0.1:52206.service - OpenSSH per-connection server daemon (10.0.0.1:52206).
Oct 30 13:24:51.549490 sshd[5353]: Accepted publickey for core from 10.0.0.1 port 52206 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo
Oct 30 13:24:51.550756 sshd-session[5353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 13:24:51.554968 systemd-logind[1582]: New session 24 of user core.
Oct 30 13:24:51.565266 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 30 13:24:51.635093 sshd[5356]: Connection closed by 10.0.0.1 port 52206
Oct 30 13:24:51.635411 sshd-session[5353]: pam_unix(sshd:session): session closed for user core
Oct 30 13:24:51.640699 systemd[1]: sshd@23-10.0.0.72:22-10.0.0.1:52206.service: Deactivated successfully.
Oct 30 13:24:51.642943 systemd[1]: session-24.scope: Deactivated successfully.
Oct 30 13:24:51.643968 systemd-logind[1582]: Session 24 logged out. Waiting for processes to exit.
Oct 30 13:24:51.645230 systemd-logind[1582]: Removed session 24.
Oct 30 13:24:54.256236 kubelet[2746]: E1030 13:24:54.255663 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jhg5h" podUID="9d8a307a-f326-4a48-b1d2-dfbdf32a2608"
Oct 30 13:24:54.256975 kubelet[2746]: E1030 13:24:54.256922 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-784cfbd5cb-gkdhk" podUID="786a9200-a386-410f-ada7-d8428b9d68f8"
Oct 30 13:24:55.261770 kubelet[2746]: E1030 13:24:55.261278 2746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9656b5c49-956xq" podUID="e6887d28-f8bd-4d4f-b72b-6f4de4992ef6"
Oct 30 13:24:56.648675 systemd[1]: Started sshd@24-10.0.0.72:22-10.0.0.1:36490.service - OpenSSH per-connection server daemon (10.0.0.1:36490).
Oct 30 13:24:56.737584 sshd[5371]: Accepted publickey for core from 10.0.0.1 port 36490 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo
Oct 30 13:24:56.739781 sshd-session[5371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 13:24:56.744673 systemd-logind[1582]: New session 25 of user core.
Oct 30 13:24:56.755269 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 30 13:24:56.879152 sshd[5374]: Connection closed by 10.0.0.1 port 36490
Oct 30 13:24:56.880384 sshd-session[5371]: pam_unix(sshd:session): session closed for user core
Oct 30 13:24:56.885989 systemd-logind[1582]: Session 25 logged out. Waiting for processes to exit.
Oct 30 13:24:56.886770 systemd[1]: sshd@24-10.0.0.72:22-10.0.0.1:36490.service: Deactivated successfully.
Oct 30 13:24:56.891004 systemd[1]: session-25.scope: Deactivated successfully.
Oct 30 13:24:56.896549 systemd-logind[1582]: Removed session 25.