Oct 29 20:39:11.525616 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Oct 29 18:33:09 -00 2025
Oct 29 20:39:11.525676 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=44244d247275a195a4f17c05cb1358191d6e164275fceaedb27ff0bc5031a050
Oct 29 20:39:11.525696 kernel: BIOS-provided physical RAM map:
Oct 29 20:39:11.525708 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 29 20:39:11.525719 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 29 20:39:11.525734 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 29 20:39:11.525748 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Oct 29 20:39:11.525760 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Oct 29 20:39:11.525776 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 29 20:39:11.525787 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct 29 20:39:11.525812 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 29 20:39:11.525826 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 29 20:39:11.525838 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 29 20:39:11.525851 kernel: NX (Execute Disable) protection: active
Oct 29 20:39:11.525866 kernel: APIC: Static calls initialized
Oct 29 20:39:11.525891 kernel: SMBIOS 2.8 present.
Oct 29 20:39:11.525908 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Oct 29 20:39:11.525919 kernel: DMI: Memory slots populated: 1/1
Oct 29 20:39:11.525929 kernel: Hypervisor detected: KVM
Oct 29 20:39:11.525938 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Oct 29 20:39:11.525951 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 29 20:39:11.525961 kernel: kvm-clock: using sched offset of 5645292799 cycles
Oct 29 20:39:11.525971 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 29 20:39:11.525981 kernel: tsc: Detected 2794.748 MHz processor
Oct 29 20:39:11.526007 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 29 20:39:11.526021 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 29 20:39:11.526034 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Oct 29 20:39:11.526045 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 29 20:39:11.526066 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 29 20:39:11.526191 kernel: Using GB pages for direct mapping
Oct 29 20:39:11.526203 kernel: ACPI: Early table checksum verification disabled
Oct 29 20:39:11.526214 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Oct 29 20:39:11.526239 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 20:39:11.526264 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 20:39:11.526275 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 20:39:11.526285 kernel: ACPI: FACS 0x000000009CFE0000 000040
Oct 29 20:39:11.526296 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 20:39:11.526307 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 20:39:11.526318 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 20:39:11.526344 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 20:39:11.526367 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Oct 29 20:39:11.526378 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Oct 29 20:39:11.526389 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Oct 29 20:39:11.526401 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Oct 29 20:39:11.526423 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Oct 29 20:39:11.526437 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Oct 29 20:39:11.526463 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Oct 29 20:39:11.526477 kernel: No NUMA configuration found
Oct 29 20:39:11.526488 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Oct 29 20:39:11.526498 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Oct 29 20:39:11.526520 kernel: Zone ranges:
Oct 29 20:39:11.526531 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 29 20:39:11.526542 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Oct 29 20:39:11.526553 kernel: Normal empty
Oct 29 20:39:11.526569 kernel: Device empty
Oct 29 20:39:11.526580 kernel: Movable zone start for each node
Oct 29 20:39:11.526591 kernel: Early memory node ranges
Oct 29 20:39:11.526645 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 29 20:39:11.526674 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Oct 29 20:39:11.526687 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Oct 29 20:39:11.526700 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 29 20:39:11.526714 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 29 20:39:11.526731 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Oct 29 20:39:11.526749 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 29 20:39:11.526763 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 29 20:39:11.526785 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 29 20:39:11.526796 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 29 20:39:11.526810 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 29 20:39:11.526822 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 29 20:39:11.526834 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 29 20:39:11.526846 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 29 20:39:11.526858 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 29 20:39:11.526878 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 29 20:39:11.526889 kernel: TSC deadline timer available
Oct 29 20:39:11.526900 kernel: CPU topo: Max. logical packages: 1
Oct 29 20:39:11.526910 kernel: CPU topo: Max. logical dies: 1
Oct 29 20:39:11.526921 kernel: CPU topo: Max. dies per package: 1
Oct 29 20:39:11.526932 kernel: CPU topo: Max. threads per core: 1
Oct 29 20:39:11.526943 kernel: CPU topo: Num. cores per package: 4
Oct 29 20:39:11.526954 kernel: CPU topo: Num. threads per package: 4
Oct 29 20:39:11.527016 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct 29 20:39:11.527028 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 29 20:39:11.527039 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 29 20:39:11.527077 kernel: kvm-guest: setup PV sched yield
Oct 29 20:39:11.527088 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct 29 20:39:11.527099 kernel: Booting paravirtualized kernel on KVM
Oct 29 20:39:11.527110 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 29 20:39:11.527131 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 29 20:39:11.527142 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Oct 29 20:39:11.527152 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Oct 29 20:39:11.527163 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 29 20:39:11.527179 kernel: kvm-guest: PV spinlocks enabled
Oct 29 20:39:11.527190 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 29 20:39:11.527203 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=44244d247275a195a4f17c05cb1358191d6e164275fceaedb27ff0bc5031a050
Oct 29 20:39:11.527223 kernel: random: crng init done
Oct 29 20:39:11.527234 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 29 20:39:11.527256 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 29 20:39:11.527267 kernel: Fallback order for Node 0: 0
Oct 29 20:39:11.527278 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Oct 29 20:39:11.527289 kernel: Policy zone: DMA32
Oct 29 20:39:11.527300 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 29 20:39:11.527319 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 29 20:39:11.527330 kernel: ftrace: allocating 40092 entries in 157 pages
Oct 29 20:39:11.527341 kernel: ftrace: allocated 157 pages with 5 groups
Oct 29 20:39:11.527351 kernel: Dynamic Preempt: voluntary
Oct 29 20:39:11.527362 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 29 20:39:11.527374 kernel: rcu: RCU event tracing is enabled.
Oct 29 20:39:11.527385 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 29 20:39:11.527396 kernel: Trampoline variant of Tasks RCU enabled.
Oct 29 20:39:11.527419 kernel: Rude variant of Tasks RCU enabled.
Oct 29 20:39:11.527430 kernel: Tracing variant of Tasks RCU enabled.
Oct 29 20:39:11.527441 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 29 20:39:11.527466 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 29 20:39:11.527478 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 29 20:39:11.527489 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 29 20:39:11.527499 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 29 20:39:11.527522 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 29 20:39:11.527534 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 29 20:39:11.527570 kernel: Console: colour VGA+ 80x25
Oct 29 20:39:11.527589 kernel: printk: legacy console [ttyS0] enabled
Oct 29 20:39:11.527601 kernel: ACPI: Core revision 20240827
Oct 29 20:39:11.527612 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 29 20:39:11.527624 kernel: APIC: Switch to symmetric I/O mode setup
Oct 29 20:39:11.527636 kernel: x2apic enabled
Oct 29 20:39:11.527648 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 29 20:39:11.527673 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 29 20:39:11.527686 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 29 20:39:11.527697 kernel: kvm-guest: setup PV IPIs
Oct 29 20:39:11.527708 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 29 20:39:11.527729 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Oct 29 20:39:11.527740 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Oct 29 20:39:11.527752 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 29 20:39:11.527763 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 29 20:39:11.527775 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 29 20:39:11.527787 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 29 20:39:11.527798 kernel: Spectre V2 : Mitigation: Retpolines
Oct 29 20:39:11.527817 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 29 20:39:11.527828 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 29 20:39:11.527839 kernel: active return thunk: retbleed_return_thunk
Oct 29 20:39:11.527850 kernel: RETBleed: Mitigation: untrained return thunk
Oct 29 20:39:11.527870 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 29 20:39:11.527882 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 29 20:39:11.527895 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 29 20:39:11.527919 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 29 20:39:11.527931 kernel: active return thunk: srso_return_thunk
Oct 29 20:39:11.527942 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 29 20:39:11.527954 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 29 20:39:11.527965 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 29 20:39:11.527976 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 29 20:39:11.527987 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 29 20:39:11.528008 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 29 20:39:11.528019 kernel: Freeing SMP alternatives memory: 32K
Oct 29 20:39:11.528031 kernel: pid_max: default: 32768 minimum: 301
Oct 29 20:39:11.528042 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 29 20:39:11.528053 kernel: landlock: Up and running.
Oct 29 20:39:11.528064 kernel: SELinux: Initializing.
Oct 29 20:39:11.528079 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 29 20:39:11.528098 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 29 20:39:11.528109 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 29 20:39:11.528121 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 29 20:39:11.528132 kernel: ... version:                0
Oct 29 20:39:11.528143 kernel: ... bit width:              48
Oct 29 20:39:11.528153 kernel: ... generic registers:      6
Oct 29 20:39:11.528165 kernel: ... value mask:             0000ffffffffffff
Oct 29 20:39:11.528190 kernel: ... max period:             00007fffffffffff
Oct 29 20:39:11.528201 kernel: ... fixed-purpose events:   0
Oct 29 20:39:11.528212 kernel: ... event mask:             000000000000003f
Oct 29 20:39:11.528223 kernel: signal: max sigframe size: 1776
Oct 29 20:39:11.528234 kernel: rcu: Hierarchical SRCU implementation.
Oct 29 20:39:11.528255 kernel: rcu: Max phase no-delay instances is 400.
Oct 29 20:39:11.528267 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 29 20:39:11.528285 kernel: smp: Bringing up secondary CPUs ...
Oct 29 20:39:11.528297 kernel: smpboot: x86: Booting SMP configuration:
Oct 29 20:39:11.528308 kernel: .... node #0, CPUs: #1 #2 #3
Oct 29 20:39:11.528319 kernel: smp: Brought up 1 node, 4 CPUs
Oct 29 20:39:11.528330 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Oct 29 20:39:11.528342 kernel: Memory: 2447344K/2571752K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15936K init, 2108K bss, 118472K reserved, 0K cma-reserved)
Oct 29 20:39:11.528388 kernel: devtmpfs: initialized
Oct 29 20:39:11.528409 kernel: x86/mm: Memory block size: 128MB
Oct 29 20:39:11.528421 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 29 20:39:11.528432 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 29 20:39:11.528444 kernel: pinctrl core: initialized pinctrl subsystem
Oct 29 20:39:11.528472 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 29 20:39:11.528483 kernel: audit: initializing netlink subsys (disabled)
Oct 29 20:39:11.528494 kernel: audit: type=2000 audit(1761770348.141:1): state=initialized audit_enabled=0 res=1
Oct 29 20:39:11.528516 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 29 20:39:11.528527 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 29 20:39:11.528539 kernel: cpuidle: using governor menu
Oct 29 20:39:11.528549 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 29 20:39:11.528560 kernel: dca service started, version 1.12.1
Oct 29 20:39:11.528571 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Oct 29 20:39:11.528583 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct 29 20:39:11.528603 kernel: PCI: Using configuration type 1 for base access
Oct 29 20:39:11.528614 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 29 20:39:11.528626 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 29 20:39:11.528638 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 29 20:39:11.528649 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 29 20:39:11.528661 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 29 20:39:11.528673 kernel: ACPI: Added _OSI(Module Device)
Oct 29 20:39:11.528687 kernel: ACPI: Added _OSI(Processor Device)
Oct 29 20:39:11.528734 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 29 20:39:11.528745 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 29 20:39:11.528756 kernel: ACPI: Interpreter enabled
Oct 29 20:39:11.528766 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 29 20:39:11.528777 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 29 20:39:11.528788 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 29 20:39:11.528799 kernel: PCI: Using E820 reservations for host bridge windows
Oct 29 20:39:11.528819 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 29 20:39:11.528830 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 29 20:39:11.529155 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 29 20:39:11.529388 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 29 20:39:11.529617 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 29 20:39:11.529734 kernel: PCI host bridge to bus 0000:00
Oct 29 20:39:11.529947 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 29 20:39:11.530174 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 29 20:39:11.530517 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 29 20:39:11.530721 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Oct 29 20:39:11.530910 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 29 20:39:11.531190 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Oct 29 20:39:11.531397 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 29 20:39:11.531720 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct 29 20:39:11.531991 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Oct 29 20:39:11.532241 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Oct 29 20:39:11.532523 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Oct 29 20:39:11.532743 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Oct 29 20:39:11.532951 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 29 20:39:11.533179 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 29 20:39:11.533406 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Oct 29 20:39:11.533683 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Oct 29 20:39:11.533952 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Oct 29 20:39:11.534188 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 29 20:39:11.534424 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Oct 29 20:39:11.534687 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Oct 29 20:39:11.534920 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Oct 29 20:39:11.535199 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 29 20:39:11.535569 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Oct 29 20:39:11.535782 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Oct 29 20:39:11.535987 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Oct 29 20:39:11.536194 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Oct 29 20:39:11.536426 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct 29 20:39:11.536662 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 29 20:39:11.536939 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct 29 20:39:11.537155 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Oct 29 20:39:11.537414 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Oct 29 20:39:11.537654 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct 29 20:39:11.537866 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Oct 29 20:39:11.537931 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 29 20:39:11.537944 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 29 20:39:11.537956 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 29 20:39:11.537968 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 29 20:39:11.537980 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 29 20:39:11.537992 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 29 20:39:11.538004 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 29 20:39:11.538094 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 29 20:39:11.538107 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 29 20:39:11.538119 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 29 20:39:11.538131 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 29 20:39:11.538144 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 29 20:39:11.538156 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 29 20:39:11.538169 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 29 20:39:11.538191 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 29 20:39:11.538204 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 29 20:39:11.538216 kernel: iommu: Default domain type: Translated
Oct 29 20:39:11.538227 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 29 20:39:11.538239 kernel: PCI: Using ACPI for IRQ routing
Oct 29 20:39:11.538266 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 29 20:39:11.538278 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 29 20:39:11.538302 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Oct 29 20:39:11.538590 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 29 20:39:11.538812 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 29 20:39:11.539050 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 29 20:39:11.539067 kernel: vgaarb: loaded
Oct 29 20:39:11.539080 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 29 20:39:11.539093 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 29 20:39:11.539120 kernel: clocksource: Switched to clocksource kvm-clock
Oct 29 20:39:11.539133 kernel: VFS: Disk quotas dquot_6.6.0
Oct 29 20:39:11.539145 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 29 20:39:11.539158 kernel: pnp: PnP ACPI init
Oct 29 20:39:11.539382 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 29 20:39:11.539398 kernel: pnp: PnP ACPI: found 6 devices
Oct 29 20:39:11.539410 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 29 20:39:11.539433 kernel: NET: Registered PF_INET protocol family
Oct 29 20:39:11.539445 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 29 20:39:11.539474 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 29 20:39:11.539485 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 29 20:39:11.539496 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 29 20:39:11.539507 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 29 20:39:11.539528 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 29 20:39:11.539539 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 29 20:39:11.539550 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 29 20:39:11.539561 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 29 20:39:11.539571 kernel: NET: Registered PF_XDP protocol family
Oct 29 20:39:11.539766 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 29 20:39:11.539958 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 29 20:39:11.540168 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 29 20:39:11.540409 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Oct 29 20:39:11.540609 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 29 20:39:11.540820 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Oct 29 20:39:11.540837 kernel: PCI: CLS 0 bytes, default 64
Oct 29 20:39:11.540849 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Oct 29 20:39:11.540860 kernel: Initialise system trusted keyrings
Oct 29 20:39:11.540924 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 29 20:39:11.540936 kernel: Key type asymmetric registered
Oct 29 20:39:11.540948 kernel: Asymmetric key parser 'x509' registered
Oct 29 20:39:11.540959 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 29 20:39:11.540971 kernel: io scheduler mq-deadline registered
Oct 29 20:39:11.540982 kernel: io scheduler kyber registered
Oct 29 20:39:11.540994 kernel: io scheduler bfq registered
Oct 29 20:39:11.541015 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 29 20:39:11.541028 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 29 20:39:11.541039 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 29 20:39:11.541051 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 29 20:39:11.541062 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 29 20:39:11.541073 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 29 20:39:11.541085 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 29 20:39:11.541107 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 29 20:39:11.541119 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 29 20:39:11.541348 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 29 20:39:11.541368 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 29 20:39:11.541599 kernel: rtc_cmos 00:04: registered as rtc0
Oct 29 20:39:11.541807 kernel: rtc_cmos 00:04: setting system clock to 2025-10-29T20:39:09 UTC (1761770349)
Oct 29 20:39:11.542016 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Oct 29 20:39:11.542131 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 29 20:39:11.542143 kernel: NET: Registered PF_INET6 protocol family
Oct 29 20:39:11.542155 kernel: Segment Routing with IPv6
Oct 29 20:39:11.542166 kernel: In-situ OAM (IOAM) with IPv6
Oct 29 20:39:11.542179 kernel: NET: Registered PF_PACKET protocol family
Oct 29 20:39:11.542191 kernel: Key type dns_resolver registered
Oct 29 20:39:11.542202 kernel: IPI shorthand broadcast: enabled
Oct 29 20:39:11.542228 kernel: sched_clock: Marking stable (1320002461, 280957463)->(1657497679, -56537755)
Oct 29 20:39:11.542240 kernel: registered taskstats version 1
Oct 29 20:39:11.542264 kernel: Loading compiled-in X.509 certificates
Oct 29 20:39:11.542276 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 2bb7a9eacfaffd4e5c22f4ef53d9af4a7e6e6c84'
Oct 29 20:39:11.542288 kernel: Demotion targets for Node 0: null
Oct 29 20:39:11.542300 kernel: Key type .fscrypt registered
Oct 29 20:39:11.542312 kernel: Key type fscrypt-provisioning registered
Oct 29 20:39:11.542336 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 29 20:39:11.542379 kernel: ima: Allocated hash algorithm: sha1
Oct 29 20:39:11.542396 kernel: ima: No architecture policies found
Oct 29 20:39:11.542408 kernel: clk: Disabling unused clocks
Oct 29 20:39:11.542419 kernel: Freeing unused kernel image (initmem) memory: 15936K
Oct 29 20:39:11.542441 kernel: Write protecting the kernel read-only data: 45056k
Oct 29 20:39:11.542473 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K
Oct 29 20:39:11.542497 kernel: Run /init as init process
Oct 29 20:39:11.542508 kernel: with arguments:
Oct 29 20:39:11.542519 kernel: /init
Oct 29 20:39:11.542530 kernel: with environment:
Oct 29 20:39:11.542541 kernel: HOME=/
Oct 29 20:39:11.542552 kernel: TERM=linux
Oct 29 20:39:11.542563 kernel: SCSI subsystem initialized
Oct 29 20:39:11.542575 kernel: libata version 3.00 loaded.
Oct 29 20:39:11.542809 kernel: ahci 0000:00:1f.2: version 3.0 Oct 29 20:39:11.542913 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 29 20:39:11.543202 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 29 20:39:11.543470 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 29 20:39:11.543741 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 29 20:39:11.544027 kernel: scsi host0: ahci Oct 29 20:39:11.544302 kernel: scsi host1: ahci Oct 29 20:39:11.544554 kernel: scsi host2: ahci Oct 29 20:39:11.544822 kernel: scsi host3: ahci Oct 29 20:39:11.545062 kernel: scsi host4: ahci Oct 29 20:39:11.545596 kernel: scsi host5: ahci Oct 29 20:39:11.545619 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Oct 29 20:39:11.545633 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Oct 29 20:39:11.545645 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Oct 29 20:39:11.545657 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Oct 29 20:39:11.545670 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Oct 29 20:39:11.545741 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Oct 29 20:39:11.545787 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 29 20:39:11.545832 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 29 20:39:11.545850 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 29 20:39:11.545875 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 29 20:39:11.545888 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 29 20:39:11.545900 kernel: ata3.00: LPM support broken, forcing max_power Oct 29 20:39:11.545913 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 29 20:39:11.545937 kernel: ata3.00: applying bridge limits Oct 29 20:39:11.545950 kernel: ata2: 
SATA link down (SStatus 0 SControl 300) Oct 29 20:39:11.545963 kernel: ata3.00: LPM support broken, forcing max_power Oct 29 20:39:11.545975 kernel: ata3.00: configured for UDMA/100 Oct 29 20:39:11.546271 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 29 20:39:11.546528 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 29 20:39:11.546750 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 29 20:39:11.546768 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 29 20:39:11.546780 kernel: GPT:16515071 != 27000831 Oct 29 20:39:11.546792 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 29 20:39:11.546803 kernel: GPT:16515071 != 27000831 Oct 29 20:39:11.546814 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 29 20:39:11.546826 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 29 20:39:11.547073 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 29 20:39:11.547092 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 29 20:39:11.547328 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 29 20:39:11.547346 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 29 20:39:11.547359 kernel: device-mapper: uevent: version 1.0.3 Oct 29 20:39:11.547371 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 29 20:39:11.547384 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 29 20:39:11.547411 kernel: raid6: avx2x4 gen() 23310 MB/s Oct 29 20:39:11.547424 kernel: raid6: avx2x2 gen() 28130 MB/s Oct 29 20:39:11.547437 kernel: raid6: avx2x1 gen() 24184 MB/s Oct 29 20:39:11.547464 kernel: raid6: using algorithm avx2x2 gen() 28130 MB/s Oct 29 20:39:11.547489 kernel: raid6: .... 
xor() 18448 MB/s, rmw enabled Oct 29 20:39:11.547502 kernel: raid6: using avx2x2 recovery algorithm Oct 29 20:39:11.547515 kernel: xor: automatically using best checksumming function avx Oct 29 20:39:11.547527 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 29 20:39:11.547540 kernel: BTRFS: device fsid a1caae43-5435-4dbf-a605-d80b95ba86fc devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (182) Oct 29 20:39:11.547553 kernel: BTRFS info (device dm-0): first mount of filesystem a1caae43-5435-4dbf-a605-d80b95ba86fc Oct 29 20:39:11.547808 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 29 20:39:11.547832 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 29 20:39:11.547845 kernel: BTRFS info (device dm-0): enabling free space tree Oct 29 20:39:11.547856 kernel: loop: module loaded Oct 29 20:39:11.547869 kernel: loop0: detected capacity change from 0 to 100136 Oct 29 20:39:11.547881 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 29 20:39:11.547896 systemd[1]: Successfully made /usr/ read-only. Oct 29 20:39:11.547924 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 29 20:39:11.547949 systemd[1]: Detected virtualization kvm. Oct 29 20:39:11.547962 systemd[1]: Detected architecture x86-64. Oct 29 20:39:11.547975 systemd[1]: Running in initrd. Oct 29 20:39:11.547988 systemd[1]: No hostname configured, using default hostname. Oct 29 20:39:11.548002 systemd[1]: Hostname set to . Oct 29 20:39:11.548015 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 29 20:39:11.548038 systemd[1]: Queued start job for default target initrd.target. 
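The virtio_blk probe above reports a 27000832-sector disk whose GPT backup header sits at LBA 16515071 rather than on the last sector, which is why the kernel prints `GPT:16515071 != 27000831` — typical of a disk image grown after partitioning. The arithmetic behind that complaint, and the `13.8 GB/12.9 GiB` size line, can be checked directly (a sketch using only the figures from the log):

```python
SECTOR = 512
total_sectors = 27_000_832           # /dev/vda size reported by virtio_blk above
alt_header_lba = 16_515_071          # where the GPT claims its backup header lives

expected_alt = total_sectors - 1     # a backup GPT header belongs on the last LBA
print(expected_alt)                  # 27000831, hence "GPT:16515071 != 27000831"

size_bytes = total_sectors * SECTOR
print(round(size_bytes / 1e9, 1))    # 13.8  (decimal GB, as the kernel prints)
print(round(size_bytes / 2**30, 1))  # 12.9  (binary GiB)
```

The disk-uuid service later in this log rewrites the table to resolve exactly this mismatch.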
Oct 29 20:39:11.548052 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 29 20:39:11.548066 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 29 20:39:11.548079 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 29 20:39:11.548094 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 29 20:39:11.548108 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 29 20:39:11.548132 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 29 20:39:11.548146 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 29 20:39:11.548159 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 29 20:39:11.548173 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 29 20:39:11.548186 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 29 20:39:11.548199 systemd[1]: Reached target paths.target - Path Units. Oct 29 20:39:11.548223 systemd[1]: Reached target slices.target - Slice Units. Oct 29 20:39:11.548237 systemd[1]: Reached target swap.target - Swaps. Oct 29 20:39:11.548260 systemd[1]: Reached target timers.target - Timer Units. Oct 29 20:39:11.548273 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 29 20:39:11.548286 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 29 20:39:11.548309 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 29 20:39:11.548323 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 29 20:39:11.548345 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
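The `dev-disk-by\x2dlabel-…` unit names above are systemd's escaping of udev device paths into unit names: `/` becomes `-` and other special characters become `\xNN`. A simplified sketch of that escaping (not systemd's full algorithm, which also handles leading dots and empty paths):

```python
def systemd_escape_path(path: str) -> str:
    """Escape a filesystem path into a systemd unit-name stem (simplified)."""
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")                  # path separators become dashes
        elif ch.isalnum() or ch in ":_.":
            out.append(ch)                   # safe characters pass through
        else:
            out.append("\\x%02x" % ord(ch))  # everything else -> \xNN
    return "".join(out)

print(systemd_escape_path("/dev/disk/by-label/ROOT") + ".device")
# dev-disk-by\x2dlabel-ROOT.device
```

The real tool for this is `systemd-escape --path`; the dashes inside `by-label` are what force the `\x2d` escapes seen in the log.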
Oct 29 20:39:11.548358 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 29 20:39:11.548372 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 29 20:39:11.548385 systemd[1]: Reached target sockets.target - Socket Units. Oct 29 20:39:11.548399 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 29 20:39:11.548412 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 29 20:39:11.548425 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 29 20:39:11.548449 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 29 20:39:11.548480 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 29 20:39:11.548492 systemd[1]: Starting systemd-fsck-usr.service... Oct 29 20:39:11.548505 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 29 20:39:11.548517 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 29 20:39:11.548529 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 20:39:11.548554 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 29 20:39:11.548568 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 29 20:39:11.548580 systemd[1]: Finished systemd-fsck-usr.service. Oct 29 20:39:11.548593 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 29 20:39:11.548680 systemd-journald[319]: Collecting audit messages is disabled. Oct 29 20:39:11.548713 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Oct 29 20:39:11.548730 systemd-journald[319]: Journal started Oct 29 20:39:11.548914 systemd-journald[319]: Runtime Journal (/run/log/journal/b8172b06d8064a8e9c16dcdf07f33fba) is 6M, max 48.2M, 42.2M free. Oct 29 20:39:11.554479 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 29 20:39:11.554737 systemd[1]: Started systemd-journald.service - Journal Service. Oct 29 20:39:11.563491 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 29 20:39:11.567473 kernel: Bridge firewalling registered Oct 29 20:39:11.567500 systemd-modules-load[321]: Inserted module 'br_netfilter' Oct 29 20:39:11.570663 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 29 20:39:11.572762 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 29 20:39:11.592934 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 29 20:39:11.601571 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 29 20:39:11.608184 systemd-tmpfiles[334]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 29 20:39:11.684747 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 20:39:11.688998 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 29 20:39:11.691334 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 29 20:39:11.708831 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 29 20:39:11.710843 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 29 20:39:11.718737 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Oct 29 20:39:11.725223 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 29 20:39:11.749698 dracut-cmdline[358]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=44244d247275a195a4f17c05cb1358191d6e164275fceaedb27ff0bc5031a050 Oct 29 20:39:11.795358 systemd-resolved[351]: Positive Trust Anchors: Oct 29 20:39:11.795379 systemd-resolved[351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 29 20:39:11.795385 systemd-resolved[351]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 29 20:39:11.795427 systemd-resolved[351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 29 20:39:11.835759 systemd-resolved[351]: Defaulting to hostname 'linux'. Oct 29 20:39:11.837642 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 29 20:39:11.841375 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 29 20:39:11.941527 kernel: Loading iSCSI transport class v2.0-870. 
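The dracut-cmdline hook above echoes the kernel command line it will act on. Consumers treat it as whitespace-separated `key=value` tokens, splitting on the first `=` only, so `root=LABEL=ROOT` keeps its embedded `=`. A minimal parser sketch (illustrative, not dracut's actual implementation):

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into {key: value}; bare flags map to ''.
    Splitting on the first '=' only preserves values like LABEL=ROOT."""
    args = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")
        args[key] = value          # repeated keys keep the last value
    return args

cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
           "console=ttyS0,115200 flatcar.first_boot=detected")
args = parse_cmdline(cmdline)
print(args["root"])      # LABEL=ROOT
print(args["console"])   # ttyS0,115200
```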
Oct 29 20:39:11.956490 kernel: iscsi: registered transport (tcp) Oct 29 20:39:12.049516 kernel: iscsi: registered transport (qla4xxx) Oct 29 20:39:12.049643 kernel: QLogic iSCSI HBA Driver Oct 29 20:39:12.086555 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 29 20:39:12.126161 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 29 20:39:12.127694 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 29 20:39:12.207903 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 29 20:39:12.211534 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 29 20:39:12.214066 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 29 20:39:12.251142 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 29 20:39:12.252971 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 29 20:39:12.286820 systemd-udevd[592]: Using default interface naming scheme 'v257'. Oct 29 20:39:12.300675 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 29 20:39:12.306402 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 29 20:39:12.339926 dracut-pre-trigger[650]: rd.md=0: removing MD RAID activation Oct 29 20:39:12.362133 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 29 20:39:12.366487 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 29 20:39:12.383472 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 29 20:39:12.387318 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Oct 29 20:39:12.433127 systemd-networkd[724]: lo: Link UP Oct 29 20:39:12.433139 systemd-networkd[724]: lo: Gained carrier Oct 29 20:39:12.434855 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 29 20:39:12.439705 systemd[1]: Reached target network.target - Network. Oct 29 20:39:12.496225 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 29 20:39:12.501993 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 29 20:39:12.585429 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 29 20:39:12.611970 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 29 20:39:12.628483 kernel: cryptd: max_cpu_qlen set to 1000 Oct 29 20:39:12.642332 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 29 20:39:12.652439 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 29 20:39:12.790617 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 29 20:39:12.787672 systemd-networkd[724]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 29 20:39:12.787686 systemd-networkd[724]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 29 20:39:12.787745 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 29 20:39:12.814279 kernel: AES CTR mode by8 optimization enabled Oct 29 20:39:12.790611 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 29 20:39:12.790680 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Oct 29 20:39:12.791041 systemd-networkd[724]: eth0: Link UP Oct 29 20:39:12.791314 systemd-networkd[724]: eth0: Gained carrier Oct 29 20:39:12.791330 systemd-networkd[724]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 29 20:39:12.796357 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 20:39:12.813555 systemd-networkd[724]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 29 20:39:12.814633 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 20:39:12.833701 disk-uuid[825]: Primary Header is updated. Oct 29 20:39:12.833701 disk-uuid[825]: Secondary Entries is updated. Oct 29 20:39:12.833701 disk-uuid[825]: Secondary Header is updated. Oct 29 20:39:12.920508 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 29 20:39:12.947974 systemd-resolved[351]: Detected conflict on linux IN A 10.0.0.139 Oct 29 20:39:12.947995 systemd-resolved[351]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. Oct 29 20:39:12.951376 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 20:39:12.976122 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 29 20:39:12.978587 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 29 20:39:12.980605 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 29 20:39:12.984753 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 29 20:39:13.024005 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 29 20:39:13.961025 disk-uuid[834]: Warning: The kernel is still using the old partition table. 
Oct 29 20:39:13.961025 disk-uuid[834]: The new table will be used at the next reboot or after you Oct 29 20:39:13.961025 disk-uuid[834]: run partprobe(8) or kpartx(8) Oct 29 20:39:13.961025 disk-uuid[834]: The operation has completed successfully. Oct 29 20:39:13.971010 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 29 20:39:13.971161 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 29 20:39:13.975119 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 29 20:39:14.021970 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (864) Oct 29 20:39:14.022052 kernel: BTRFS info (device vda6): first mount of filesystem 768d7b05-6539-4ab1-a5a3-7a6adc492890 Oct 29 20:39:14.022069 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 29 20:39:14.027571 kernel: BTRFS info (device vda6): turning on async discard Oct 29 20:39:14.027650 kernel: BTRFS info (device vda6): enabling free space tree Oct 29 20:39:14.036514 kernel: BTRFS info (device vda6): last unmount of filesystem 768d7b05-6539-4ab1-a5a3-7a6adc492890 Oct 29 20:39:14.037395 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 29 20:39:14.041921 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
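A few entries earlier, systemd-networkd logged a DHCPv4 lease of 10.0.0.139/16 with gateway 10.0.0.1. Python's standard `ipaddress` module reproduces what that prefix implies (a sketch on the logged values only):

```python
import ipaddress

# The lease logged by systemd-networkd: 10.0.0.139/16, gateway 10.0.0.1.
iface = ipaddress.ip_interface("10.0.0.139/16")
print(iface.network)                 # 10.0.0.0/16
print(iface.network.num_addresses)   # 65536 addresses in a /16
print(ipaddress.ip_address("10.0.0.1") in iface.network)  # True: gateway is on-link
```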
Oct 29 20:39:14.240114 systemd-networkd[724]: eth0: Gained IPv6LL Oct 29 20:39:14.381468 ignition[883]: Ignition 2.22.0 Oct 29 20:39:14.381488 ignition[883]: Stage: fetch-offline Oct 29 20:39:14.381601 ignition[883]: no configs at "/usr/lib/ignition/base.d" Oct 29 20:39:14.381618 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 20:39:14.381792 ignition[883]: parsed url from cmdline: "" Oct 29 20:39:14.381816 ignition[883]: no config URL provided Oct 29 20:39:14.381825 ignition[883]: reading system config file "/usr/lib/ignition/user.ign" Oct 29 20:39:14.381840 ignition[883]: no config at "/usr/lib/ignition/user.ign" Oct 29 20:39:14.381892 ignition[883]: op(1): [started] loading QEMU firmware config module Oct 29 20:39:14.381897 ignition[883]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 29 20:39:14.399009 ignition[883]: op(1): [finished] loading QEMU firmware config module Oct 29 20:39:14.478670 ignition[883]: parsing config with SHA512: 09277d331eb0ecd6a7c002b18735889b40fec248be936b1ec1b32105cd5349fac0d63196a841497d945491a1f0ad28f166534a50b74b18d3c5a7fb951c5b4b34 Oct 29 20:39:14.487424 unknown[883]: fetched base config from "system" Oct 29 20:39:14.487885 ignition[883]: fetch-offline: fetch-offline passed Oct 29 20:39:14.487438 unknown[883]: fetched user config from "qemu" Oct 29 20:39:14.487959 ignition[883]: Ignition finished successfully Oct 29 20:39:14.491368 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 29 20:39:14.506144 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 29 20:39:14.507184 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
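The fetch-offline stage above logs `parsing config with SHA512:` followed by a 128-hex-character digest — the SHA-512 of the exact config bytes Ignition fetched. The same shape of digest can be produced with standard hashing (the config here is a made-up minimal example, so its digest naturally differs from the one in the log):

```python
import hashlib
import json

# Hypothetical minimal Ignition-style config; Ignition hashes the raw bytes of
# whatever config it actually fetched, so only the digest *shape* matches.
config = json.dumps({"ignition": {"version": "3.4.0"}}, separators=(",", ":"))
digest = hashlib.sha512(config.encode()).hexdigest()
print(len(digest))   # 128 hex characters, like the digest logged above
```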
Oct 29 20:39:14.573859 ignition[893]: Ignition 2.22.0 Oct 29 20:39:14.573876 ignition[893]: Stage: kargs Oct 29 20:39:14.574041 ignition[893]: no configs at "/usr/lib/ignition/base.d" Oct 29 20:39:14.574052 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 20:39:14.574764 ignition[893]: kargs: kargs passed Oct 29 20:39:14.574810 ignition[893]: Ignition finished successfully Oct 29 20:39:14.583246 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 29 20:39:14.586424 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 29 20:39:14.688102 ignition[901]: Ignition 2.22.0 Oct 29 20:39:14.688128 ignition[901]: Stage: disks Oct 29 20:39:14.688441 ignition[901]: no configs at "/usr/lib/ignition/base.d" Oct 29 20:39:14.688483 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 20:39:14.689525 ignition[901]: disks: disks passed Oct 29 20:39:14.689590 ignition[901]: Ignition finished successfully Oct 29 20:39:14.698348 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 29 20:39:14.701785 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 29 20:39:14.701889 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 29 20:39:14.709442 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 29 20:39:14.712644 systemd[1]: Reached target sysinit.target - System Initialization. Oct 29 20:39:14.712736 systemd[1]: Reached target basic.target - Basic System. Oct 29 20:39:14.717186 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 29 20:39:14.765371 systemd-fsck[911]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 29 20:39:14.951045 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 29 20:39:14.952236 systemd[1]: Mounting sysroot.mount - /sysroot... 
Oct 29 20:39:15.078498 kernel: EXT4-fs (vda9): mounted filesystem d23c4f8a-0a6a-46ea-b01b-8499e8245381 r/w with ordered data mode. Quota mode: none. Oct 29 20:39:15.078916 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 29 20:39:15.080988 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 29 20:39:15.085550 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 29 20:39:15.090380 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 29 20:39:15.090753 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 29 20:39:15.090788 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 29 20:39:15.090812 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 29 20:39:15.116197 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 29 20:39:15.119247 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 29 20:39:15.132401 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (919) Oct 29 20:39:15.132433 kernel: BTRFS info (device vda6): first mount of filesystem 768d7b05-6539-4ab1-a5a3-7a6adc492890 Oct 29 20:39:15.132534 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 29 20:39:15.132551 kernel: BTRFS info (device vda6): turning on async discard Oct 29 20:39:15.132565 kernel: BTRFS info (device vda6): enabling free space tree Oct 29 20:39:15.133954 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 29 20:39:15.189056 initrd-setup-root[943]: cut: /sysroot/etc/passwd: No such file or directory Oct 29 20:39:15.193705 initrd-setup-root[950]: cut: /sysroot/etc/group: No such file or directory Oct 29 20:39:15.199022 initrd-setup-root[957]: cut: /sysroot/etc/shadow: No such file or directory Oct 29 20:39:15.204692 initrd-setup-root[964]: cut: /sysroot/etc/gshadow: No such file or directory Oct 29 20:39:15.304899 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 29 20:39:15.306696 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 29 20:39:15.312710 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 29 20:39:15.337659 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 29 20:39:15.340305 kernel: BTRFS info (device vda6): last unmount of filesystem 768d7b05-6539-4ab1-a5a3-7a6adc492890 Oct 29 20:39:15.358680 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 29 20:39:15.602553 ignition[1033]: INFO : Ignition 2.22.0 Oct 29 20:39:15.602553 ignition[1033]: INFO : Stage: mount Oct 29 20:39:15.605777 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 20:39:15.605777 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 20:39:15.605777 ignition[1033]: INFO : mount: mount passed Oct 29 20:39:15.605777 ignition[1033]: INFO : Ignition finished successfully Oct 29 20:39:15.615221 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 29 20:39:15.621731 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 29 20:39:16.080978 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
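The `cut: /sysroot/etc/passwd: No such file or directory` complaints above come from initrd-setup-root extracting account fields from files that do not exist yet on the freshly created root filesystem. The operation it attempts is `cut`'s colon-delimited field split; the equivalent in Python, with an illustrative passwd line (not one taken from this system):

```python
# Illustrative /etc/passwd entry; the real file was absent, hence cut's error.
line = "core:x:500:500:CoreOS Admin:/home/core:/bin/bash"
fields = line.split(":")   # what `cut -d: -fN` indexes into (1-based in cut)
print(fields[0])           # core        (field 1: username)
print(fields[5])           # /home/core  (field 6: home directory)
```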
Oct 29 20:39:16.121491 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1045) Oct 29 20:39:16.124660 kernel: BTRFS info (device vda6): first mount of filesystem 768d7b05-6539-4ab1-a5a3-7a6adc492890 Oct 29 20:39:16.124688 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 29 20:39:16.128678 kernel: BTRFS info (device vda6): turning on async discard Oct 29 20:39:16.128830 kernel: BTRFS info (device vda6): enabling free space tree Oct 29 20:39:16.130642 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 29 20:39:16.185051 ignition[1062]: INFO : Ignition 2.22.0 Oct 29 20:39:16.185051 ignition[1062]: INFO : Stage: files Oct 29 20:39:16.192750 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 20:39:16.192750 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 20:39:16.192750 ignition[1062]: DEBUG : files: compiled without relabeling support, skipping Oct 29 20:39:16.192750 ignition[1062]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 29 20:39:16.192750 ignition[1062]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 29 20:39:16.204039 ignition[1062]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 29 20:39:16.204039 ignition[1062]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 29 20:39:16.204039 ignition[1062]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 29 20:39:16.204039 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 29 20:39:16.204039 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 29 20:39:16.195593 unknown[1062]: wrote ssh authorized keys file for user: core Oct 
29 20:39:31.484789 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET error: Get "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz": net/http: TLS handshake timeout Oct 29 20:39:31.685203 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #2 Oct 29 20:39:41.402757 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 29 20:39:54.512087 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 29 20:39:54.512087 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 29 20:39:54.519403 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 29 20:39:54.519403 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 29 20:39:54.519403 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 29 20:39:54.519403 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 29 20:39:54.519403 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 29 20:39:54.519403 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 29 20:39:54.537611 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 29 20:39:54.540931 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file 
"/sysroot/etc/flatcar/update.conf" Oct 29 20:39:54.544679 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 29 20:39:54.544679 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 29 20:39:54.544679 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 29 20:39:54.544679 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 29 20:39:54.560164 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Oct 29 20:39:54.969196 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 29 20:39:55.477878 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 29 20:39:55.477878 ignition[1062]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 29 20:39:55.484382 ignition[1062]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 29 20:39:55.484382 ignition[1062]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 29 20:39:55.484382 ignition[1062]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 29 20:39:55.484382 ignition[1062]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 29 20:39:55.484382 
ignition[1062]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 29 20:39:55.484382 ignition[1062]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 29 20:39:55.484382 ignition[1062]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 29 20:39:55.484382 ignition[1062]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 29 20:39:55.513828 ignition[1062]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 29 20:39:55.518223 ignition[1062]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 29 20:39:55.520894 ignition[1062]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 29 20:39:55.520894 ignition[1062]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 29 20:39:55.520894 ignition[1062]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 29 20:39:55.520894 ignition[1062]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 29 20:39:55.520894 ignition[1062]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 29 20:39:55.520894 ignition[1062]: INFO : files: files passed Oct 29 20:39:55.520894 ignition[1062]: INFO : Ignition finished successfully Oct 29 20:39:55.527430 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 29 20:39:55.535976 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 29 20:39:55.540346 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
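The `setting preset to enabled/disabled` lines above follow systemd's preset semantics: rules are evaluated top to bottom and the first glob matching a unit wins. A first-match sketch (the rule list is hypothetical; only the `prepare-helm.service` enable and `coreos-metadata.service` disable are attested in the log):

```python
from fnmatch import fnmatch

# Hypothetical preset rules; real ones live in /usr/lib/systemd/system-preset/.
PRESETS = [
    ("disable", "coreos-metadata.service"),
    ("enable", "prepare-helm.service"),
    ("disable", "*"),            # a common catch-all final rule
]

def preset_action(unit: str) -> str:
    """Return the action of the first matching rule, as in systemd.preset(5)."""
    for action, pattern in PRESETS:
        if fnmatch(unit, pattern):
            return action
    return "enable"              # systemd's default when no rule matches

print(preset_action("prepare-helm.service"))     # enable
print(preset_action("coreos-metadata.service"))  # disable
```

Because matching stops at the first hit, the specific rules must precede the `*` catch-all — reordering them would change the outcome.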
Oct 29 20:39:55.559781 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 29 20:39:55.559952 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 29 20:39:55.568134 initrd-setup-root-after-ignition[1093]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 29 20:39:55.573922 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 29 20:39:55.573922 initrd-setup-root-after-ignition[1095]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 29 20:39:55.579002 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 29 20:39:55.583306 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 29 20:39:55.583622 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 29 20:39:55.591090 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 29 20:39:55.669824 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 29 20:39:55.670023 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 29 20:39:55.672043 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 29 20:39:55.677566 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 29 20:39:55.681446 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 29 20:39:55.682676 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 29 20:39:55.727762 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 29 20:39:55.732347 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 29 20:39:55.758660 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 29 20:39:55.758890 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 29 20:39:55.762851 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 29 20:39:55.764716 systemd[1]: Stopped target timers.target - Timer Units.
Oct 29 20:39:55.768322 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 29 20:39:55.768479 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 29 20:39:55.774378 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 29 20:39:55.776275 systemd[1]: Stopped target basic.target - Basic System.
Oct 29 20:39:55.776808 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 29 20:39:55.782225 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 29 20:39:55.787502 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 29 20:39:55.788116 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 29 20:39:55.788964 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 29 20:39:55.796053 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 29 20:39:55.801175 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 29 20:39:55.803237 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 29 20:39:55.806484 systemd[1]: Stopped target swap.target - Swaps.
Oct 29 20:39:55.810962 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 29 20:39:55.811118 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 29 20:39:55.817194 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 29 20:39:55.817360 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 29 20:39:55.820684 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 29 20:39:55.820815 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 29 20:39:55.824477 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 29 20:39:55.824602 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 29 20:39:55.831764 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 29 20:39:55.831881 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 29 20:39:55.833387 systemd[1]: Stopped target paths.target - Path Units.
Oct 29 20:39:55.838045 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 29 20:39:55.838205 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 29 20:39:55.839788 systemd[1]: Stopped target slices.target - Slice Units.
Oct 29 20:39:55.844896 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 29 20:39:55.846343 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 29 20:39:55.846441 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 29 20:39:55.851021 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 29 20:39:55.851106 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 29 20:39:55.853478 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 29 20:39:55.853598 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 29 20:39:55.856361 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 29 20:39:55.856489 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 29 20:39:55.862760 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 29 20:39:55.865514 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 29 20:39:55.865652 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 29 20:39:55.890321 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 29 20:39:55.892034 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 29 20:39:55.892281 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 29 20:39:55.895671 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 29 20:39:55.895897 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 29 20:39:55.899375 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 29 20:39:55.899551 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 29 20:39:55.919432 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 29 20:39:55.919586 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 29 20:39:55.923115 ignition[1119]: INFO : Ignition 2.22.0
Oct 29 20:39:55.923115 ignition[1119]: INFO : Stage: umount
Oct 29 20:39:55.923115 ignition[1119]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 29 20:39:55.923115 ignition[1119]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 20:39:55.923115 ignition[1119]: INFO : umount: umount passed
Oct 29 20:39:55.923115 ignition[1119]: INFO : Ignition finished successfully
Oct 29 20:39:55.925635 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 29 20:39:55.925746 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 29 20:39:55.930998 systemd[1]: Stopped target network.target - Network.
Oct 29 20:39:55.931188 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 29 20:39:55.931250 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 29 20:39:55.934563 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 29 20:39:55.934629 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 29 20:39:55.935100 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 29 20:39:55.935150 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 29 20:39:55.936298 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 29 20:39:55.936346 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 29 20:39:55.945207 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 29 20:39:55.949392 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 29 20:39:55.954808 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 29 20:39:55.955641 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 29 20:39:55.955762 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 29 20:39:55.959912 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 29 20:39:55.960130 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 29 20:39:55.964435 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 29 20:39:55.964597 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 29 20:39:55.972439 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 29 20:39:55.973911 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 29 20:39:55.973995 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 29 20:39:55.974437 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 29 20:39:55.974537 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 29 20:39:55.983880 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 29 20:39:55.985021 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 29 20:39:55.985129 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 29 20:39:55.985914 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 29 20:39:55.985971 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 29 20:39:55.991219 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 29 20:39:55.991283 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 29 20:39:55.992138 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 29 20:39:56.030005 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 29 20:39:56.030227 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 29 20:39:56.037306 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 29 20:39:56.037393 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 29 20:39:56.039948 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 29 20:39:56.040003 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 29 20:39:56.041612 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 29 20:39:56.041669 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 29 20:39:56.045698 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 29 20:39:56.045763 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 29 20:39:56.053920 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 29 20:39:56.053990 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 29 20:39:56.068417 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 29 20:39:56.070297 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 29 20:39:56.070399 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 29 20:39:56.074338 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 29 20:39:56.074397 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 29 20:39:56.078636 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 29 20:39:56.078704 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 20:39:56.084808 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 29 20:39:56.086676 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 29 20:39:56.094964 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 29 20:39:56.095124 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 29 20:39:56.097057 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 29 20:39:56.101421 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 29 20:39:56.131817 systemd[1]: Switching root.
Oct 29 20:39:56.174237 systemd-journald[319]: Journal stopped
Oct 29 20:39:58.565054 systemd-journald[319]: Received SIGTERM from PID 1 (systemd).
Oct 29 20:39:58.565189 kernel: SELinux: policy capability network_peer_controls=1
Oct 29 20:39:58.565219 kernel: SELinux: policy capability open_perms=1
Oct 29 20:39:58.565240 kernel: SELinux: policy capability extended_socket_class=1
Oct 29 20:39:58.565299 kernel: SELinux: policy capability always_check_network=0
Oct 29 20:39:58.565323 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 29 20:39:58.565338 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 29 20:39:58.565352 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 29 20:39:58.565366 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 29 20:39:58.565382 kernel: SELinux: policy capability userspace_initial_context=0
Oct 29 20:39:58.565398 kernel: audit: type=1403 audit(1761770397.311:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 29 20:39:58.565427 systemd[1]: Successfully loaded SELinux policy in 74.772ms.
Oct 29 20:39:58.565551 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.351ms.
Oct 29 20:39:58.565570 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 29 20:39:58.565591 systemd[1]: Detected virtualization kvm.
Oct 29 20:39:58.565606 systemd[1]: Detected architecture x86-64.
Oct 29 20:39:58.565621 systemd[1]: Detected first boot.
Oct 29 20:39:58.565649 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 29 20:39:58.565670 zram_generator::config[1165]: No configuration found.
Oct 29 20:39:58.565688 kernel: Guest personality initialized and is inactive
Oct 29 20:39:58.565704 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Oct 29 20:39:58.565720 kernel: Initialized host personality
Oct 29 20:39:58.565735 kernel: NET: Registered PF_VSOCK protocol family
Oct 29 20:39:58.565747 systemd[1]: Populated /etc with preset unit settings.
Oct 29 20:39:58.565762 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 29 20:39:58.565782 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 29 20:39:58.565799 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 29 20:39:58.565813 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 29 20:39:58.565826 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 29 20:39:58.565839 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 29 20:39:58.565851 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 29 20:39:58.565871 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 29 20:39:58.565886 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 29 20:39:58.565899 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 29 20:39:58.565921 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 29 20:39:58.565934 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 29 20:39:58.565951 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 29 20:39:58.565967 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 29 20:39:58.565988 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 29 20:39:58.566001 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 29 20:39:58.566014 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 29 20:39:58.566027 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 29 20:39:58.566043 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 29 20:39:58.566069 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 29 20:39:58.566091 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 29 20:39:58.566116 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 29 20:39:58.566133 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 29 20:39:58.566146 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 29 20:39:58.566161 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 29 20:39:58.566174 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 29 20:39:58.566187 systemd[1]: Reached target slices.target - Slice Units.
Oct 29 20:39:58.566199 systemd[1]: Reached target swap.target - Swaps.
Oct 29 20:39:58.566219 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 29 20:39:58.566232 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 29 20:39:58.566245 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 29 20:39:58.566257 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 29 20:39:58.566272 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 29 20:39:58.566287 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 29 20:39:58.566303 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 29 20:39:58.566329 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 29 20:39:58.566342 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 29 20:39:58.566358 systemd[1]: Mounting media.mount - External Media Directory...
Oct 29 20:39:58.566374 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 20:39:58.566388 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 29 20:39:58.566401 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 29 20:39:58.566416 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 29 20:39:58.566438 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 29 20:39:58.566474 systemd[1]: Reached target machines.target - Containers.
Oct 29 20:39:58.566501 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 29 20:39:58.566515 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 29 20:39:58.566527 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 29 20:39:58.566545 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 29 20:39:58.566558 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 29 20:39:58.566597 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 29 20:39:58.566617 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 29 20:39:58.566630 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 29 20:39:58.566643 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 29 20:39:58.566656 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 29 20:39:58.566668 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 29 20:39:58.566691 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 29 20:39:58.566712 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 29 20:39:58.566743 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 29 20:39:58.566765 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 29 20:39:58.566779 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 29 20:39:58.566793 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 29 20:39:58.566810 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 29 20:39:58.566838 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 29 20:39:58.566882 systemd-journald[1236]: Collecting audit messages is disabled.
Oct 29 20:39:58.566931 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 29 20:39:58.566954 systemd-journald[1236]: Journal started
Oct 29 20:39:58.566979 systemd-journald[1236]: Runtime Journal (/run/log/journal/b8172b06d8064a8e9c16dcdf07f33fba) is 6M, max 48.2M, 42.2M free.
Oct 29 20:39:58.662600 kernel: fuse: init (API version 7.41)
Oct 29 20:39:58.116089 systemd[1]: Queued start job for default target multi-user.target.
Oct 29 20:39:58.136665 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 29 20:39:58.137288 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 29 20:39:58.671187 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 29 20:39:58.679577 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 20:39:58.683694 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 29 20:39:58.687680 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 29 20:39:58.689724 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 29 20:39:58.691757 systemd[1]: Mounted media.mount - External Media Directory.
Oct 29 20:39:58.693683 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 29 20:39:58.695781 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 29 20:39:58.698218 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 29 20:39:58.700427 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 29 20:39:58.703048 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 29 20:39:58.705626 kernel: ACPI: bus type drm_connector registered
Oct 29 20:39:58.707048 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 29 20:39:58.707442 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 29 20:39:58.709915 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 29 20:39:58.710186 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 29 20:39:58.712599 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 29 20:39:58.712817 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 29 20:39:58.715075 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 29 20:39:58.715297 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 29 20:39:58.717989 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 29 20:39:58.718264 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 29 20:39:58.720564 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 29 20:39:58.720794 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 29 20:39:58.723196 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 29 20:39:58.725682 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 29 20:39:58.729538 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 29 20:39:58.732592 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 29 20:39:58.749190 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 29 20:39:58.751730 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Oct 29 20:39:58.755727 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 29 20:39:58.759353 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 29 20:39:58.761425 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 29 20:39:58.761495 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 29 20:39:58.764629 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 29 20:39:58.767508 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 29 20:39:58.772492 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 29 20:39:58.778502 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 29 20:39:58.781346 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 29 20:39:58.783147 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 29 20:39:58.785381 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 29 20:39:58.787739 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 29 20:39:58.788971 systemd-journald[1236]: Time spent on flushing to /var/log/journal/b8172b06d8064a8e9c16dcdf07f33fba is 26.943ms for 966 entries.
Oct 29 20:39:58.788971 systemd-journald[1236]: System Journal (/var/log/journal/b8172b06d8064a8e9c16dcdf07f33fba) is 8M, max 163.5M, 155.5M free.
Oct 29 20:39:58.825009 systemd-journald[1236]: Received client request to flush runtime journal.
Oct 29 20:39:58.793785 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 29 20:39:58.797789 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 29 20:39:58.805080 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 29 20:39:58.808485 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 29 20:39:58.810812 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 29 20:39:58.828467 kernel: loop1: detected capacity change from 0 to 229808
Oct 29 20:39:58.828331 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 29 20:39:58.830935 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 29 20:39:58.834330 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 29 20:39:58.875620 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 29 20:39:58.883716 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 29 20:39:58.886848 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 29 20:39:58.892987 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 29 20:39:58.896359 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 29 20:39:58.913182 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 29 20:39:58.915552 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 29 20:39:58.925479 kernel: loop2: detected capacity change from 0 to 128912
Oct 29 20:39:58.946886 systemd-tmpfiles[1300]: ACLs are not supported, ignoring.
Oct 29 20:39:58.946922 systemd-tmpfiles[1300]: ACLs are not supported, ignoring.
Oct 29 20:39:58.955653 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 29 20:39:58.967504 kernel: loop3: detected capacity change from 0 to 111544
Oct 29 20:39:58.977872 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 29 20:39:59.139086 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 29 20:39:59.936496 kernel: loop4: detected capacity change from 0 to 229808
Oct 29 20:39:59.946541 kernel: loop5: detected capacity change from 0 to 128912
Oct 29 20:39:59.956483 kernel: loop6: detected capacity change from 0 to 111544
Oct 29 20:39:59.966824 (sd-merge)[1314]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Oct 29 20:39:59.972416 (sd-merge)[1314]: Merged extensions into '/usr'.
Oct 29 20:39:59.979590 systemd[1]: Reload requested from client PID 1284 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 29 20:39:59.979623 systemd[1]: Reloading...
Oct 29 20:39:59.993312 systemd-resolved[1299]: Positive Trust Anchors:
Oct 29 20:39:59.993332 systemd-resolved[1299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 29 20:39:59.993337 systemd-resolved[1299]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 29 20:39:59.993370 systemd-resolved[1299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 29 20:39:59.999373 systemd-resolved[1299]: Defaulting to hostname 'linux'.
Oct 29 20:40:00.048490 zram_generator::config[1342]: No configuration found.
Oct 29 20:40:00.273284 systemd[1]: Reloading finished in 293 ms.
Oct 29 20:40:00.300253 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 29 20:40:00.302723 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 29 20:40:00.307864 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 29 20:40:00.323611 systemd[1]: Starting ensure-sysext.service...
Oct 29 20:40:00.326816 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 29 20:40:00.341520 systemd[1]: Reload requested from client PID 1378 ('systemctl') (unit ensure-sysext.service)...
Oct 29 20:40:00.341541 systemd[1]: Reloading...
Oct 29 20:40:00.347612 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 29 20:40:00.347972 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 29 20:40:00.348351 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 29 20:40:00.348704 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 29 20:40:00.349924 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 29 20:40:00.350344 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
Oct 29 20:40:00.350493 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
Oct 29 20:40:00.356751 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot.
Oct 29 20:40:00.356834 systemd-tmpfiles[1379]: Skipping /boot
Oct 29 20:40:00.367864 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot.
Oct 29 20:40:00.367983 systemd-tmpfiles[1379]: Skipping /boot
Oct 29 20:40:00.408483 zram_generator::config[1409]: No configuration found.
Oct 29 20:40:00.844114 systemd[1]: Reloading finished in 502 ms.
Oct 29 20:40:00.866217 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 29 20:40:00.892608 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 29 20:40:00.904743 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 29 20:40:00.908239 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 29 20:40:00.919324 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 29 20:40:00.925202 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 29 20:40:00.929974 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 29 20:40:00.937811 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 29 20:40:00.945526 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 20:40:00.945756 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 20:40:00.948159 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 29 20:40:00.957792 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 29 20:40:00.967138 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 29 20:40:00.969518 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 20:40:00.969667 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Oct 29 20:40:00.969798 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 20:40:00.971796 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 20:40:00.972092 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 29 20:40:00.975065 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 20:40:00.975342 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 29 20:40:00.985202 systemd-udevd[1458]: Using default interface naming scheme 'v257'. Oct 29 20:40:00.986758 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 29 20:40:00.996399 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 29 20:40:01.001624 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 20:40:01.002964 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 29 20:40:01.012289 augenrules[1479]: No rules Oct 29 20:40:01.014281 systemd[1]: audit-rules.service: Deactivated successfully. Oct 29 20:40:01.015535 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 29 20:40:01.020061 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 20:40:01.021401 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 20:40:01.023936 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 29 20:40:01.028154 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 29 20:40:01.032717 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Oct 29 20:40:01.032935 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 20:40:01.033094 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 20:40:01.036892 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 20:40:01.043965 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 29 20:40:01.047069 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 20:40:01.048916 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 29 20:40:01.053957 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 29 20:40:01.056185 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 20:40:01.056336 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 20:40:01.056490 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 20:40:01.057941 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 29 20:40:01.065194 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 20:40:01.070828 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 29 20:40:01.075144 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Oct 29 20:40:01.076533 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 29 20:40:01.079900 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 20:40:01.082691 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 29 20:40:01.090095 systemd[1]: Finished ensure-sysext.service. Oct 29 20:40:01.092937 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 29 20:40:01.093188 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 29 20:40:01.098886 augenrules[1489]: /sbin/augenrules: No change Oct 29 20:40:01.103991 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 29 20:40:01.112788 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 29 20:40:01.115039 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 20:40:01.115129 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 29 20:40:01.115258 augenrules[1532]: No rules Oct 29 20:40:01.117181 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 29 20:40:01.119565 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 29 20:40:01.119972 systemd[1]: audit-rules.service: Deactivated successfully. Oct 29 20:40:01.120258 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 29 20:40:01.220994 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 29 20:40:01.242334 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
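The audit-rules messages above ("No rules", "/sbin/augenrules: No change") come from augenrules(8), which concatenates every *.rules fragment under /etc/audit/rules.d/ into /etc/audit/audit.rules and loads the result; on this image the directory simply contains no fragments. Were rules wanted, a minimal fragment would look like this (file name and watch rule are illustrative):

```
# /etc/audit/rules.d/10-example.rules (hypothetical)
# Log writes and attribute changes to /etc/passwd under the key passwd_changes
-w /etc/passwd -p wa -k passwd_changes
```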
Oct 29 20:40:01.244931 systemd-networkd[1536]: lo: Link UP Oct 29 20:40:01.244941 systemd-networkd[1536]: lo: Gained carrier Oct 29 20:40:01.245264 systemd[1]: Reached target time-set.target - System Time Set. Oct 29 20:40:01.247852 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 29 20:40:01.248602 systemd-networkd[1536]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 29 20:40:01.248615 systemd-networkd[1536]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 29 20:40:01.249418 systemd-networkd[1536]: eth0: Link UP Oct 29 20:40:01.249976 systemd-networkd[1536]: eth0: Gained carrier Oct 29 20:40:01.250006 systemd-networkd[1536]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 29 20:40:01.252091 systemd[1]: Reached target network.target - Network. Oct 29 20:40:01.256185 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 29 20:40:01.260336 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 29 20:40:01.280232 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 29 20:40:01.561288 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 29 20:40:02.045649 systemd-networkd[1536]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 29 20:40:02.048400 systemd-timesyncd[1539]: Network configuration changed, trying to establish connection. Oct 29 20:40:02.049639 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 29 20:40:02.049874 systemd-timesyncd[1539]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
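eth0 above is configured by /usr/lib/systemd/network/zz-default.network, Flatcar's lowest-priority catch-all unit (the "zz-" prefix sorts it after any user-supplied .network file), which is why a DHCPv4 lease for 10.0.0.139/16 is acquired a moment later. The shipped file may differ in detail, but a catch-all DHCP unit of this kind is essentially:

```ini
# /usr/lib/systemd/network/zz-default.network (sketch; contents assumed)
[Match]
Name=*

[Network]
DHCP=yes
```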
Oct 29 20:40:02.050715 systemd-timesyncd[1539]: Initial clock synchronization to Wed 2025-10-29 20:40:02.202139 UTC. Oct 29 20:40:02.061524 kernel: ACPI: button: Power Button [PWRF] Oct 29 20:40:02.134500 kernel: mousedev: PS/2 mouse device common for all mice Oct 29 20:40:02.308145 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 29 20:40:02.562335 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 29 20:40:02.562800 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 29 20:40:02.577783 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 29 20:40:02.634546 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 20:40:02.770863 kernel: kvm_amd: TSC scaling supported Oct 29 20:40:02.770977 kernel: kvm_amd: Nested Virtualization enabled Oct 29 20:40:02.771002 kernel: kvm_amd: Nested Paging enabled Oct 29 20:40:02.773360 kernel: kvm_amd: LBR virtualization supported Oct 29 20:40:02.773389 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 29 20:40:02.774805 kernel: kvm_amd: Virtual GIF supported Oct 29 20:40:02.789639 ldconfig[1450]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 29 20:40:02.803503 kernel: EDAC MC: Ver: 3.0.0 Oct 29 20:40:02.805174 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 29 20:40:02.807089 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 29 20:40:02.833664 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 29 20:40:02.899480 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 20:40:02.904013 systemd[1]: Reached target sysinit.target - System Initialization. Oct 29 20:40:02.906226 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Oct 29 20:40:02.908626 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 29 20:40:02.910919 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 29 20:40:02.913336 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 29 20:40:02.915536 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 29 20:40:02.918018 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 29 20:40:02.920231 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 29 20:40:02.920265 systemd[1]: Reached target paths.target - Path Units. Oct 29 20:40:02.921820 systemd[1]: Reached target timers.target - Timer Units. Oct 29 20:40:02.924627 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 29 20:40:02.928412 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 29 20:40:02.932995 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 29 20:40:02.935745 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 29 20:40:02.938160 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 29 20:40:02.943917 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 29 20:40:02.946411 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 29 20:40:02.949739 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 29 20:40:02.952609 systemd[1]: Reached target sockets.target - Socket Units. Oct 29 20:40:02.954492 systemd[1]: Reached target basic.target - Basic System. 
Oct 29 20:40:02.956406 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 29 20:40:02.956482 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 29 20:40:02.959106 systemd[1]: Starting containerd.service - containerd container runtime... Oct 29 20:40:02.963031 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 29 20:40:02.967751 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 29 20:40:02.972620 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 29 20:40:02.981424 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 29 20:40:02.983382 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 29 20:40:02.984688 jq[1592]: false Oct 29 20:40:02.985147 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 29 20:40:02.988994 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 29 20:40:02.992699 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 29 20:40:02.997228 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 29 20:40:03.002784 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 29 20:40:03.006942 oslogin_cache_refresh[1594]: Refreshing passwd entry cache Oct 29 20:40:03.007991 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Refreshing passwd entry cache Oct 29 20:40:03.010792 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 29 20:40:03.012626 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Oct 29 20:40:03.013243 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 29 20:40:03.014308 systemd[1]: Starting update-engine.service - Update Engine... Oct 29 20:40:03.017657 extend-filesystems[1593]: Found /dev/vda6 Oct 29 20:40:03.018185 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 29 20:40:03.024302 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Failure getting users, quitting Oct 29 20:40:03.024302 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 29 20:40:03.024302 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Refreshing group entry cache Oct 29 20:40:03.023670 oslogin_cache_refresh[1594]: Failure getting users, quitting Oct 29 20:40:03.023708 oslogin_cache_refresh[1594]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 29 20:40:03.023780 oslogin_cache_refresh[1594]: Refreshing group entry cache Oct 29 20:40:03.026769 extend-filesystems[1593]: Found /dev/vda9 Oct 29 20:40:03.029489 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 29 20:40:03.032044 extend-filesystems[1593]: Checking size of /dev/vda9 Oct 29 20:40:03.038898 jq[1609]: true Oct 29 20:40:03.032212 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 29 20:40:03.039302 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Failure getting groups, quitting Oct 29 20:40:03.039302 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 29 20:40:03.033715 oslogin_cache_refresh[1594]: Failure getting groups, quitting Oct 29 20:40:03.032588 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Oct 29 20:40:03.033732 oslogin_cache_refresh[1594]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 29 20:40:03.035566 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 29 20:40:03.035823 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 29 20:40:03.039277 systemd[1]: motdgen.service: Deactivated successfully. Oct 29 20:40:03.039597 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 29 20:40:03.043899 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 29 20:40:03.044271 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 29 20:40:03.049853 systemd-networkd[1536]: eth0: Gained IPv6LL Oct 29 20:40:03.057459 extend-filesystems[1593]: Resized partition /dev/vda9 Oct 29 20:40:03.059525 update_engine[1608]: I20251029 20:40:03.057819 1608 main.cc:92] Flatcar Update Engine starting Oct 29 20:40:03.062787 extend-filesystems[1631]: resize2fs 1.47.3 (8-Jul-2025) Oct 29 20:40:03.070970 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 29 20:40:03.062843 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 29 20:40:03.077855 systemd[1]: Reached target network-online.target - Network is Online. Oct 29 20:40:03.087254 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 29 20:40:03.096862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 20:40:03.104829 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 29 20:40:03.110236 jq[1620]: true Oct 29 20:40:03.169511 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 29 20:40:03.273455 update_engine[1608]: I20251029 20:40:03.231194 1608 update_check_scheduler.cc:74] Next update check in 9m32s Oct 29 20:40:03.273561 tar[1617]: linux-amd64/LICENSE Oct 29 20:40:03.224065 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Oct 29 20:40:03.223794 dbus-daemon[1590]: [system] SELinux support is enabled Oct 29 20:40:03.228649 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 29 20:40:03.274356 tar[1617]: linux-amd64/helm Oct 29 20:40:03.274387 extend-filesystems[1631]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 29 20:40:03.274387 extend-filesystems[1631]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 29 20:40:03.274387 extend-filesystems[1631]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 29 20:40:03.228675 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 29 20:40:03.282894 extend-filesystems[1593]: Resized filesystem in /dev/vda9 Oct 29 20:40:03.284399 sshd_keygen[1621]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 29 20:40:03.233346 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 29 20:40:03.233382 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 29 20:40:03.236717 systemd[1]: Started update-engine.service - Update Engine. Oct 29 20:40:03.263525 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 29 20:40:03.266412 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 29 20:40:03.266694 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 29 20:40:03.269027 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
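The resize2fs lines above grow the root filesystem on /dev/vda9 online, from 456704 to 1784827 blocks at 4 KiB each. A quick sanity check of the resulting size, using only the block counts from the log:

```shell
old_blocks=456704
new_blocks=1784827
block_size=4096   # EXT4-fs reports "(4k) blocks" above

# Final size in MiB after the online resize
echo $(( new_blocks * block_size / 1024 / 1024 ))   # → 6971
```

That is roughly 6.8 GiB, consistent with a small OEM root partition being expanded to fill its virtual disk on first boot.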
Oct 29 20:40:03.275591 systemd-logind[1604]: Watching system buttons on /dev/input/event2 (Power Button) Oct 29 20:40:03.275620 systemd-logind[1604]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 29 20:40:03.277314 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 29 20:40:03.278204 systemd-logind[1604]: New seat seat0. Oct 29 20:40:03.296085 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 29 20:40:03.298437 systemd[1]: Started systemd-logind.service - User Login Management. Oct 29 20:40:03.306103 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 29 20:40:03.311057 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 29 20:40:03.314232 bash[1669]: Updated "/home/core/.ssh/authorized_keys" Oct 29 20:40:03.314253 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 29 20:40:03.319615 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 29 20:40:03.321717 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 29 20:40:03.378653 locksmithd[1670]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 29 20:40:03.380008 systemd[1]: issuegen.service: Deactivated successfully. Oct 29 20:40:03.380315 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 29 20:40:03.386171 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 29 20:40:03.452443 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 29 20:40:03.459129 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 29 20:40:03.468843 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 29 20:40:03.471255 systemd[1]: Reached target getty.target - Login Prompts. Oct 29 20:40:03.568417 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Oct 29 20:40:03.573363 systemd[1]: Started sshd@0-10.0.0.139:22-10.0.0.1:48284.service - OpenSSH per-connection server daemon (10.0.0.1:48284). Oct 29 20:40:03.770573 containerd[1634]: time="2025-10-29T20:40:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 29 20:40:03.772498 containerd[1634]: time="2025-10-29T20:40:03.772334861Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 29 20:40:03.783628 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 48284 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM Oct 29 20:40:03.787776 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 20:40:03.796420 containerd[1634]: time="2025-10-29T20:40:03.796361287Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="33.896µs" Oct 29 20:40:03.796420 containerd[1634]: time="2025-10-29T20:40:03.796406883Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 29 20:40:03.796568 containerd[1634]: time="2025-10-29T20:40:03.796540098Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 29 20:40:03.796872 containerd[1634]: time="2025-10-29T20:40:03.796814532Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 29 20:40:03.796872 containerd[1634]: time="2025-10-29T20:40:03.796854747Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 29 20:40:03.796942 containerd[1634]: time="2025-10-29T20:40:03.796894238Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 29 20:40:03.797013 
containerd[1634]: time="2025-10-29T20:40:03.796985256Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 29 20:40:03.797013 containerd[1634]: time="2025-10-29T20:40:03.797006115Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 29 20:40:03.797437 containerd[1634]: time="2025-10-29T20:40:03.797385392Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 29 20:40:03.797437 containerd[1634]: time="2025-10-29T20:40:03.797414059Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 29 20:40:03.797437 containerd[1634]: time="2025-10-29T20:40:03.797429405Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 29 20:40:03.797437 containerd[1634]: time="2025-10-29T20:40:03.797441993Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 29 20:40:03.797621 containerd[1634]: time="2025-10-29T20:40:03.797584448Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 29 20:40:03.798317 containerd[1634]: time="2025-10-29T20:40:03.798187641Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 29 20:40:03.798382 containerd[1634]: time="2025-10-29T20:40:03.798341500Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Oct 29 20:40:03.798382 containerd[1634]: time="2025-10-29T20:40:03.798358203Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 29 20:40:03.798441 containerd[1634]: time="2025-10-29T20:40:03.798402481Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 29 20:40:03.798763 containerd[1634]: time="2025-10-29T20:40:03.798718500Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 29 20:40:03.798852 containerd[1634]: time="2025-10-29T20:40:03.798816399Z" level=info msg="metadata content store policy set" policy=shared Oct 29 20:40:03.801417 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 29 20:40:03.807524 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 29 20:40:03.808996 containerd[1634]: time="2025-10-29T20:40:03.808906537Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 29 20:40:03.809828 containerd[1634]: time="2025-10-29T20:40:03.809777845Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 29 20:40:03.809869 containerd[1634]: time="2025-10-29T20:40:03.809841256Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 29 20:40:03.811542 containerd[1634]: time="2025-10-29T20:40:03.809899503Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 29 20:40:03.811542 containerd[1634]: time="2025-10-29T20:40:03.809955379Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 29 20:40:03.811542 containerd[1634]: time="2025-10-29T20:40:03.809996514Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 29 
20:40:03.811542 containerd[1634]: time="2025-10-29T20:40:03.810036352Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 29 20:40:03.811542 containerd[1634]: time="2025-10-29T20:40:03.810067093Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 29 20:40:03.811542 containerd[1634]: time="2025-10-29T20:40:03.810083705Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 29 20:40:03.811542 containerd[1634]: time="2025-10-29T20:40:03.810098661Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 29 20:40:03.811542 containerd[1634]: time="2025-10-29T20:40:03.810110913Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 29 20:40:03.811542 containerd[1634]: time="2025-10-29T20:40:03.810144747Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 29 20:40:03.811542 containerd[1634]: time="2025-10-29T20:40:03.810359669Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 29 20:40:03.811542 containerd[1634]: time="2025-10-29T20:40:03.810406878Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 29 20:40:03.811542 containerd[1634]: time="2025-10-29T20:40:03.810439325Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 29 20:40:03.822438 containerd[1634]: time="2025-10-29T20:40:03.810458519Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 29 20:40:03.822438 containerd[1634]: time="2025-10-29T20:40:03.821632201Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 29 20:40:03.822438 containerd[1634]: 
time="2025-10-29T20:40:03.821667210Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 29 20:40:03.822438 containerd[1634]: time="2025-10-29T20:40:03.821695868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 29 20:40:03.822438 containerd[1634]: time="2025-10-29T20:40:03.821713153Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 29 20:40:03.822438 containerd[1634]: time="2025-10-29T20:40:03.821731990Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 29 20:40:03.822438 containerd[1634]: time="2025-10-29T20:40:03.821756166Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 29 20:40:03.822438 containerd[1634]: time="2025-10-29T20:40:03.821791523Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 29 20:40:03.822438 containerd[1634]: time="2025-10-29T20:40:03.821974509Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 29 20:40:03.822438 containerd[1634]: time="2025-10-29T20:40:03.822001237Z" level=info msg="Start snapshots syncer" Oct 29 20:40:03.822438 containerd[1634]: time="2025-10-29T20:40:03.822036839Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 29 20:40:03.821698 systemd-logind[1604]: New session 1 of user core. 
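Two of the containerd messages above record configuration schema drift: the "Ignoring unknown key in TOML … key=subreaper" warning, and the note that the config was "migrated from version 2" at startup with `containerd config migrate` suggested as the permanent fix. `subreaper` existed in early containerd configurations but has no equivalent in the current schema, so containerd 2.x drops it in strict mode. A minimal sketch of the situation (file contents assumed; only the version numbers and the `subreaper` key come from the log):

```toml
# /etc/containerd/config.toml — fragment that triggers the warning
version = 2
subreaper = true   # unknown in this schema: logged and ignored

# After running `containerd config migrate` the header becomes
# version = 3, and the startup migration (and its warning) goes away.
```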
Oct 29 20:40:03.827410 containerd[1634]: time="2025-10-29T20:40:03.827290893Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Oct 29 20:40:03.827728 containerd[1634]: time="2025-10-29T20:40:03.827496432Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Oct 29 20:40:03.832731 containerd[1634]: time="2025-10-29T20:40:03.832663266Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Oct 29 20:40:03.832998 containerd[1634]: time="2025-10-29T20:40:03.832954965Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Oct 29 20:40:03.833087 containerd[1634]: time="2025-10-29T20:40:03.832999815Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Oct 29 20:40:03.833087 containerd[1634]: time="2025-10-29T20:40:03.833039050Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Oct 29 20:40:03.833148 containerd[1634]: time="2025-10-29T20:40:03.833088639Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Oct 29 20:40:03.833148 containerd[1634]: time="2025-10-29T20:40:03.833108895Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Oct 29 20:40:03.833206 containerd[1634]: time="2025-10-29T20:40:03.833159432Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Oct 29 20:40:03.833206 containerd[1634]: time="2025-10-29T20:40:03.833174451Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Oct 29 20:40:03.833255 containerd[1634]: time="2025-10-29T20:40:03.833213900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Oct 29 20:40:03.833255 containerd[1634]: time="2025-10-29T20:40:03.833227194Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Oct 29 20:40:03.833255 containerd[1634]: time="2025-10-29T20:40:03.833239731Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Oct 29 20:40:03.833372 containerd[1634]: time="2025-10-29T20:40:03.833334731Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Oct 29 20:40:03.833587 containerd[1634]: time="2025-10-29T20:40:03.833545070Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Oct 29 20:40:03.833587 containerd[1634]: time="2025-10-29T20:40:03.833572032Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Oct 29 20:40:03.833587 containerd[1634]: time="2025-10-29T20:40:03.833586878Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Oct 29 20:40:03.833678 containerd[1634]: time="2025-10-29T20:40:03.833595974Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Oct 29 20:40:03.833678 containerd[1634]: time="2025-10-29T20:40:03.833613218Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Oct 29 20:40:03.833678 containerd[1634]: time="2025-10-29T20:40:03.833627675Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Oct 29 20:40:03.833678 containerd[1634]: time="2025-10-29T20:40:03.833653005Z" level=info msg="runtime interface created"
Oct 29 20:40:03.833678 containerd[1634]: time="2025-10-29T20:40:03.833659366Z" level=info msg="created NRI interface"
Oct 29 20:40:03.833678 containerd[1634]: time="2025-10-29T20:40:03.833672046Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Oct 29 20:40:03.833853 containerd[1634]: time="2025-10-29T20:40:03.833698591Z" level=info msg="Connect containerd service"
Oct 29 20:40:03.833853 containerd[1634]: time="2025-10-29T20:40:03.833742248Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 29 20:40:03.835253 containerd[1634]: time="2025-10-29T20:40:03.835201087Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 29 20:40:03.885481 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 29 20:40:03.897575 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 29 20:40:03.924750 (systemd)[1717]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 20:40:03.928250 systemd-logind[1604]: New session 2 of user core.
Oct 29 20:40:04.223960 systemd[1717]: Queued start job for default target default.target.
Oct 29 20:40:04.242199 systemd[1717]: Created slice app.slice - User Application Slice.
Oct 29 20:40:04.242229 systemd[1717]: Reached target paths.target - Paths.
Oct 29 20:40:04.242276 systemd[1717]: Reached target timers.target - Timers.
Oct 29 20:40:04.245075 systemd[1717]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 29 20:40:04.245347 containerd[1634]: time="2025-10-29T20:40:04.245217262Z" level=info msg="Start subscribing containerd event"
Oct 29 20:40:04.246560 containerd[1634]: time="2025-10-29T20:40:04.245370801Z" level=info msg="Start recovering state"
Oct 29 20:40:04.246560 containerd[1634]: time="2025-10-29T20:40:04.245723340Z" level=info msg="Start event monitor"
Oct 29 20:40:04.246560 containerd[1634]: time="2025-10-29T20:40:04.245762423Z" level=info msg="Start cni network conf syncer for default"
Oct 29 20:40:04.246560 containerd[1634]: time="2025-10-29T20:40:04.245792196Z" level=info msg="Start streaming server"
Oct 29 20:40:04.246560 containerd[1634]: time="2025-10-29T20:40:04.245811651Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Oct 29 20:40:04.246560 containerd[1634]: time="2025-10-29T20:40:04.245823099Z" level=info msg="runtime interface starting up..."
Oct 29 20:40:04.246560 containerd[1634]: time="2025-10-29T20:40:04.245833927Z" level=info msg="starting plugins..."
Oct 29 20:40:04.246560 containerd[1634]: time="2025-10-29T20:40:04.245859371Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Oct 29 20:40:04.247252 containerd[1634]: time="2025-10-29T20:40:04.247116750Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 29 20:40:04.247252 containerd[1634]: time="2025-10-29T20:40:04.247182245Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 29 20:40:04.247410 systemd[1]: Started containerd.service - containerd container runtime.
Oct 29 20:40:04.249466 containerd[1634]: time="2025-10-29T20:40:04.248877877Z" level=info msg="containerd successfully booted in 0.479100s"
Oct 29 20:40:04.257902 systemd[1717]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 29 20:40:04.258731 systemd[1717]: Reached target sockets.target - Sockets.
Oct 29 20:40:04.259043 systemd[1717]: Reached target basic.target - Basic System.
Oct 29 20:40:04.259103 systemd[1717]: Reached target default.target - Main User Target.
Oct 29 20:40:04.259144 systemd[1717]: Startup finished in 281ms.
Oct 29 20:40:04.259251 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 29 20:40:04.269487 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 29 20:40:04.378067 tar[1617]: linux-amd64/README.md
Oct 29 20:40:04.390819 systemd[1]: Started sshd@1-10.0.0.139:22-10.0.0.1:48290.service - OpenSSH per-connection server daemon (10.0.0.1:48290).
Oct 29 20:40:04.455739 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 29 20:40:04.501601 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 48290 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM
Oct 29 20:40:04.503950 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 20:40:04.511125 systemd-logind[1604]: New session 3 of user core.
Oct 29 20:40:04.527786 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 29 20:40:04.549106 sshd[1747]: Connection closed by 10.0.0.1 port 48290
Oct 29 20:40:04.549702 sshd-session[1742]: pam_unix(sshd:session): session closed for user core
Oct 29 20:40:04.561128 systemd[1]: sshd@1-10.0.0.139:22-10.0.0.1:48290.service: Deactivated successfully.
Oct 29 20:40:04.564002 systemd[1]: session-3.scope: Deactivated successfully.
Oct 29 20:40:04.565307 systemd-logind[1604]: Session 3 logged out. Waiting for processes to exit.
Oct 29 20:40:04.569980 systemd[1]: Started sshd@2-10.0.0.139:22-10.0.0.1:48294.service - OpenSSH per-connection server daemon (10.0.0.1:48294).
Oct 29 20:40:04.573360 systemd-logind[1604]: Removed session 3.
Oct 29 20:40:04.654600 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 48294 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM
Oct 29 20:40:04.656635 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 20:40:04.662735 systemd-logind[1604]: New session 4 of user core.
Oct 29 20:40:04.673713 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 29 20:40:04.691290 sshd[1758]: Connection closed by 10.0.0.1 port 48294
Oct 29 20:40:04.692935 sshd-session[1753]: pam_unix(sshd:session): session closed for user core
Oct 29 20:40:04.698246 systemd[1]: sshd@2-10.0.0.139:22-10.0.0.1:48294.service: Deactivated successfully.
Oct 29 20:40:04.700511 systemd[1]: session-4.scope: Deactivated successfully.
Oct 29 20:40:04.701433 systemd-logind[1604]: Session 4 logged out. Waiting for processes to exit.
Oct 29 20:40:04.703752 systemd-logind[1604]: Removed session 4.
Oct 29 20:40:05.370031 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 29 20:40:05.372527 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 29 20:40:05.374682 systemd[1]: Startup finished in 2.752s (kernel) + 46.290s (initrd) + 8.135s (userspace) = 57.179s.
Oct 29 20:40:05.390756 (kubelet)[1768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 29 20:40:06.294180 kubelet[1768]: E1029 20:40:06.294095 1768 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 29 20:40:06.298638 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 29 20:40:06.298849 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 29 20:40:06.299325 systemd[1]: kubelet.service: Consumed 2.617s CPU time, 267.9M memory peak.
Oct 29 20:40:14.797381 systemd[1]: Started sshd@3-10.0.0.139:22-10.0.0.1:43786.service - OpenSSH per-connection server daemon (10.0.0.1:43786).
Oct 29 20:40:14.863260 sshd[1781]: Accepted publickey for core from 10.0.0.1 port 43786 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM
Oct 29 20:40:14.864990 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 20:40:14.870208 systemd-logind[1604]: New session 5 of user core.
Oct 29 20:40:14.880629 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 29 20:40:14.895092 sshd[1785]: Connection closed by 10.0.0.1 port 43786
Oct 29 20:40:14.895475 sshd-session[1781]: pam_unix(sshd:session): session closed for user core
Oct 29 20:40:14.913894 systemd[1]: sshd@3-10.0.0.139:22-10.0.0.1:43786.service: Deactivated successfully.
Oct 29 20:40:14.916559 systemd[1]: session-5.scope: Deactivated successfully.
Oct 29 20:40:14.917507 systemd-logind[1604]: Session 5 logged out. Waiting for processes to exit.
Oct 29 20:40:14.921641 systemd[1]: Started sshd@4-10.0.0.139:22-10.0.0.1:43796.service - OpenSSH per-connection server daemon (10.0.0.1:43796).
Oct 29 20:40:14.922399 systemd-logind[1604]: Removed session 5.
Oct 29 20:40:14.992729 sshd[1791]: Accepted publickey for core from 10.0.0.1 port 43796 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM
Oct 29 20:40:14.995162 sshd-session[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 20:40:15.000590 systemd-logind[1604]: New session 6 of user core.
Oct 29 20:40:15.018643 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 29 20:40:15.028546 sshd[1795]: Connection closed by 10.0.0.1 port 43796
Oct 29 20:40:15.028850 sshd-session[1791]: pam_unix(sshd:session): session closed for user core
Oct 29 20:40:15.046160 systemd[1]: sshd@4-10.0.0.139:22-10.0.0.1:43796.service: Deactivated successfully.
Oct 29 20:40:15.047970 systemd[1]: session-6.scope: Deactivated successfully.
Oct 29 20:40:15.048838 systemd-logind[1604]: Session 6 logged out. Waiting for processes to exit.
Oct 29 20:40:15.051443 systemd[1]: Started sshd@5-10.0.0.139:22-10.0.0.1:43806.service - OpenSSH per-connection server daemon (10.0.0.1:43806).
Oct 29 20:40:15.052080 systemd-logind[1604]: Removed session 6.
Oct 29 20:40:15.110534 sshd[1801]: Accepted publickey for core from 10.0.0.1 port 43806 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM
Oct 29 20:40:15.112021 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 20:40:15.116340 systemd-logind[1604]: New session 7 of user core.
Oct 29 20:40:15.127586 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 29 20:40:15.139946 sshd[1806]: Connection closed by 10.0.0.1 port 43806
Oct 29 20:40:15.140305 sshd-session[1801]: pam_unix(sshd:session): session closed for user core
Oct 29 20:40:15.154072 systemd[1]: sshd@5-10.0.0.139:22-10.0.0.1:43806.service: Deactivated successfully.
Oct 29 20:40:15.155830 systemd[1]: session-7.scope: Deactivated successfully.
Oct 29 20:40:15.156560 systemd-logind[1604]: Session 7 logged out. Waiting for processes to exit.
Oct 29 20:40:15.159079 systemd[1]: Started sshd@6-10.0.0.139:22-10.0.0.1:43816.service - OpenSSH per-connection server daemon (10.0.0.1:43816).
Oct 29 20:40:15.159671 systemd-logind[1604]: Removed session 7.
Oct 29 20:40:15.219772 sshd[1812]: Accepted publickey for core from 10.0.0.1 port 43816 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM
Oct 29 20:40:15.221124 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 20:40:15.225419 systemd-logind[1604]: New session 8 of user core.
Oct 29 20:40:15.239594 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 29 20:40:15.261797 sudo[1818]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 29 20:40:15.262108 sudo[1818]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 29 20:40:15.276058 sudo[1818]: pam_unix(sudo:session): session closed for user root
Oct 29 20:40:15.277426 sshd[1817]: Connection closed by 10.0.0.1 port 43816
Oct 29 20:40:15.277777 sshd-session[1812]: pam_unix(sshd:session): session closed for user core
Oct 29 20:40:15.292947 systemd[1]: sshd@6-10.0.0.139:22-10.0.0.1:43816.service: Deactivated successfully.
Oct 29 20:40:15.294695 systemd[1]: session-8.scope: Deactivated successfully.
Oct 29 20:40:15.295389 systemd-logind[1604]: Session 8 logged out. Waiting for processes to exit.
Oct 29 20:40:15.297851 systemd[1]: Started sshd@7-10.0.0.139:22-10.0.0.1:43822.service - OpenSSH per-connection server daemon (10.0.0.1:43822).
Oct 29 20:40:15.298405 systemd-logind[1604]: Removed session 8.
Oct 29 20:40:15.371681 sshd[1825]: Accepted publickey for core from 10.0.0.1 port 43822 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM
Oct 29 20:40:15.373122 sshd-session[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 20:40:15.377432 systemd-logind[1604]: New session 9 of user core.
Oct 29 20:40:15.392671 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 29 20:40:15.407609 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 29 20:40:15.407900 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 29 20:40:15.411811 sudo[1831]: pam_unix(sudo:session): session closed for user root
Oct 29 20:40:15.420417 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 29 20:40:15.420766 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 29 20:40:15.430442 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 29 20:40:15.486282 augenrules[1855]: No rules
Oct 29 20:40:15.487391 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 29 20:40:15.487895 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 29 20:40:15.489278 sudo[1830]: pam_unix(sudo:session): session closed for user root
Oct 29 20:40:15.491172 sshd[1829]: Connection closed by 10.0.0.1 port 43822
Oct 29 20:40:15.491552 sshd-session[1825]: pam_unix(sshd:session): session closed for user core
Oct 29 20:40:15.509629 systemd[1]: sshd@7-10.0.0.139:22-10.0.0.1:43822.service: Deactivated successfully.
Oct 29 20:40:15.511507 systemd[1]: session-9.scope: Deactivated successfully.
Oct 29 20:40:15.512251 systemd-logind[1604]: Session 9 logged out. Waiting for processes to exit.
Oct 29 20:40:15.514846 systemd[1]: Started sshd@8-10.0.0.139:22-10.0.0.1:43828.service - OpenSSH per-connection server daemon (10.0.0.1:43828).
Oct 29 20:40:15.515565 systemd-logind[1604]: Removed session 9.
Oct 29 20:40:15.577292 sshd[1864]: Accepted publickey for core from 10.0.0.1 port 43828 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM
Oct 29 20:40:15.578905 sshd-session[1864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 20:40:15.583531 systemd-logind[1604]: New session 10 of user core.
Oct 29 20:40:15.597661 systemd[1]: Started session-10.scope - Session 10 of User core.
Oct 29 20:40:15.612680 sudo[1869]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 29 20:40:15.612984 sudo[1869]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 29 20:40:16.013113 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 29 20:40:16.030791 (dockerd)[1891]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 29 20:40:16.446628 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 29 20:40:16.448491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 29 20:40:17.271390 dockerd[1891]: time="2025-10-29T20:40:17.271289372Z" level=info msg="Starting up"
Oct 29 20:40:17.272232 dockerd[1891]: time="2025-10-29T20:40:17.272202594Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Oct 29 20:40:17.325824 dockerd[1891]: time="2025-10-29T20:40:17.325747637Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Oct 29 20:40:17.571795 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 29 20:40:17.583742 (kubelet)[1924]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 29 20:40:17.771215 kubelet[1924]: E1029 20:40:17.771140 1924 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 29 20:40:17.780859 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 29 20:40:17.781094 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 29 20:40:17.781561 systemd[1]: kubelet.service: Consumed 960ms CPU time, 110.9M memory peak.
Oct 29 20:40:17.859948 dockerd[1891]: time="2025-10-29T20:40:17.859801601Z" level=info msg="Loading containers: start."
Oct 29 20:40:17.871477 kernel: Initializing XFRM netlink socket
Oct 29 20:40:18.172268 systemd-networkd[1536]: docker0: Link UP
Oct 29 20:40:18.177799 dockerd[1891]: time="2025-10-29T20:40:18.177761577Z" level=info msg="Loading containers: done."
Oct 29 20:40:18.194316 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2254526203-merged.mount: Deactivated successfully.
Oct 29 20:40:18.195469 dockerd[1891]: time="2025-10-29T20:40:18.195374037Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 29 20:40:18.195605 dockerd[1891]: time="2025-10-29T20:40:18.195509538Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Oct 29 20:40:18.195631 dockerd[1891]: time="2025-10-29T20:40:18.195611902Z" level=info msg="Initializing buildkit"
Oct 29 20:40:18.225572 dockerd[1891]: time="2025-10-29T20:40:18.225487204Z" level=info msg="Completed buildkit initialization"
Oct 29 20:40:18.232619 dockerd[1891]: time="2025-10-29T20:40:18.232578958Z" level=info msg="Daemon has completed initialization"
Oct 29 20:40:18.232690 dockerd[1891]: time="2025-10-29T20:40:18.232644588Z" level=info msg="API listen on /run/docker.sock"
Oct 29 20:40:18.232867 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 29 20:40:19.173963 containerd[1634]: time="2025-10-29T20:40:19.173868628Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Oct 29 20:40:19.906665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1981370476.mount: Deactivated successfully.
Oct 29 20:40:21.467079 containerd[1634]: time="2025-10-29T20:40:21.466995754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 20:40:21.468275 containerd[1634]: time="2025-10-29T20:40:21.468234031Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893"
Oct 29 20:40:21.469259 containerd[1634]: time="2025-10-29T20:40:21.469169810Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 20:40:21.472500 containerd[1634]: time="2025-10-29T20:40:21.472438688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 20:40:21.473875 containerd[1634]: time="2025-10-29T20:40:21.473807042Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.299854457s"
Oct 29 20:40:21.473933 containerd[1634]: time="2025-10-29T20:40:21.473877775Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Oct 29 20:40:21.474866 containerd[1634]: time="2025-10-29T20:40:21.474814680Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Oct 29 20:40:23.333799 containerd[1634]: time="2025-10-29T20:40:23.333725413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 20:40:23.334762 containerd[1634]: time="2025-10-29T20:40:23.334695287Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Oct 29 20:40:23.336529 containerd[1634]: time="2025-10-29T20:40:23.336490693Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 20:40:23.339509 containerd[1634]: time="2025-10-29T20:40:23.339472482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 20:40:23.340631 containerd[1634]: time="2025-10-29T20:40:23.340590157Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.865713338s"
Oct 29 20:40:23.340631 containerd[1634]: time="2025-10-29T20:40:23.340623122Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Oct 29 20:40:23.341077 containerd[1634]: time="2025-10-29T20:40:23.341046492Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Oct 29 20:40:25.705017 containerd[1634]: time="2025-10-29T20:40:25.704941167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 20:40:25.706718 containerd[1634]: time="2025-10-29T20:40:25.706677882Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568"
Oct 29 20:40:25.707895 containerd[1634]: time="2025-10-29T20:40:25.707837474Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 20:40:25.711084 containerd[1634]: time="2025-10-29T20:40:25.711031136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 20:40:25.712136 containerd[1634]: time="2025-10-29T20:40:25.712095082Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 2.371020956s"
Oct 29 20:40:25.712136 containerd[1634]: time="2025-10-29T20:40:25.712132991Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Oct 29 20:40:25.712644 containerd[1634]: time="2025-10-29T20:40:25.712606827Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Oct 29 20:40:27.414757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1095205930.mount: Deactivated successfully.
Oct 29 20:40:27.946166 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 29 20:40:27.948935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 29 20:40:28.237637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 29 20:40:28.265717 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 29 20:40:28.462783 kubelet[2209]: E1029 20:40:28.462719 2209 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 29 20:40:28.466972 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 29 20:40:28.467179 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 29 20:40:28.467647 systemd[1]: kubelet.service: Consumed 297ms CPU time, 108.9M memory peak.
Oct 29 20:40:28.845324 containerd[1634]: time="2025-10-29T20:40:28.845234208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 20:40:28.846276 containerd[1634]: time="2025-10-29T20:40:28.846238750Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Oct 29 20:40:28.847513 containerd[1634]: time="2025-10-29T20:40:28.847477899Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 20:40:28.850772 containerd[1634]: time="2025-10-29T20:40:28.850622056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 20:40:28.851260 containerd[1634]: time="2025-10-29T20:40:28.851210536Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 3.138480338s"
Oct 29 20:40:28.851260 containerd[1634]: time="2025-10-29T20:40:28.851250198Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Oct 29 20:40:28.851805 containerd[1634]: time="2025-10-29T20:40:28.851772830Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Oct 29 20:40:29.505559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2278184709.mount: Deactivated successfully.
Oct 29 20:40:31.092482 containerd[1634]: time="2025-10-29T20:40:31.092378352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 20:40:31.093210 containerd[1634]: time="2025-10-29T20:40:31.093165894Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Oct 29 20:40:31.094681 containerd[1634]: time="2025-10-29T20:40:31.094625031Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 20:40:31.097636 containerd[1634]: time="2025-10-29T20:40:31.097590252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 20:40:31.098640 containerd[1634]: time="2025-10-29T20:40:31.098593797Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\",
repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.246781919s" Oct 29 20:40:31.098640 containerd[1634]: time="2025-10-29T20:40:31.098631194Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Oct 29 20:40:31.099158 containerd[1634]: time="2025-10-29T20:40:31.099123029Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 29 20:40:31.925675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2374524728.mount: Deactivated successfully. Oct 29 20:40:31.934283 containerd[1634]: time="2025-10-29T20:40:31.934178785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 20:40:31.935167 containerd[1634]: time="2025-10-29T20:40:31.935087028Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 29 20:40:31.936328 containerd[1634]: time="2025-10-29T20:40:31.936261893Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 20:40:31.938697 containerd[1634]: time="2025-10-29T20:40:31.938641119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 20:40:31.939314 containerd[1634]: time="2025-10-29T20:40:31.939286922Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 840.135237ms" Oct 29 20:40:31.939397 containerd[1634]: time="2025-10-29T20:40:31.939316441Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 29 20:40:31.940076 containerd[1634]: time="2025-10-29T20:40:31.940025922Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Oct 29 20:40:32.777284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2819294387.mount: Deactivated successfully. Oct 29 20:40:36.027710 containerd[1634]: time="2025-10-29T20:40:36.027633445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 20:40:36.029519 containerd[1634]: time="2025-10-29T20:40:36.029441336Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Oct 29 20:40:36.031892 containerd[1634]: time="2025-10-29T20:40:36.031842271Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 20:40:36.035295 containerd[1634]: time="2025-10-29T20:40:36.035258713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 20:40:36.036375 containerd[1634]: time="2025-10-29T20:40:36.036318576Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest 
\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.096244081s" Oct 29 20:40:36.036375 containerd[1634]: time="2025-10-29T20:40:36.036369265Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Oct 29 20:40:38.504215 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 29 20:40:38.506288 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 20:40:38.519383 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 29 20:40:38.519532 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 29 20:40:38.519887 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 20:40:38.522419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 20:40:38.546924 systemd[1]: Reload requested from client PID 2364 ('systemctl') (unit session-10.scope)... Oct 29 20:40:38.546952 systemd[1]: Reloading... Oct 29 20:40:38.642484 zram_generator::config[2407]: No configuration found. Oct 29 20:40:39.200533 systemd[1]: Reloading finished in 653 ms. Oct 29 20:40:39.257099 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 29 20:40:39.257196 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 29 20:40:39.257496 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 20:40:39.257534 systemd[1]: kubelet.service: Consumed 162ms CPU time, 98.2M memory peak. Oct 29 20:40:39.259138 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 20:40:39.441254 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 29 20:40:39.445399 (kubelet)[2455]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 29 20:40:39.507104 kubelet[2455]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 20:40:39.507104 kubelet[2455]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 20:40:39.507104 kubelet[2455]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 20:40:39.507104 kubelet[2455]: I1029 20:40:39.507069 2455 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 20:40:40.381257 kubelet[2455]: I1029 20:40:40.381176 2455 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 29 20:40:40.381257 kubelet[2455]: I1029 20:40:40.381221 2455 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 20:40:40.381517 kubelet[2455]: I1029 20:40:40.381507 2455 server.go:956] "Client rotation is on, will bootstrap in background" Oct 29 20:40:40.409018 kubelet[2455]: E1029 20:40:40.408969 2455 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 29 20:40:40.409498 kubelet[2455]: I1029 20:40:40.409317 2455 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 20:40:40.421948 kubelet[2455]: I1029 20:40:40.421904 2455 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 29 20:40:40.427388 kubelet[2455]: I1029 20:40:40.427358 2455 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 29 20:40:40.427653 kubelet[2455]: I1029 20:40:40.427609 2455 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 20:40:40.427841 kubelet[2455]: I1029 20:40:40.427642 2455 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUMana
gerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 29 20:40:40.427961 kubelet[2455]: I1029 20:40:40.427845 2455 topology_manager.go:138] "Creating topology manager with none policy" Oct 29 20:40:40.427961 kubelet[2455]: I1029 20:40:40.427855 2455 container_manager_linux.go:303] "Creating device plugin manager" Oct 29 20:40:40.428014 kubelet[2455]: I1029 20:40:40.427998 2455 state_mem.go:36] "Initialized new in-memory state store" Oct 29 20:40:40.430930 kubelet[2455]: I1029 20:40:40.430901 2455 kubelet.go:480] "Attempting to sync node with API server" Oct 29 20:40:40.430930 kubelet[2455]: I1029 20:40:40.430924 2455 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 20:40:40.432422 kubelet[2455]: I1029 20:40:40.432391 2455 kubelet.go:386] "Adding apiserver pod source" Oct 29 20:40:40.432422 kubelet[2455]: I1029 20:40:40.432421 2455 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 20:40:40.438890 kubelet[2455]: I1029 20:40:40.438707 2455 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 29 20:40:40.438890 kubelet[2455]: E1029 20:40:40.438745 2455 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 29 20:40:40.438890 kubelet[2455]: E1029 20:40:40.438845 2455 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 29 20:40:40.439181 kubelet[2455]: I1029 20:40:40.439160 2455 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 29 20:40:40.439782 kubelet[2455]: W1029 20:40:40.439757 2455 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 29 20:40:40.442154 kubelet[2455]: I1029 20:40:40.442127 2455 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 29 20:40:40.442216 kubelet[2455]: I1029 20:40:40.442178 2455 server.go:1289] "Started kubelet" Oct 29 20:40:40.442479 kubelet[2455]: I1029 20:40:40.442399 2455 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 20:40:40.445467 kubelet[2455]: I1029 20:40:40.442995 2455 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 20:40:40.445467 kubelet[2455]: I1029 20:40:40.442996 2455 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 20:40:40.445467 kubelet[2455]: I1029 20:40:40.443981 2455 server.go:317] "Adding debug handlers to kubelet server" Oct 29 20:40:40.445467 kubelet[2455]: I1029 20:40:40.444043 2455 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 20:40:40.445467 kubelet[2455]: I1029 20:40:40.444744 2455 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 20:40:40.447276 kubelet[2455]: E1029 20:40:40.447244 2455 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 20:40:40.447352 kubelet[2455]: I1029 20:40:40.447289 2455 volume_manager.go:297] "Starting Kubelet 
Volume Manager" Oct 29 20:40:40.448123 kubelet[2455]: I1029 20:40:40.447413 2455 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 29 20:40:40.448123 kubelet[2455]: I1029 20:40:40.447511 2455 reconciler.go:26] "Reconciler: start to sync state" Oct 29 20:40:40.448123 kubelet[2455]: E1029 20:40:40.447813 2455 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 29 20:40:40.448123 kubelet[2455]: I1029 20:40:40.447914 2455 factory.go:223] Registration of the systemd container factory successfully Oct 29 20:40:40.448123 kubelet[2455]: I1029 20:40:40.447976 2455 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 20:40:40.448123 kubelet[2455]: E1029 20:40:40.448043 2455 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="200ms" Oct 29 20:40:40.449173 kubelet[2455]: E1029 20:40:40.446929 2455 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.139:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.139:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187310e19e567734 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-29 
20:40:40.44214866 +0000 UTC m=+0.992947293,LastTimestamp:2025-10-29 20:40:40.44214866 +0000 UTC m=+0.992947293,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 29 20:40:40.449313 kubelet[2455]: E1029 20:40:40.449294 2455 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 20:40:40.449841 kubelet[2455]: I1029 20:40:40.449824 2455 factory.go:223] Registration of the containerd container factory successfully Oct 29 20:40:40.463350 kubelet[2455]: I1029 20:40:40.463323 2455 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 20:40:40.463350 kubelet[2455]: I1029 20:40:40.463341 2455 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 20:40:40.463350 kubelet[2455]: I1029 20:40:40.463356 2455 state_mem.go:36] "Initialized new in-memory state store" Oct 29 20:40:40.467780 kubelet[2455]: I1029 20:40:40.467730 2455 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 29 20:40:40.469010 kubelet[2455]: I1029 20:40:40.468976 2455 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 29 20:40:40.469010 kubelet[2455]: I1029 20:40:40.469020 2455 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 29 20:40:40.469163 kubelet[2455]: I1029 20:40:40.469044 2455 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 29 20:40:40.469163 kubelet[2455]: I1029 20:40:40.469052 2455 kubelet.go:2436] "Starting kubelet main sync loop" Oct 29 20:40:40.469163 kubelet[2455]: E1029 20:40:40.469095 2455 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 20:40:40.470661 kubelet[2455]: E1029 20:40:40.470605 2455 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 29 20:40:40.548380 kubelet[2455]: E1029 20:40:40.548329 2455 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 20:40:40.569531 kubelet[2455]: E1029 20:40:40.569491 2455 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 29 20:40:40.648912 kubelet[2455]: E1029 20:40:40.648832 2455 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 20:40:40.649189 kubelet[2455]: E1029 20:40:40.649156 2455 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="400ms" Oct 29 20:40:40.749913 kubelet[2455]: E1029 20:40:40.749887 2455 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 20:40:40.770149 kubelet[2455]: E1029 20:40:40.770096 2455 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 29 20:40:40.850539 kubelet[2455]: E1029 20:40:40.850505 2455 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 20:40:40.951534 kubelet[2455]: E1029 20:40:40.951477 2455 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 20:40:41.050842 kubelet[2455]: E1029 20:40:41.050785 2455 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="800ms" Oct 29 20:40:41.051744 kubelet[2455]: E1029 20:40:41.051706 2455 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 20:40:41.127891 kubelet[2455]: I1029 20:40:41.127827 2455 policy_none.go:49] "None policy: Start" Oct 29 20:40:41.127891 kubelet[2455]: I1029 20:40:41.127875 2455 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 29 20:40:41.127891 kubelet[2455]: I1029 20:40:41.127904 2455 state_mem.go:35] "Initializing new in-memory state store" Oct 29 20:40:41.136045 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 29 20:40:41.151368 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 29 20:40:41.151790 kubelet[2455]: E1029 20:40:41.151762 2455 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 20:40:41.168596 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Oct 29 20:40:41.170156 kubelet[2455]: E1029 20:40:41.170116 2455 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 29 20:40:41.170287 kubelet[2455]: E1029 20:40:41.170248 2455 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 29 20:40:41.170422 kubelet[2455]: I1029 20:40:41.170400 2455 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 20:40:41.170488 kubelet[2455]: I1029 20:40:41.170420 2455 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 20:40:41.170712 kubelet[2455]: I1029 20:40:41.170694 2455 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 20:40:41.171595 kubelet[2455]: E1029 20:40:41.171571 2455 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 29 20:40:41.171646 kubelet[2455]: E1029 20:40:41.171621 2455 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 29 20:40:41.272806 kubelet[2455]: I1029 20:40:41.272671 2455 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 20:40:41.273021 kubelet[2455]: E1029 20:40:41.273000 2455 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Oct 29 20:40:41.409595 kubelet[2455]: E1029 20:40:41.409531 2455 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Oct 29 20:40:41.474544 kubelet[2455]: I1029 20:40:41.474492 2455 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 20:40:41.474982 kubelet[2455]: E1029 20:40:41.474934 2455 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Oct 29 20:40:41.638286 kubelet[2455]: E1029 20:40:41.638149 2455 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 29 20:40:41.653937 kubelet[2455]: E1029 20:40:41.653904 2455 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 29 20:40:41.851568 kubelet[2455]: E1029 20:40:41.851504 2455 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="1.6s" Oct 29 20:40:41.877130 kubelet[2455]: I1029 20:40:41.877067 2455 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 20:40:41.877517 kubelet[2455]: E1029 20:40:41.877444 2455 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Oct 29 20:40:41.976481 kubelet[2455]: E1029 20:40:41.976400 2455 
reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 29 20:40:41.983642 systemd[1]: Created slice kubepods-burstable-pod846f03da65f98531b75d819dce03006e.slice - libcontainer container kubepods-burstable-pod846f03da65f98531b75d819dce03006e.slice. Oct 29 20:40:42.033801 kubelet[2455]: E1029 20:40:42.033737 2455 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 20:40:42.035856 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Oct 29 20:40:42.056144 kubelet[2455]: E1029 20:40:42.056092 2455 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 20:40:42.056487 kubelet[2455]: I1029 20:40:42.056467 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/846f03da65f98531b75d819dce03006e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"846f03da65f98531b75d819dce03006e\") " pod="kube-system/kube-apiserver-localhost" Oct 29 20:40:42.056553 kubelet[2455]: I1029 20:40:42.056497 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 20:40:42.056553 kubelet[2455]: I1029 20:40:42.056529 2455 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 20:40:42.056553 kubelet[2455]: I1029 20:40:42.056545 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 29 20:40:42.056695 kubelet[2455]: I1029 20:40:42.056560 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/846f03da65f98531b75d819dce03006e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"846f03da65f98531b75d819dce03006e\") " pod="kube-system/kube-apiserver-localhost" Oct 29 20:40:42.056695 kubelet[2455]: I1029 20:40:42.056577 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/846f03da65f98531b75d819dce03006e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"846f03da65f98531b75d819dce03006e\") " pod="kube-system/kube-apiserver-localhost" Oct 29 20:40:42.056695 kubelet[2455]: I1029 20:40:42.056592 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 20:40:42.056695 kubelet[2455]: I1029 20:40:42.056606 2455 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 20:40:42.056695 kubelet[2455]: I1029 20:40:42.056622 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 20:40:42.059156 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. Oct 29 20:40:42.061107 kubelet[2455]: E1029 20:40:42.061085 2455 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 20:40:42.334903 kubelet[2455]: E1029 20:40:42.334758 2455 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:42.335622 containerd[1634]: time="2025-10-29T20:40:42.335573129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:846f03da65f98531b75d819dce03006e,Namespace:kube-system,Attempt:0,}" Oct 29 20:40:42.357314 kubelet[2455]: E1029 20:40:42.357278 2455 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:42.358110 containerd[1634]: time="2025-10-29T20:40:42.358046007Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Oct 29 20:40:42.362646 kubelet[2455]: E1029 20:40:42.362543 2455 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:42.363784 containerd[1634]: time="2025-10-29T20:40:42.363678254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Oct 29 20:40:42.402881 containerd[1634]: time="2025-10-29T20:40:42.402147354Z" level=info msg="connecting to shim f77769ecbacc02422b8e1467ed60da84ec3b6c601748874408c3872169f5aeb5" address="unix:///run/containerd/s/12c6e585e23e6d9c61c36cd574f735490a7511f8638f3079251d3b6bc9cdb928" namespace=k8s.io protocol=ttrpc version=3 Oct 29 20:40:42.403033 containerd[1634]: time="2025-10-29T20:40:42.402989768Z" level=info msg="connecting to shim 2a6bfbb5748d316fc490e993bcb31eec99a4c0e379eabf6783dd8bf37e493a9a" address="unix:///run/containerd/s/a69c1a4a42a09b7e17256cd09600a00cbb43db01eb0c477013db66be141423f7" namespace=k8s.io protocol=ttrpc version=3 Oct 29 20:40:42.411242 containerd[1634]: time="2025-10-29T20:40:42.411168926Z" level=info msg="connecting to shim 869a43a4bead49ac2864797285f92ab8a7163a3d773df6756835f424cba614f4" address="unix:///run/containerd/s/526433c5c6bbea8e333e6101c1a96f113e9501a37e93b8e073e9903e39bd7af3" namespace=k8s.io protocol=ttrpc version=3 Oct 29 20:40:42.447703 systemd[1]: Started cri-containerd-2a6bfbb5748d316fc490e993bcb31eec99a4c0e379eabf6783dd8bf37e493a9a.scope - libcontainer container 2a6bfbb5748d316fc490e993bcb31eec99a4c0e379eabf6783dd8bf37e493a9a. 
Oct 29 20:40:42.453363 systemd[1]: Started cri-containerd-f77769ecbacc02422b8e1467ed60da84ec3b6c601748874408c3872169f5aeb5.scope - libcontainer container f77769ecbacc02422b8e1467ed60da84ec3b6c601748874408c3872169f5aeb5. Oct 29 20:40:42.527000 kubelet[2455]: E1029 20:40:42.526928 2455 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 29 20:40:42.528761 systemd[1]: Started cri-containerd-869a43a4bead49ac2864797285f92ab8a7163a3d773df6756835f424cba614f4.scope - libcontainer container 869a43a4bead49ac2864797285f92ab8a7163a3d773df6756835f424cba614f4. Oct 29 20:40:42.672787 containerd[1634]: time="2025-10-29T20:40:42.672646723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a6bfbb5748d316fc490e993bcb31eec99a4c0e379eabf6783dd8bf37e493a9a\"" Oct 29 20:40:42.675432 containerd[1634]: time="2025-10-29T20:40:42.675394574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:846f03da65f98531b75d819dce03006e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f77769ecbacc02422b8e1467ed60da84ec3b6c601748874408c3872169f5aeb5\"" Oct 29 20:40:42.676387 kubelet[2455]: E1029 20:40:42.676354 2455 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:42.676739 kubelet[2455]: E1029 20:40:42.676427 2455 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 
20:40:42.677046 containerd[1634]: time="2025-10-29T20:40:42.676988563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"869a43a4bead49ac2864797285f92ab8a7163a3d773df6756835f424cba614f4\"" Oct 29 20:40:42.677773 kubelet[2455]: E1029 20:40:42.677752 2455 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:42.678829 kubelet[2455]: I1029 20:40:42.678795 2455 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 20:40:42.679350 kubelet[2455]: E1029 20:40:42.679199 2455 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Oct 29 20:40:42.682151 containerd[1634]: time="2025-10-29T20:40:42.682089882Z" level=info msg="CreateContainer within sandbox \"f77769ecbacc02422b8e1467ed60da84ec3b6c601748874408c3872169f5aeb5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 29 20:40:42.685873 containerd[1634]: time="2025-10-29T20:40:42.685830381Z" level=info msg="CreateContainer within sandbox \"2a6bfbb5748d316fc490e993bcb31eec99a4c0e379eabf6783dd8bf37e493a9a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 29 20:40:42.688194 containerd[1634]: time="2025-10-29T20:40:42.688159420Z" level=info msg="CreateContainer within sandbox \"869a43a4bead49ac2864797285f92ab8a7163a3d773df6756835f424cba614f4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 29 20:40:42.695815 containerd[1634]: time="2025-10-29T20:40:42.695778822Z" level=info msg="Container 4687eb6126648ec2c5857a8ca97aa536cd4f4ecd59a97b10e0dadd2eaaccc419: CDI devices from CRI Config.CDIDevices: []" Oct 29 20:40:42.699433 containerd[1634]: 
time="2025-10-29T20:40:42.699335677Z" level=info msg="Container ba13dfd7753a9ab71e1ba591c7f352c6058f35649891d12b94deba97d42b39e3: CDI devices from CRI Config.CDIDevices: []" Oct 29 20:40:42.703590 containerd[1634]: time="2025-10-29T20:40:42.703554962Z" level=info msg="CreateContainer within sandbox \"f77769ecbacc02422b8e1467ed60da84ec3b6c601748874408c3872169f5aeb5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4687eb6126648ec2c5857a8ca97aa536cd4f4ecd59a97b10e0dadd2eaaccc419\"" Oct 29 20:40:42.704208 containerd[1634]: time="2025-10-29T20:40:42.704171314Z" level=info msg="StartContainer for \"4687eb6126648ec2c5857a8ca97aa536cd4f4ecd59a97b10e0dadd2eaaccc419\"" Oct 29 20:40:42.704677 containerd[1634]: time="2025-10-29T20:40:42.704642596Z" level=info msg="Container 9676f4b46c330ee7d99fe5ff2ad205ee3bb5ae9702a2443b4bfbacd20763e488: CDI devices from CRI Config.CDIDevices: []" Oct 29 20:40:42.705593 containerd[1634]: time="2025-10-29T20:40:42.705554725Z" level=info msg="connecting to shim 4687eb6126648ec2c5857a8ca97aa536cd4f4ecd59a97b10e0dadd2eaaccc419" address="unix:///run/containerd/s/12c6e585e23e6d9c61c36cd574f735490a7511f8638f3079251d3b6bc9cdb928" protocol=ttrpc version=3 Oct 29 20:40:42.713731 containerd[1634]: time="2025-10-29T20:40:42.713616118Z" level=info msg="CreateContainer within sandbox \"2a6bfbb5748d316fc490e993bcb31eec99a4c0e379eabf6783dd8bf37e493a9a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ba13dfd7753a9ab71e1ba591c7f352c6058f35649891d12b94deba97d42b39e3\"" Oct 29 20:40:42.714229 containerd[1634]: time="2025-10-29T20:40:42.714194783Z" level=info msg="StartContainer for \"ba13dfd7753a9ab71e1ba591c7f352c6058f35649891d12b94deba97d42b39e3\"" Oct 29 20:40:42.715212 containerd[1634]: time="2025-10-29T20:40:42.715181868Z" level=info msg="connecting to shim ba13dfd7753a9ab71e1ba591c7f352c6058f35649891d12b94deba97d42b39e3" 
address="unix:///run/containerd/s/a69c1a4a42a09b7e17256cd09600a00cbb43db01eb0c477013db66be141423f7" protocol=ttrpc version=3 Oct 29 20:40:42.717136 containerd[1634]: time="2025-10-29T20:40:42.717097916Z" level=info msg="CreateContainer within sandbox \"869a43a4bead49ac2864797285f92ab8a7163a3d773df6756835f424cba614f4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9676f4b46c330ee7d99fe5ff2ad205ee3bb5ae9702a2443b4bfbacd20763e488\"" Oct 29 20:40:42.717731 containerd[1634]: time="2025-10-29T20:40:42.717687535Z" level=info msg="StartContainer for \"9676f4b46c330ee7d99fe5ff2ad205ee3bb5ae9702a2443b4bfbacd20763e488\"" Oct 29 20:40:42.718682 containerd[1634]: time="2025-10-29T20:40:42.718655310Z" level=info msg="connecting to shim 9676f4b46c330ee7d99fe5ff2ad205ee3bb5ae9702a2443b4bfbacd20763e488" address="unix:///run/containerd/s/526433c5c6bbea8e333e6101c1a96f113e9501a37e93b8e073e9903e39bd7af3" protocol=ttrpc version=3 Oct 29 20:40:42.733625 systemd[1]: Started cri-containerd-4687eb6126648ec2c5857a8ca97aa536cd4f4ecd59a97b10e0dadd2eaaccc419.scope - libcontainer container 4687eb6126648ec2c5857a8ca97aa536cd4f4ecd59a97b10e0dadd2eaaccc419. Oct 29 20:40:42.737908 systemd[1]: Started cri-containerd-ba13dfd7753a9ab71e1ba591c7f352c6058f35649891d12b94deba97d42b39e3.scope - libcontainer container ba13dfd7753a9ab71e1ba591c7f352c6058f35649891d12b94deba97d42b39e3. Oct 29 20:40:42.741582 systemd[1]: Started cri-containerd-9676f4b46c330ee7d99fe5ff2ad205ee3bb5ae9702a2443b4bfbacd20763e488.scope - libcontainer container 9676f4b46c330ee7d99fe5ff2ad205ee3bb5ae9702a2443b4bfbacd20763e488. 
Oct 29 20:40:42.826612 kubelet[2455]: E1029 20:40:42.826513 2455 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.139:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.139:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187310e19e567734 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-29 20:40:40.44214866 +0000 UTC m=+0.992947293,LastTimestamp:2025-10-29 20:40:40.44214866 +0000 UTC m=+0.992947293,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 29 20:40:42.849926 containerd[1634]: time="2025-10-29T20:40:42.849868645Z" level=info msg="StartContainer for \"4687eb6126648ec2c5857a8ca97aa536cd4f4ecd59a97b10e0dadd2eaaccc419\" returns successfully" Oct 29 20:40:42.854472 containerd[1634]: time="2025-10-29T20:40:42.852849101Z" level=info msg="StartContainer for \"9676f4b46c330ee7d99fe5ff2ad205ee3bb5ae9702a2443b4bfbacd20763e488\" returns successfully" Oct 29 20:40:42.854472 containerd[1634]: time="2025-10-29T20:40:42.853142170Z" level=info msg="StartContainer for \"ba13dfd7753a9ab71e1ba591c7f352c6058f35649891d12b94deba97d42b39e3\" returns successfully" Oct 29 20:40:43.536494 kubelet[2455]: E1029 20:40:43.536347 2455 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 20:40:43.537070 kubelet[2455]: E1029 20:40:43.537011 2455 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:43.538042 kubelet[2455]: E1029 
20:40:43.537977 2455 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 20:40:43.538368 kubelet[2455]: E1029 20:40:43.538277 2455 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:43.541570 kubelet[2455]: E1029 20:40:43.541531 2455 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 20:40:43.541687 kubelet[2455]: E1029 20:40:43.541655 2455 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:44.433521 kubelet[2455]: I1029 20:40:44.432896 2455 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 20:40:44.526335 kubelet[2455]: E1029 20:40:44.526275 2455 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 29 20:40:44.545046 kubelet[2455]: E1029 20:40:44.544900 2455 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 20:40:44.545191 kubelet[2455]: E1029 20:40:44.545154 2455 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 20:40:44.547475 kubelet[2455]: E1029 20:40:44.545321 2455 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:44.547475 kubelet[2455]: E1029 20:40:44.545403 2455 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:44.615193 kubelet[2455]: I1029 20:40:44.615147 2455 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 29 20:40:44.648274 kubelet[2455]: I1029 20:40:44.648226 2455 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 20:40:44.704035 kubelet[2455]: E1029 20:40:44.703980 2455 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 29 20:40:44.704035 kubelet[2455]: I1029 20:40:44.704016 2455 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 20:40:44.705370 kubelet[2455]: E1029 20:40:44.705342 2455 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 29 20:40:44.705370 kubelet[2455]: I1029 20:40:44.705358 2455 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 20:40:44.706922 kubelet[2455]: E1029 20:40:44.706881 2455 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 29 20:40:45.435360 kubelet[2455]: I1029 20:40:45.435313 2455 apiserver.go:52] "Watching apiserver" Oct 29 20:40:45.448542 kubelet[2455]: I1029 20:40:45.448492 2455 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 29 20:40:45.544364 kubelet[2455]: I1029 20:40:45.544319 2455 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" 
Oct 29 20:40:45.550357 kubelet[2455]: E1029 20:40:45.550327 2455 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:46.429845 systemd[1]: Reload requested from client PID 2748 ('systemctl') (unit session-10.scope)... Oct 29 20:40:46.429861 systemd[1]: Reloading... Oct 29 20:40:46.517484 zram_generator::config[2793]: No configuration found. Oct 29 20:40:46.546566 kubelet[2455]: E1029 20:40:46.546509 2455 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:46.753643 systemd[1]: Reloading finished in 323 ms. Oct 29 20:40:46.781965 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 20:40:46.804971 systemd[1]: kubelet.service: Deactivated successfully. Oct 29 20:40:46.805230 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 20:40:46.805284 systemd[1]: kubelet.service: Consumed 1.580s CPU time, 133.6M memory peak. Oct 29 20:40:46.807681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 20:40:47.025321 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 20:40:47.035820 (kubelet)[2837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 29 20:40:47.084076 kubelet[2837]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 20:40:47.084076 kubelet[2837]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Oct 29 20:40:47.084076 kubelet[2837]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 20:40:47.084526 kubelet[2837]: I1029 20:40:47.084098 2837 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 20:40:47.090282 kubelet[2837]: I1029 20:40:47.090252 2837 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 29 20:40:47.090282 kubelet[2837]: I1029 20:40:47.090270 2837 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 20:40:47.090479 kubelet[2837]: I1029 20:40:47.090463 2837 server.go:956] "Client rotation is on, will bootstrap in background" Oct 29 20:40:47.091566 kubelet[2837]: I1029 20:40:47.091542 2837 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 29 20:40:47.093502 kubelet[2837]: I1029 20:40:47.093468 2837 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 20:40:47.098874 kubelet[2837]: I1029 20:40:47.098852 2837 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 29 20:40:47.103706 kubelet[2837]: I1029 20:40:47.103664 2837 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 29 20:40:47.103908 kubelet[2837]: I1029 20:40:47.103870 2837 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 20:40:47.104069 kubelet[2837]: I1029 20:40:47.103896 2837 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 29 20:40:47.104069 kubelet[2837]: I1029 20:40:47.104067 2837 topology_manager.go:138] "Creating topology manager with none policy" Oct 29 20:40:47.104170 
kubelet[2837]: I1029 20:40:47.104076 2837 container_manager_linux.go:303] "Creating device plugin manager" Oct 29 20:40:47.104170 kubelet[2837]: I1029 20:40:47.104124 2837 state_mem.go:36] "Initialized new in-memory state store" Oct 29 20:40:47.104286 kubelet[2837]: I1029 20:40:47.104262 2837 kubelet.go:480] "Attempting to sync node with API server" Oct 29 20:40:47.104286 kubelet[2837]: I1029 20:40:47.104278 2837 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 20:40:47.104423 kubelet[2837]: I1029 20:40:47.104304 2837 kubelet.go:386] "Adding apiserver pod source" Oct 29 20:40:47.104473 kubelet[2837]: I1029 20:40:47.104439 2837 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 20:40:47.105524 kubelet[2837]: I1029 20:40:47.105486 2837 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 29 20:40:47.106078 kubelet[2837]: I1029 20:40:47.105947 2837 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 29 20:40:47.110130 kubelet[2837]: I1029 20:40:47.109650 2837 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 29 20:40:47.110130 kubelet[2837]: I1029 20:40:47.109799 2837 server.go:1289] "Started kubelet" Oct 29 20:40:47.110223 kubelet[2837]: I1029 20:40:47.110102 2837 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 20:40:47.110831 kubelet[2837]: I1029 20:40:47.110762 2837 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 20:40:47.111807 kubelet[2837]: I1029 20:40:47.111767 2837 server.go:317] "Adding debug handlers to kubelet server" Oct 29 20:40:47.112024 kubelet[2837]: I1029 20:40:47.111989 2837 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 20:40:47.114279 
kubelet[2837]: I1029 20:40:47.114115 2837 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 20:40:47.116882 kubelet[2837]: I1029 20:40:47.116746 2837 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 29 20:40:47.117012 kubelet[2837]: I1029 20:40:47.116992 2837 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 20:40:47.119437 kubelet[2837]: I1029 20:40:47.119008 2837 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 29 20:40:47.119437 kubelet[2837]: I1029 20:40:47.119165 2837 reconciler.go:26] "Reconciler: start to sync state" Oct 29 20:40:47.120617 kubelet[2837]: I1029 20:40:47.120146 2837 factory.go:223] Registration of the systemd container factory successfully Oct 29 20:40:47.120617 kubelet[2837]: I1029 20:40:47.120262 2837 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 20:40:47.122099 kubelet[2837]: E1029 20:40:47.122076 2837 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 20:40:47.122414 kubelet[2837]: I1029 20:40:47.122247 2837 factory.go:223] Registration of the containerd container factory successfully Oct 29 20:40:47.131902 kubelet[2837]: I1029 20:40:47.131820 2837 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 29 20:40:47.133182 kubelet[2837]: I1029 20:40:47.133157 2837 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Oct 29 20:40:47.133182 kubelet[2837]: I1029 20:40:47.133180 2837 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 29 20:40:47.133244 kubelet[2837]: I1029 20:40:47.133200 2837 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 29 20:40:47.133244 kubelet[2837]: I1029 20:40:47.133208 2837 kubelet.go:2436] "Starting kubelet main sync loop" Oct 29 20:40:47.133297 kubelet[2837]: E1029 20:40:47.133246 2837 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 20:40:47.155647 kubelet[2837]: I1029 20:40:47.155610 2837 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 20:40:47.155647 kubelet[2837]: I1029 20:40:47.155626 2837 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 20:40:47.155647 kubelet[2837]: I1029 20:40:47.155646 2837 state_mem.go:36] "Initialized new in-memory state store" Oct 29 20:40:47.155810 kubelet[2837]: I1029 20:40:47.155767 2837 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 29 20:40:47.155810 kubelet[2837]: I1029 20:40:47.155780 2837 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 29 20:40:47.155810 kubelet[2837]: I1029 20:40:47.155796 2837 policy_none.go:49] "None policy: Start" Oct 29 20:40:47.155810 kubelet[2837]: I1029 20:40:47.155807 2837 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 29 20:40:47.155923 kubelet[2837]: I1029 20:40:47.155817 2837 state_mem.go:35] "Initializing new in-memory state store" Oct 29 20:40:47.155923 kubelet[2837]: I1029 20:40:47.155893 2837 state_mem.go:75] "Updated machine memory state" Oct 29 20:40:47.159812 kubelet[2837]: E1029 20:40:47.159776 2837 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 29 20:40:47.159952 kubelet[2837]: I1029 
20:40:47.159935 2837 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 20:40:47.159978 kubelet[2837]: I1029 20:40:47.159950 2837 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 20:40:47.160132 kubelet[2837]: I1029 20:40:47.160101 2837 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 20:40:47.161365 kubelet[2837]: E1029 20:40:47.161336 2837 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 29 20:40:47.234428 kubelet[2837]: I1029 20:40:47.234375 2837 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 20:40:47.234428 kubelet[2837]: I1029 20:40:47.234383 2837 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 20:40:47.234428 kubelet[2837]: I1029 20:40:47.234422 2837 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 20:40:47.241478 kubelet[2837]: E1029 20:40:47.241429 2837 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 29 20:40:47.267644 kubelet[2837]: I1029 20:40:47.267593 2837 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 20:40:47.273845 kubelet[2837]: I1029 20:40:47.273816 2837 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 29 20:40:47.273934 kubelet[2837]: I1029 20:40:47.273883 2837 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 29 20:40:47.420681 kubelet[2837]: I1029 20:40:47.420557 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/846f03da65f98531b75d819dce03006e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"846f03da65f98531b75d819dce03006e\") " pod="kube-system/kube-apiserver-localhost" Oct 29 20:40:47.420681 kubelet[2837]: I1029 20:40:47.420596 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 20:40:47.420681 kubelet[2837]: I1029 20:40:47.420617 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 20:40:47.420681 kubelet[2837]: I1029 20:40:47.420634 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 29 20:40:47.420681 kubelet[2837]: I1029 20:40:47.420648 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/846f03da65f98531b75d819dce03006e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"846f03da65f98531b75d819dce03006e\") " pod="kube-system/kube-apiserver-localhost" Oct 29 20:40:47.420926 kubelet[2837]: I1029 20:40:47.420676 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/846f03da65f98531b75d819dce03006e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"846f03da65f98531b75d819dce03006e\") " pod="kube-system/kube-apiserver-localhost" Oct 29 20:40:47.420926 kubelet[2837]: I1029 20:40:47.420689 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 20:40:47.420926 kubelet[2837]: I1029 20:40:47.420702 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 20:40:47.420926 kubelet[2837]: I1029 20:40:47.420757 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 20:40:47.540144 kubelet[2837]: E1029 20:40:47.540092 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:47.542361 kubelet[2837]: E1029 20:40:47.542205 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:47.542361 kubelet[2837]: E1029 20:40:47.542290 2837 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:48.143750 kubelet[2837]: I1029 20:40:48.143707 2837 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 20:40:48.196677 update_engine[1608]: I20251029 20:40:48.196568 1608 update_attempter.cc:509] Updating boot flags... Oct 29 20:40:48.240659 kubelet[2837]: E1029 20:40:48.240620 2837 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 29 20:40:48.240797 kubelet[2837]: E1029 20:40:48.240769 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:48.449667 kubelet[2837]: I1029 20:40:48.449116 2837 apiserver.go:52] "Watching apiserver" Oct 29 20:40:48.449667 kubelet[2837]: I1029 20:40:48.449582 2837 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 20:40:48.455034 kubelet[2837]: E1029 20:40:48.454996 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:48.520139 kubelet[2837]: I1029 20:40:48.520080 2837 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 29 20:40:48.593472 kubelet[2837]: E1029 20:40:48.590887 2837 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 29 20:40:48.595463 kubelet[2837]: E1029 20:40:48.594376 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 
29 20:40:48.645521 kubelet[2837]: I1029 20:40:48.643284 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.643261287 podStartE2EDuration="1.643261287s" podCreationTimestamp="2025-10-29 20:40:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 20:40:48.633570749 +0000 UTC m=+1.592321628" watchObservedRunningTime="2025-10-29 20:40:48.643261287 +0000 UTC m=+1.602011995" Oct 29 20:40:48.645521 kubelet[2837]: I1029 20:40:48.643425 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.643422042 podStartE2EDuration="1.643422042s" podCreationTimestamp="2025-10-29 20:40:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 20:40:48.642672955 +0000 UTC m=+1.601423663" watchObservedRunningTime="2025-10-29 20:40:48.643422042 +0000 UTC m=+1.602172740" Oct 29 20:40:48.678478 kubelet[2837]: I1029 20:40:48.676683 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.676662695 podStartE2EDuration="3.676662695s" podCreationTimestamp="2025-10-29 20:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 20:40:48.664778772 +0000 UTC m=+1.623529480" watchObservedRunningTime="2025-10-29 20:40:48.676662695 +0000 UTC m=+1.635413393" Oct 29 20:40:49.145647 kubelet[2837]: E1029 20:40:49.145577 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:49.146303 kubelet[2837]: E1029 20:40:49.146226 2837 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:50.014431 kubelet[2837]: E1029 20:40:50.014360 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:50.146651 kubelet[2837]: E1029 20:40:50.146607 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:51.275715 kubelet[2837]: E1029 20:40:51.275643 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:52.421487 kubelet[2837]: E1029 20:40:52.421262 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:53.151703 kubelet[2837]: E1029 20:40:53.151662 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:53.583088 kubelet[2837]: I1029 20:40:53.583042 2837 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 29 20:40:53.583689 containerd[1634]: time="2025-10-29T20:40:53.583629705Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 29 20:40:53.583936 kubelet[2837]: I1029 20:40:53.583923 2837 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 29 20:40:54.153201 kubelet[2837]: E1029 20:40:54.153162 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:54.246575 systemd[1]: Created slice kubepods-besteffort-pod9531675a_faaf_4f08_b620_3a906c40782d.slice - libcontainer container kubepods-besteffort-pod9531675a_faaf_4f08_b620_3a906c40782d.slice. Oct 29 20:40:54.283768 kubelet[2837]: I1029 20:40:54.283711 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9531675a-faaf-4f08-b620-3a906c40782d-xtables-lock\") pod \"kube-proxy-m4zqj\" (UID: \"9531675a-faaf-4f08-b620-3a906c40782d\") " pod="kube-system/kube-proxy-m4zqj" Oct 29 20:40:54.283768 kubelet[2837]: I1029 20:40:54.283751 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9531675a-faaf-4f08-b620-3a906c40782d-lib-modules\") pod \"kube-proxy-m4zqj\" (UID: \"9531675a-faaf-4f08-b620-3a906c40782d\") " pod="kube-system/kube-proxy-m4zqj" Oct 29 20:40:54.283984 kubelet[2837]: I1029 20:40:54.283788 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w59p\" (UniqueName: \"kubernetes.io/projected/9531675a-faaf-4f08-b620-3a906c40782d-kube-api-access-4w59p\") pod \"kube-proxy-m4zqj\" (UID: \"9531675a-faaf-4f08-b620-3a906c40782d\") " pod="kube-system/kube-proxy-m4zqj" Oct 29 20:40:54.283984 kubelet[2837]: I1029 20:40:54.283839 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/9531675a-faaf-4f08-b620-3a906c40782d-kube-proxy\") pod \"kube-proxy-m4zqj\" (UID: \"9531675a-faaf-4f08-b620-3a906c40782d\") " pod="kube-system/kube-proxy-m4zqj" Oct 29 20:40:54.560171 kubelet[2837]: E1029 20:40:54.560102 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:54.560833 containerd[1634]: time="2025-10-29T20:40:54.560789819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m4zqj,Uid:9531675a-faaf-4f08-b620-3a906c40782d,Namespace:kube-system,Attempt:0,}" Oct 29 20:40:54.604665 containerd[1634]: time="2025-10-29T20:40:54.604021822Z" level=info msg="connecting to shim 708fa779c3408530d00f1002f39ce2e1db7e44d89a120c79d5247065b77a9301" address="unix:///run/containerd/s/158180fd07062563e0aa466cfebd6ceb67dc82d217f91564a8e8acf864732349" namespace=k8s.io protocol=ttrpc version=3 Oct 29 20:40:54.678592 systemd[1]: Started cri-containerd-708fa779c3408530d00f1002f39ce2e1db7e44d89a120c79d5247065b77a9301.scope - libcontainer container 708fa779c3408530d00f1002f39ce2e1db7e44d89a120c79d5247065b77a9301. 
Oct 29 20:40:54.710317 containerd[1634]: time="2025-10-29T20:40:54.709944388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m4zqj,Uid:9531675a-faaf-4f08-b620-3a906c40782d,Namespace:kube-system,Attempt:0,} returns sandbox id \"708fa779c3408530d00f1002f39ce2e1db7e44d89a120c79d5247065b77a9301\"" Oct 29 20:40:54.711687 kubelet[2837]: E1029 20:40:54.711644 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:54.724593 containerd[1634]: time="2025-10-29T20:40:54.724538112Z" level=info msg="CreateContainer within sandbox \"708fa779c3408530d00f1002f39ce2e1db7e44d89a120c79d5247065b77a9301\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 29 20:40:54.744540 containerd[1634]: time="2025-10-29T20:40:54.743925170Z" level=info msg="Container 0904fa3af6eaef835e31128d4d495a9315e9e8a9e72c2026344682129ebf5e73: CDI devices from CRI Config.CDIDevices: []" Oct 29 20:40:54.748573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount494018385.mount: Deactivated successfully. Oct 29 20:40:54.757053 systemd[1]: Created slice kubepods-besteffort-pod63f4fd38_e2ba_4cc9_b00f_1777713a402a.slice - libcontainer container kubepods-besteffort-pod63f4fd38_e2ba_4cc9_b00f_1777713a402a.slice. 
Oct 29 20:40:54.759683 containerd[1634]: time="2025-10-29T20:40:54.759331702Z" level=info msg="CreateContainer within sandbox \"708fa779c3408530d00f1002f39ce2e1db7e44d89a120c79d5247065b77a9301\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0904fa3af6eaef835e31128d4d495a9315e9e8a9e72c2026344682129ebf5e73\"" Oct 29 20:40:54.761139 containerd[1634]: time="2025-10-29T20:40:54.760147074Z" level=info msg="StartContainer for \"0904fa3af6eaef835e31128d4d495a9315e9e8a9e72c2026344682129ebf5e73\"" Oct 29 20:40:54.761841 containerd[1634]: time="2025-10-29T20:40:54.761791835Z" level=info msg="connecting to shim 0904fa3af6eaef835e31128d4d495a9315e9e8a9e72c2026344682129ebf5e73" address="unix:///run/containerd/s/158180fd07062563e0aa466cfebd6ceb67dc82d217f91564a8e8acf864732349" protocol=ttrpc version=3 Oct 29 20:40:54.785651 systemd[1]: Started cri-containerd-0904fa3af6eaef835e31128d4d495a9315e9e8a9e72c2026344682129ebf5e73.scope - libcontainer container 0904fa3af6eaef835e31128d4d495a9315e9e8a9e72c2026344682129ebf5e73. 
Oct 29 20:40:54.788714 kubelet[2837]: I1029 20:40:54.788648 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjn6z\" (UniqueName: \"kubernetes.io/projected/63f4fd38-e2ba-4cc9-b00f-1777713a402a-kube-api-access-kjn6z\") pod \"tigera-operator-7dcd859c48-bvstg\" (UID: \"63f4fd38-e2ba-4cc9-b00f-1777713a402a\") " pod="tigera-operator/tigera-operator-7dcd859c48-bvstg" Oct 29 20:40:54.788714 kubelet[2837]: I1029 20:40:54.788697 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/63f4fd38-e2ba-4cc9-b00f-1777713a402a-var-lib-calico\") pod \"tigera-operator-7dcd859c48-bvstg\" (UID: \"63f4fd38-e2ba-4cc9-b00f-1777713a402a\") " pod="tigera-operator/tigera-operator-7dcd859c48-bvstg" Oct 29 20:40:54.837070 containerd[1634]: time="2025-10-29T20:40:54.836941056Z" level=info msg="StartContainer for \"0904fa3af6eaef835e31128d4d495a9315e9e8a9e72c2026344682129ebf5e73\" returns successfully" Oct 29 20:40:55.064584 containerd[1634]: time="2025-10-29T20:40:55.064535076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-bvstg,Uid:63f4fd38-e2ba-4cc9-b00f-1777713a402a,Namespace:tigera-operator,Attempt:0,}" Oct 29 20:40:55.085774 containerd[1634]: time="2025-10-29T20:40:55.085712739Z" level=info msg="connecting to shim 6f90ca5edaf1c517bd751e22afc23f76f28105e0802518f46c293fc0c116c78c" address="unix:///run/containerd/s/0963b649017ccd97a2612cd869f3b4a79e05a5553e9c35b6a5c092c11650d301" namespace=k8s.io protocol=ttrpc version=3 Oct 29 20:40:55.118677 systemd[1]: Started cri-containerd-6f90ca5edaf1c517bd751e22afc23f76f28105e0802518f46c293fc0c116c78c.scope - libcontainer container 6f90ca5edaf1c517bd751e22afc23f76f28105e0802518f46c293fc0c116c78c. 
Oct 29 20:40:55.159255 kubelet[2837]: E1029 20:40:55.159198 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:40:55.167950 containerd[1634]: time="2025-10-29T20:40:55.167877566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-bvstg,Uid:63f4fd38-e2ba-4cc9-b00f-1777713a402a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6f90ca5edaf1c517bd751e22afc23f76f28105e0802518f46c293fc0c116c78c\"" Oct 29 20:40:55.170826 containerd[1634]: time="2025-10-29T20:40:55.170669915Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 29 20:40:56.720888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2613436279.mount: Deactivated successfully. Oct 29 20:40:57.655310 containerd[1634]: time="2025-10-29T20:40:57.655223367Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 20:40:57.655989 containerd[1634]: time="2025-10-29T20:40:57.655924030Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 29 20:40:57.657689 containerd[1634]: time="2025-10-29T20:40:57.657652554Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 20:40:57.659842 containerd[1634]: time="2025-10-29T20:40:57.659810004Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 20:40:57.660518 containerd[1634]: time="2025-10-29T20:40:57.660470890Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id 
\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.489740334s" Oct 29 20:40:57.660559 containerd[1634]: time="2025-10-29T20:40:57.660517993Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 29 20:40:57.666317 containerd[1634]: time="2025-10-29T20:40:57.666260653Z" level=info msg="CreateContainer within sandbox \"6f90ca5edaf1c517bd751e22afc23f76f28105e0802518f46c293fc0c116c78c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 29 20:40:57.675684 containerd[1634]: time="2025-10-29T20:40:57.675621409Z" level=info msg="Container 1e941a80d42c5ea7cbdbfc07855e0a333d5ab9acb648ba3a82b025c524b62531: CDI devices from CRI Config.CDIDevices: []" Oct 29 20:40:57.699280 containerd[1634]: time="2025-10-29T20:40:57.699230925Z" level=info msg="CreateContainer within sandbox \"6f90ca5edaf1c517bd751e22afc23f76f28105e0802518f46c293fc0c116c78c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1e941a80d42c5ea7cbdbfc07855e0a333d5ab9acb648ba3a82b025c524b62531\"" Oct 29 20:40:57.699973 containerd[1634]: time="2025-10-29T20:40:57.699927510Z" level=info msg="StartContainer for \"1e941a80d42c5ea7cbdbfc07855e0a333d5ab9acb648ba3a82b025c524b62531\"" Oct 29 20:40:57.701031 containerd[1634]: time="2025-10-29T20:40:57.701007964Z" level=info msg="connecting to shim 1e941a80d42c5ea7cbdbfc07855e0a333d5ab9acb648ba3a82b025c524b62531" address="unix:///run/containerd/s/0963b649017ccd97a2612cd869f3b4a79e05a5553e9c35b6a5c092c11650d301" protocol=ttrpc version=3 Oct 29 20:40:57.737754 systemd[1]: Started cri-containerd-1e941a80d42c5ea7cbdbfc07855e0a333d5ab9acb648ba3a82b025c524b62531.scope - libcontainer container 
1e941a80d42c5ea7cbdbfc07855e0a333d5ab9acb648ba3a82b025c524b62531. Oct 29 20:40:57.905874 containerd[1634]: time="2025-10-29T20:40:57.905705532Z" level=info msg="StartContainer for \"1e941a80d42c5ea7cbdbfc07855e0a333d5ab9acb648ba3a82b025c524b62531\" returns successfully" Oct 29 20:40:58.178836 kubelet[2837]: I1029 20:40:58.178667 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m4zqj" podStartSLOduration=4.178648273 podStartE2EDuration="4.178648273s" podCreationTimestamp="2025-10-29 20:40:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 20:40:55.173643313 +0000 UTC m=+8.132394021" watchObservedRunningTime="2025-10-29 20:40:58.178648273 +0000 UTC m=+11.137398981" Oct 29 20:40:58.178836 kubelet[2837]: I1029 20:40:58.178805 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-bvstg" podStartSLOduration=1.6868250040000001 podStartE2EDuration="4.178799301s" podCreationTimestamp="2025-10-29 20:40:54 +0000 UTC" firstStartedPulling="2025-10-29 20:40:55.169558009 +0000 UTC m=+8.128308717" lastFinishedPulling="2025-10-29 20:40:57.661532306 +0000 UTC m=+10.620283014" observedRunningTime="2025-10-29 20:40:58.177981219 +0000 UTC m=+11.136731937" watchObservedRunningTime="2025-10-29 20:40:58.178799301 +0000 UTC m=+11.137549999" Oct 29 20:41:00.020486 kubelet[2837]: E1029 20:41:00.019760 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:01.280764 kubelet[2837]: E1029 20:41:01.280730 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:04.021137 sudo[1869]: pam_unix(sudo:session): session 
closed for user root Oct 29 20:41:04.022533 sshd[1868]: Connection closed by 10.0.0.1 port 43828 Oct 29 20:41:04.022897 sshd-session[1864]: pam_unix(sshd:session): session closed for user core Oct 29 20:41:04.028553 systemd[1]: sshd@8-10.0.0.139:22-10.0.0.1:43828.service: Deactivated successfully. Oct 29 20:41:04.031210 systemd[1]: session-10.scope: Deactivated successfully. Oct 29 20:41:04.031747 systemd[1]: session-10.scope: Consumed 4.831s CPU time, 223M memory peak. Oct 29 20:41:04.034846 systemd-logind[1604]: Session 10 logged out. Waiting for processes to exit. Oct 29 20:41:04.036408 systemd-logind[1604]: Removed session 10. Oct 29 20:41:08.335590 systemd[1]: Created slice kubepods-besteffort-pod84bec190_1ac5_4cfb_acce_a0b11f41cdbf.slice - libcontainer container kubepods-besteffort-pod84bec190_1ac5_4cfb_acce_a0b11f41cdbf.slice. Oct 29 20:41:08.373161 kubelet[2837]: I1029 20:41:08.373106 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/84bec190-1ac5-4cfb-acce-a0b11f41cdbf-typha-certs\") pod \"calico-typha-6d5c75fcc9-nvfgl\" (UID: \"84bec190-1ac5-4cfb-acce-a0b11f41cdbf\") " pod="calico-system/calico-typha-6d5c75fcc9-nvfgl" Oct 29 20:41:08.373161 kubelet[2837]: I1029 20:41:08.373154 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84bec190-1ac5-4cfb-acce-a0b11f41cdbf-tigera-ca-bundle\") pod \"calico-typha-6d5c75fcc9-nvfgl\" (UID: \"84bec190-1ac5-4cfb-acce-a0b11f41cdbf\") " pod="calico-system/calico-typha-6d5c75fcc9-nvfgl" Oct 29 20:41:08.373161 kubelet[2837]: I1029 20:41:08.373174 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8kwv\" (UniqueName: \"kubernetes.io/projected/84bec190-1ac5-4cfb-acce-a0b11f41cdbf-kube-api-access-m8kwv\") pod \"calico-typha-6d5c75fcc9-nvfgl\" (UID: 
\"84bec190-1ac5-4cfb-acce-a0b11f41cdbf\") " pod="calico-system/calico-typha-6d5c75fcc9-nvfgl" Oct 29 20:41:08.522052 systemd[1]: Created slice kubepods-besteffort-podffdb68ed_e039_40bb_8e95_939f0308efd6.slice - libcontainer container kubepods-besteffort-podffdb68ed_e039_40bb_8e95_939f0308efd6.slice. Oct 29 20:41:08.574379 kubelet[2837]: I1029 20:41:08.574331 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ffdb68ed-e039-40bb-8e95-939f0308efd6-policysync\") pod \"calico-node-fnl8t\" (UID: \"ffdb68ed-e039-40bb-8e95-939f0308efd6\") " pod="calico-system/calico-node-fnl8t" Oct 29 20:41:08.574379 kubelet[2837]: I1029 20:41:08.574365 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ffdb68ed-e039-40bb-8e95-939f0308efd6-var-lib-calico\") pod \"calico-node-fnl8t\" (UID: \"ffdb68ed-e039-40bb-8e95-939f0308efd6\") " pod="calico-system/calico-node-fnl8t" Oct 29 20:41:08.574379 kubelet[2837]: I1029 20:41:08.574382 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ffdb68ed-e039-40bb-8e95-939f0308efd6-cni-bin-dir\") pod \"calico-node-fnl8t\" (UID: \"ffdb68ed-e039-40bb-8e95-939f0308efd6\") " pod="calico-system/calico-node-fnl8t" Oct 29 20:41:08.574594 kubelet[2837]: I1029 20:41:08.574396 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ffdb68ed-e039-40bb-8e95-939f0308efd6-flexvol-driver-host\") pod \"calico-node-fnl8t\" (UID: \"ffdb68ed-e039-40bb-8e95-939f0308efd6\") " pod="calico-system/calico-node-fnl8t" Oct 29 20:41:08.574594 kubelet[2837]: I1029 20:41:08.574439 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ffdb68ed-e039-40bb-8e95-939f0308efd6-cni-net-dir\") pod \"calico-node-fnl8t\" (UID: \"ffdb68ed-e039-40bb-8e95-939f0308efd6\") " pod="calico-system/calico-node-fnl8t" Oct 29 20:41:08.574594 kubelet[2837]: I1029 20:41:08.574478 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ffdb68ed-e039-40bb-8e95-939f0308efd6-node-certs\") pod \"calico-node-fnl8t\" (UID: \"ffdb68ed-e039-40bb-8e95-939f0308efd6\") " pod="calico-system/calico-node-fnl8t" Oct 29 20:41:08.574594 kubelet[2837]: I1029 20:41:08.574494 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffdb68ed-e039-40bb-8e95-939f0308efd6-lib-modules\") pod \"calico-node-fnl8t\" (UID: \"ffdb68ed-e039-40bb-8e95-939f0308efd6\") " pod="calico-system/calico-node-fnl8t" Oct 29 20:41:08.574594 kubelet[2837]: I1029 20:41:08.574509 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh7mf\" (UniqueName: \"kubernetes.io/projected/ffdb68ed-e039-40bb-8e95-939f0308efd6-kube-api-access-zh7mf\") pod \"calico-node-fnl8t\" (UID: \"ffdb68ed-e039-40bb-8e95-939f0308efd6\") " pod="calico-system/calico-node-fnl8t" Oct 29 20:41:08.574711 kubelet[2837]: I1029 20:41:08.574527 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffdb68ed-e039-40bb-8e95-939f0308efd6-tigera-ca-bundle\") pod \"calico-node-fnl8t\" (UID: \"ffdb68ed-e039-40bb-8e95-939f0308efd6\") " pod="calico-system/calico-node-fnl8t" Oct 29 20:41:08.574711 kubelet[2837]: I1029 20:41:08.574540 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/ffdb68ed-e039-40bb-8e95-939f0308efd6-var-run-calico\") pod \"calico-node-fnl8t\" (UID: \"ffdb68ed-e039-40bb-8e95-939f0308efd6\") " pod="calico-system/calico-node-fnl8t" Oct 29 20:41:08.574711 kubelet[2837]: I1029 20:41:08.574554 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffdb68ed-e039-40bb-8e95-939f0308efd6-xtables-lock\") pod \"calico-node-fnl8t\" (UID: \"ffdb68ed-e039-40bb-8e95-939f0308efd6\") " pod="calico-system/calico-node-fnl8t" Oct 29 20:41:08.574711 kubelet[2837]: I1029 20:41:08.574568 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ffdb68ed-e039-40bb-8e95-939f0308efd6-cni-log-dir\") pod \"calico-node-fnl8t\" (UID: \"ffdb68ed-e039-40bb-8e95-939f0308efd6\") " pod="calico-system/calico-node-fnl8t" Oct 29 20:41:08.639492 kubelet[2837]: E1029 20:41:08.639351 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:08.640110 containerd[1634]: time="2025-10-29T20:41:08.640058281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d5c75fcc9-nvfgl,Uid:84bec190-1ac5-4cfb-acce-a0b11f41cdbf,Namespace:calico-system,Attempt:0,}" Oct 29 20:41:08.676173 kubelet[2837]: E1029 20:41:08.676075 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:08.676173 kubelet[2837]: W1029 20:41:08.676103 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:08.723187 kubelet[2837]: E1029 20:41:08.723143 2837 plugins.go:703] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 20:41:08.723461 kubelet[2837]: E1029 20:41:08.723432 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:08.723501 kubelet[2837]: W1029 20:41:08.723462 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:08.723501 kubelet[2837]: E1029 20:41:08.723478 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 20:41:08.723739 kubelet[2837]: E1029 20:41:08.723709 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:08.723739 kubelet[2837]: W1029 20:41:08.723721 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:08.723739 kubelet[2837]: E1029 20:41:08.723730 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 20:41:08.723958 kubelet[2837]: E1029 20:41:08.723941 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:08.723958 kubelet[2837]: W1029 20:41:08.723952 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:08.724030 kubelet[2837]: E1029 20:41:08.723960 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 20:41:08.724156 kubelet[2837]: E1029 20:41:08.724136 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:08.724156 kubelet[2837]: W1029 20:41:08.724147 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:08.724156 kubelet[2837]: E1029 20:41:08.724154 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Oct 29 20:41:08.724354 kubelet[2837]: E1029 20:41:08.724338 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 20:41:08.724354 kubelet[2837]: W1029 20:41:08.724348 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 20:41:08.724354 kubelet[2837]: E1029 20:41:08.724355 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 20:41:08.825613 kubelet[2837]: E1029 20:41:08.825574 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 20:41:08.826063 containerd[1634]: time="2025-10-29T20:41:08.826019979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fnl8t,Uid:ffdb68ed-e039-40bb-8e95-939f0308efd6,Namespace:calico-system,Attempt:0,}"
Oct 29 20:41:09.606815 containerd[1634]: time="2025-10-29T20:41:09.606756662Z" level=info msg="connecting to shim 4a6a5b4783d4555c23a34344d093dc2d767783c8e9858ab1cdfe511f4d256bf6" address="unix:///run/containerd/s/0c2b25211ef0366959498d30749cddec7a048da0820f3ef6fab9c56efdf7ed3f" namespace=k8s.io protocol=ttrpc version=3
Oct 29 20:41:09.644617 systemd[1]: Started cri-containerd-4a6a5b4783d4555c23a34344d093dc2d767783c8e9858ab1cdfe511f4d256bf6.scope - libcontainer container 4a6a5b4783d4555c23a34344d093dc2d767783c8e9858ab1cdfe511f4d256bf6.
Oct 29 20:41:09.662243 kubelet[2837]: E1029 20:41:09.661360 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hphmg" podUID="9b38e512-971c-4a46-9cd5-db7a73bc8089"
Oct 29 20:41:09.707032 containerd[1634]: time="2025-10-29T20:41:09.706823691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d5c75fcc9-nvfgl,Uid:84bec190-1ac5-4cfb-acce-a0b11f41cdbf,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a6a5b4783d4555c23a34344d093dc2d767783c8e9858ab1cdfe511f4d256bf6\""
Oct 29 20:41:09.708778 kubelet[2837]: E1029 20:41:09.708389 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 20:41:09.712544 containerd[1634]: time="2025-10-29T20:41:09.709463261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Oct 29 20:41:09.728593 containerd[1634]: time="2025-10-29T20:41:09.728529519Z" level=info msg="connecting to shim c166c96962dac8367919e200c70a478da20e0d51aa7d3dcaa14aa0dc0cbebee1" address="unix:///run/containerd/s/10a543e40149436cb441c8bae19650b31e45ff49712b6047250fe020f82a2b63" namespace=k8s.io protocol=ttrpc version=3
Oct 29 20:41:09.746878 kubelet[2837]: E1029 20:41:09.746841 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 20:41:09.746878 kubelet[2837]: W1029 20:41:09.746865 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 20:41:09.747058 kubelet[2837]: E1029 20:41:09.746909 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 20:41:09.770030 systemd[1]: Started cri-containerd-c166c96962dac8367919e200c70a478da20e0d51aa7d3dcaa14aa0dc0cbebee1.scope - libcontainer container c166c96962dac8367919e200c70a478da20e0d51aa7d3dcaa14aa0dc0cbebee1.
Oct 29 20:41:09.785305 kubelet[2837]: I1029 20:41:09.785162 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9b38e512-971c-4a46-9cd5-db7a73bc8089-socket-dir\") pod \"csi-node-driver-hphmg\" (UID: \"9b38e512-971c-4a46-9cd5-db7a73bc8089\") " pod="calico-system/csi-node-driver-hphmg"
Oct 29 20:41:09.786751 kubelet[2837]: I1029 20:41:09.786069 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9b38e512-971c-4a46-9cd5-db7a73bc8089-varrun\") pod \"csi-node-driver-hphmg\" (UID: \"9b38e512-971c-4a46-9cd5-db7a73bc8089\") " pod="calico-system/csi-node-driver-hphmg"
Oct 29 20:41:09.787146 kubelet[2837]: I1029 20:41:09.786998 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klbvt\" (UniqueName: \"kubernetes.io/projected/9b38e512-971c-4a46-9cd5-db7a73bc8089-kube-api-access-klbvt\") pod \"csi-node-driver-hphmg\" (UID: \"9b38e512-971c-4a46-9cd5-db7a73bc8089\") " pod="calico-system/csi-node-driver-hphmg"
Oct 29 20:41:09.788411 kubelet[2837]: E1029 20:41:09.788399 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 20:41:09.788594 kubelet[2837]: W1029 20:41:09.788479 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 20:41:09.788594 kubelet[2837]: E1029 20:41:09.788491 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 20:41:09.788594 kubelet[2837]: I1029 20:41:09.788514 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b38e512-971c-4a46-9cd5-db7a73bc8089-kubelet-dir\") pod \"csi-node-driver-hphmg\" (UID: \"9b38e512-971c-4a46-9cd5-db7a73bc8089\") " pod="calico-system/csi-node-driver-hphmg" Oct 29 20:41:09.788924 kubelet[2837]: E1029 20:41:09.788801 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.788924 kubelet[2837]: W1029 20:41:09.788812 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.788924 kubelet[2837]: E1029 20:41:09.788822 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 20:41:09.789030 kubelet[2837]: I1029 20:41:09.788876 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9b38e512-971c-4a46-9cd5-db7a73bc8089-registration-dir\") pod \"csi-node-driver-hphmg\" (UID: \"9b38e512-971c-4a46-9cd5-db7a73bc8089\") " pod="calico-system/csi-node-driver-hphmg" Oct 29 20:41:09.789178 kubelet[2837]: E1029 20:41:09.789127 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.789178 kubelet[2837]: W1029 20:41:09.789137 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.789178 kubelet[2837]: E1029 20:41:09.789146 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 20:41:09.789559 kubelet[2837]: E1029 20:41:09.789502 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.789559 kubelet[2837]: W1029 20:41:09.789512 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.789559 kubelet[2837]: E1029 20:41:09.789521 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 20:41:09.789935 kubelet[2837]: E1029 20:41:09.789836 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.789935 kubelet[2837]: W1029 20:41:09.789846 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.789935 kubelet[2837]: E1029 20:41:09.789854 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 20:41:09.790085 kubelet[2837]: E1029 20:41:09.790074 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.790153 kubelet[2837]: W1029 20:41:09.790144 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.790305 kubelet[2837]: E1029 20:41:09.790200 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 20:41:09.790406 kubelet[2837]: E1029 20:41:09.790395 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.790611 kubelet[2837]: W1029 20:41:09.790506 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.790611 kubelet[2837]: E1029 20:41:09.790520 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 20:41:09.790803 kubelet[2837]: E1029 20:41:09.790792 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.790893 kubelet[2837]: W1029 20:41:09.790844 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.790893 kubelet[2837]: E1029 20:41:09.790875 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 20:41:09.804943 containerd[1634]: time="2025-10-29T20:41:09.804814936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fnl8t,Uid:ffdb68ed-e039-40bb-8e95-939f0308efd6,Namespace:calico-system,Attempt:0,} returns sandbox id \"c166c96962dac8367919e200c70a478da20e0d51aa7d3dcaa14aa0dc0cbebee1\"" Oct 29 20:41:09.805769 kubelet[2837]: E1029 20:41:09.805738 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:09.890488 kubelet[2837]: E1029 20:41:09.890326 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.890488 kubelet[2837]: W1029 20:41:09.890351 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.890488 kubelet[2837]: E1029 20:41:09.890373 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 20:41:09.890695 kubelet[2837]: E1029 20:41:09.890579 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.890695 kubelet[2837]: W1029 20:41:09.890589 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.890695 kubelet[2837]: E1029 20:41:09.890596 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 20:41:09.890867 kubelet[2837]: E1029 20:41:09.890846 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.890867 kubelet[2837]: W1029 20:41:09.890864 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.890942 kubelet[2837]: E1029 20:41:09.890876 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 20:41:09.891216 kubelet[2837]: E1029 20:41:09.891090 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.891216 kubelet[2837]: W1029 20:41:09.891102 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.891216 kubelet[2837]: E1029 20:41:09.891111 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 20:41:09.891420 kubelet[2837]: E1029 20:41:09.891403 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.891420 kubelet[2837]: W1029 20:41:09.891414 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.891420 kubelet[2837]: E1029 20:41:09.891423 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 20:41:09.891810 kubelet[2837]: E1029 20:41:09.891773 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.891810 kubelet[2837]: W1029 20:41:09.891793 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.891810 kubelet[2837]: E1029 20:41:09.891806 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 20:41:09.892181 kubelet[2837]: E1029 20:41:09.892164 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.892181 kubelet[2837]: W1029 20:41:09.892176 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.892181 kubelet[2837]: E1029 20:41:09.892185 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 20:41:09.892506 kubelet[2837]: E1029 20:41:09.892488 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.892506 kubelet[2837]: W1029 20:41:09.892503 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.892648 kubelet[2837]: E1029 20:41:09.892533 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 20:41:09.892812 kubelet[2837]: E1029 20:41:09.892772 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.892812 kubelet[2837]: W1029 20:41:09.892786 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.892812 kubelet[2837]: E1029 20:41:09.892795 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 20:41:09.893100 kubelet[2837]: E1029 20:41:09.892982 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.893100 kubelet[2837]: W1029 20:41:09.892992 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.893100 kubelet[2837]: E1029 20:41:09.893001 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 20:41:09.893227 kubelet[2837]: E1029 20:41:09.893219 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.893260 kubelet[2837]: W1029 20:41:09.893228 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.893260 kubelet[2837]: E1029 20:41:09.893239 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 20:41:09.893438 kubelet[2837]: E1029 20:41:09.893422 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.893438 kubelet[2837]: W1029 20:41:09.893435 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.893521 kubelet[2837]: E1029 20:41:09.893445 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 20:41:09.893706 kubelet[2837]: E1029 20:41:09.893681 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.893706 kubelet[2837]: W1029 20:41:09.893695 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.893754 kubelet[2837]: E1029 20:41:09.893705 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 20:41:09.893895 kubelet[2837]: E1029 20:41:09.893877 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.893895 kubelet[2837]: W1029 20:41:09.893889 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.893937 kubelet[2837]: E1029 20:41:09.893898 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 20:41:09.894091 kubelet[2837]: E1029 20:41:09.894077 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.894091 kubelet[2837]: W1029 20:41:09.894088 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.894142 kubelet[2837]: E1029 20:41:09.894097 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 20:41:09.894286 kubelet[2837]: E1029 20:41:09.894272 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.894286 kubelet[2837]: W1029 20:41:09.894283 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.894334 kubelet[2837]: E1029 20:41:09.894292 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 20:41:09.894517 kubelet[2837]: E1029 20:41:09.894504 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.894542 kubelet[2837]: W1029 20:41:09.894517 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.894542 kubelet[2837]: E1029 20:41:09.894527 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 20:41:09.894691 kubelet[2837]: E1029 20:41:09.894679 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.894691 kubelet[2837]: W1029 20:41:09.894688 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.894734 kubelet[2837]: E1029 20:41:09.894696 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 20:41:09.894860 kubelet[2837]: E1029 20:41:09.894848 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.894881 kubelet[2837]: W1029 20:41:09.894858 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.894881 kubelet[2837]: E1029 20:41:09.894867 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 20:41:09.895066 kubelet[2837]: E1029 20:41:09.895055 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.895089 kubelet[2837]: W1029 20:41:09.895065 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.895089 kubelet[2837]: E1029 20:41:09.895074 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 20:41:09.895260 kubelet[2837]: E1029 20:41:09.895247 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.895260 kubelet[2837]: W1029 20:41:09.895257 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.895314 kubelet[2837]: E1029 20:41:09.895267 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 20:41:09.895476 kubelet[2837]: E1029 20:41:09.895464 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.895476 kubelet[2837]: W1029 20:41:09.895474 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.895517 kubelet[2837]: E1029 20:41:09.895483 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 20:41:09.895653 kubelet[2837]: E1029 20:41:09.895638 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.895653 kubelet[2837]: W1029 20:41:09.895649 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.895717 kubelet[2837]: E1029 20:41:09.895656 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 20:41:09.895836 kubelet[2837]: E1029 20:41:09.895823 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.895836 kubelet[2837]: W1029 20:41:09.895833 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.895887 kubelet[2837]: E1029 20:41:09.895841 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 20:41:09.896036 kubelet[2837]: E1029 20:41:09.896023 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.896036 kubelet[2837]: W1029 20:41:09.896034 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.896092 kubelet[2837]: E1029 20:41:09.896041 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 20:41:09.905212 kubelet[2837]: E1029 20:41:09.905188 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:09.905212 kubelet[2837]: W1029 20:41:09.905207 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:09.905319 kubelet[2837]: E1029 20:41:09.905225 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 20:41:11.136133 kubelet[2837]: E1029 20:41:11.136035 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hphmg" podUID="9b38e512-971c-4a46-9cd5-db7a73bc8089" Oct 29 20:41:11.322493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2540834020.mount: Deactivated successfully. 
Oct 29 20:41:11.783412 containerd[1634]: time="2025-10-29T20:41:11.783345573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 20:41:11.784146 containerd[1634]: time="2025-10-29T20:41:11.784106316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 29 20:41:11.785473 containerd[1634]: time="2025-10-29T20:41:11.785412254Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 20:41:11.787397 containerd[1634]: time="2025-10-29T20:41:11.787346789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 20:41:11.787977 containerd[1634]: time="2025-10-29T20:41:11.787945248Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.078442019s" Oct 29 20:41:11.788041 containerd[1634]: time="2025-10-29T20:41:11.787980526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 29 20:41:11.788895 containerd[1634]: time="2025-10-29T20:41:11.788859287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 29 20:41:11.807554 containerd[1634]: time="2025-10-29T20:41:11.807498469Z" level=info msg="CreateContainer within sandbox \"4a6a5b4783d4555c23a34344d093dc2d767783c8e9858ab1cdfe511f4d256bf6\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 29 20:41:11.816578 containerd[1634]: time="2025-10-29T20:41:11.816525884Z" level=info msg="Container d488d85018bf7be24e69e5bf74431cc8aa081ce9dec2998b726408f6d4e64624: CDI devices from CRI Config.CDIDevices: []" Oct 29 20:41:11.823562 containerd[1634]: time="2025-10-29T20:41:11.823503713Z" level=info msg="CreateContainer within sandbox \"4a6a5b4783d4555c23a34344d093dc2d767783c8e9858ab1cdfe511f4d256bf6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d488d85018bf7be24e69e5bf74431cc8aa081ce9dec2998b726408f6d4e64624\"" Oct 29 20:41:11.824376 containerd[1634]: time="2025-10-29T20:41:11.824331515Z" level=info msg="StartContainer for \"d488d85018bf7be24e69e5bf74431cc8aa081ce9dec2998b726408f6d4e64624\"" Oct 29 20:41:11.825814 containerd[1634]: time="2025-10-29T20:41:11.825767094Z" level=info msg="connecting to shim d488d85018bf7be24e69e5bf74431cc8aa081ce9dec2998b726408f6d4e64624" address="unix:///run/containerd/s/0c2b25211ef0366959498d30749cddec7a048da0820f3ef6fab9c56efdf7ed3f" protocol=ttrpc version=3 Oct 29 20:41:11.856644 systemd[1]: Started cri-containerd-d488d85018bf7be24e69e5bf74431cc8aa081ce9dec2998b726408f6d4e64624.scope - libcontainer container d488d85018bf7be24e69e5bf74431cc8aa081ce9dec2998b726408f6d4e64624. 
Oct 29 20:41:12.084017 containerd[1634]: time="2025-10-29T20:41:12.083892686Z" level=info msg="StartContainer for \"d488d85018bf7be24e69e5bf74431cc8aa081ce9dec2998b726408f6d4e64624\" returns successfully" Oct 29 20:41:12.206684 kubelet[2837]: E1029 20:41:12.205988 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:12.221224 kubelet[2837]: I1029 20:41:12.221140 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d5c75fcc9-nvfgl" podStartSLOduration=2.141633345 podStartE2EDuration="4.22112077s" podCreationTimestamp="2025-10-29 20:41:08 +0000 UTC" firstStartedPulling="2025-10-29 20:41:09.709233254 +0000 UTC m=+22.667983962" lastFinishedPulling="2025-10-29 20:41:11.788720669 +0000 UTC m=+24.747471387" observedRunningTime="2025-10-29 20:41:12.220707351 +0000 UTC m=+25.179458059" watchObservedRunningTime="2025-10-29 20:41:12.22112077 +0000 UTC m=+25.179871478" Oct 29 20:41:12.273625 kubelet[2837]: E1029 20:41:12.273565 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 20:41:12.273625 kubelet[2837]: W1029 20:41:12.273615 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 20:41:12.273804 kubelet[2837]: E1029 20:41:12.273660 2837 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 20:41:13.059794 containerd[1634]: time="2025-10-29T20:41:13.059733526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 20:41:13.060563 containerd[1634]: time="2025-10-29T20:41:13.060495188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 29 20:41:13.061693 containerd[1634]: time="2025-10-29T20:41:13.061661021Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 20:41:13.063627 containerd[1634]: time="2025-10-29T20:41:13.063598195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 20:41:13.064139 containerd[1634]: time="2025-10-29T20:41:13.064080968Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.275183598s" Oct 29 20:41:13.064139 containerd[1634]: time="2025-10-29T20:41:13.064126295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 29 20:41:13.068204 containerd[1634]: time="2025-10-29T20:41:13.068133199Z" level=info msg="CreateContainer within sandbox \"c166c96962dac8367919e200c70a478da20e0d51aa7d3dcaa14aa0dc0cbebee1\" for container
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 29 20:41:13.077020 containerd[1634]: time="2025-10-29T20:41:13.076963926Z" level=info msg="Container 458f89678baa63186cf813de9132005b1f73831254affeeb7d6cb8c9125b160d: CDI devices from CRI Config.CDIDevices: []" Oct 29 20:41:13.085202 containerd[1634]: time="2025-10-29T20:41:13.085154545Z" level=info msg="CreateContainer within sandbox \"c166c96962dac8367919e200c70a478da20e0d51aa7d3dcaa14aa0dc0cbebee1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"458f89678baa63186cf813de9132005b1f73831254affeeb7d6cb8c9125b160d\"" Oct 29 20:41:13.085630 containerd[1634]: time="2025-10-29T20:41:13.085603263Z" level=info msg="StartContainer for \"458f89678baa63186cf813de9132005b1f73831254affeeb7d6cb8c9125b160d\"" Oct 29 20:41:13.086917 containerd[1634]: time="2025-10-29T20:41:13.086890820Z" level=info msg="connecting to shim 458f89678baa63186cf813de9132005b1f73831254affeeb7d6cb8c9125b160d" address="unix:///run/containerd/s/10a543e40149436cb441c8bae19650b31e45ff49712b6047250fe020f82a2b63" protocol=ttrpc version=3 Oct 29 20:41:13.121630 systemd[1]: Started cri-containerd-458f89678baa63186cf813de9132005b1f73831254affeeb7d6cb8c9125b160d.scope - libcontainer container 458f89678baa63186cf813de9132005b1f73831254affeeb7d6cb8c9125b160d. 
Oct 29 20:41:13.134311 kubelet[2837]: E1029 20:41:13.134246 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hphmg" podUID="9b38e512-971c-4a46-9cd5-db7a73bc8089" Oct 29 20:41:13.172429 containerd[1634]: time="2025-10-29T20:41:13.172348288Z" level=info msg="StartContainer for \"458f89678baa63186cf813de9132005b1f73831254affeeb7d6cb8c9125b160d\" returns successfully" Oct 29 20:41:13.180824 systemd[1]: cri-containerd-458f89678baa63186cf813de9132005b1f73831254affeeb7d6cb8c9125b160d.scope: Deactivated successfully. Oct 29 20:41:13.182650 containerd[1634]: time="2025-10-29T20:41:13.182620300Z" level=info msg="TaskExit event in podsandbox handler container_id:\"458f89678baa63186cf813de9132005b1f73831254affeeb7d6cb8c9125b160d\" id:\"458f89678baa63186cf813de9132005b1f73831254affeeb7d6cb8c9125b160d\" pid:3572 exited_at:{seconds:1761770473 nanos:182092270}" Oct 29 20:41:13.182765 containerd[1634]: time="2025-10-29T20:41:13.182724261Z" level=info msg="received exit event container_id:\"458f89678baa63186cf813de9132005b1f73831254affeeb7d6cb8c9125b160d\" id:\"458f89678baa63186cf813de9132005b1f73831254affeeb7d6cb8c9125b160d\" pid:3572 exited_at:{seconds:1761770473 nanos:182092270}" Oct 29 20:41:13.207920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-458f89678baa63186cf813de9132005b1f73831254affeeb7d6cb8c9125b160d-rootfs.mount: Deactivated successfully. 
Oct 29 20:41:13.214141 kubelet[2837]: I1029 20:41:13.213263 2837 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 20:41:13.219422 kubelet[2837]: E1029 20:41:13.216401 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:13.219422 kubelet[2837]: E1029 20:41:13.216405 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:14.217374 kubelet[2837]: E1029 20:41:14.217339 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:14.218155 containerd[1634]: time="2025-10-29T20:41:14.218120375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 29 20:41:15.134713 kubelet[2837]: E1029 20:41:15.134662 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hphmg" podUID="9b38e512-971c-4a46-9cd5-db7a73bc8089" Oct 29 20:41:17.133949 kubelet[2837]: E1029 20:41:17.133883 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hphmg" podUID="9b38e512-971c-4a46-9cd5-db7a73bc8089" Oct 29 20:41:17.369520 containerd[1634]: time="2025-10-29T20:41:17.369420213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 20:41:17.399701 
containerd[1634]: time="2025-10-29T20:41:17.399617492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 29 20:41:17.418464 containerd[1634]: time="2025-10-29T20:41:17.418384863Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 20:41:17.440279 containerd[1634]: time="2025-10-29T20:41:17.440226639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 20:41:17.440960 containerd[1634]: time="2025-10-29T20:41:17.440919484Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.222761557s" Oct 29 20:41:17.440960 containerd[1634]: time="2025-10-29T20:41:17.440947298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 29 20:41:17.543956 containerd[1634]: time="2025-10-29T20:41:17.543899118Z" level=info msg="CreateContainer within sandbox \"c166c96962dac8367919e200c70a478da20e0d51aa7d3dcaa14aa0dc0cbebee1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 29 20:41:17.812009 containerd[1634]: time="2025-10-29T20:41:17.811915138Z" level=info msg="Container 44ea367c322c9388b2256c25892ac3832d0c921729c59d242c1dcee13ed97d64: CDI devices from CRI Config.CDIDevices: []" Oct 29 20:41:17.823599 containerd[1634]: time="2025-10-29T20:41:17.823529642Z" level=info msg="CreateContainer within sandbox 
\"c166c96962dac8367919e200c70a478da20e0d51aa7d3dcaa14aa0dc0cbebee1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"44ea367c322c9388b2256c25892ac3832d0c921729c59d242c1dcee13ed97d64\"" Oct 29 20:41:17.824230 containerd[1634]: time="2025-10-29T20:41:17.824192150Z" level=info msg="StartContainer for \"44ea367c322c9388b2256c25892ac3832d0c921729c59d242c1dcee13ed97d64\"" Oct 29 20:41:17.826140 containerd[1634]: time="2025-10-29T20:41:17.826104505Z" level=info msg="connecting to shim 44ea367c322c9388b2256c25892ac3832d0c921729c59d242c1dcee13ed97d64" address="unix:///run/containerd/s/10a543e40149436cb441c8bae19650b31e45ff49712b6047250fe020f82a2b63" protocol=ttrpc version=3 Oct 29 20:41:17.857661 systemd[1]: Started cri-containerd-44ea367c322c9388b2256c25892ac3832d0c921729c59d242c1dcee13ed97d64.scope - libcontainer container 44ea367c322c9388b2256c25892ac3832d0c921729c59d242c1dcee13ed97d64. Oct 29 20:41:17.905092 containerd[1634]: time="2025-10-29T20:41:17.905044727Z" level=info msg="StartContainer for \"44ea367c322c9388b2256c25892ac3832d0c921729c59d242c1dcee13ed97d64\" returns successfully" Oct 29 20:41:18.227980 kubelet[2837]: E1029 20:41:18.227917 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:19.134531 kubelet[2837]: E1029 20:41:19.134430 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hphmg" podUID="9b38e512-971c-4a46-9cd5-db7a73bc8089" Oct 29 20:41:19.229667 kubelet[2837]: E1029 20:41:19.229617 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 
20:41:19.922377 systemd[1]: cri-containerd-44ea367c322c9388b2256c25892ac3832d0c921729c59d242c1dcee13ed97d64.scope: Deactivated successfully. Oct 29 20:41:19.922813 systemd[1]: cri-containerd-44ea367c322c9388b2256c25892ac3832d0c921729c59d242c1dcee13ed97d64.scope: Consumed 668ms CPU time, 179.8M memory peak, 2.9M read from disk, 171.3M written to disk. Oct 29 20:41:19.923714 containerd[1634]: time="2025-10-29T20:41:19.923683290Z" level=info msg="received exit event container_id:\"44ea367c322c9388b2256c25892ac3832d0c921729c59d242c1dcee13ed97d64\" id:\"44ea367c322c9388b2256c25892ac3832d0c921729c59d242c1dcee13ed97d64\" pid:3631 exited_at:{seconds:1761770479 nanos:923287207}" Oct 29 20:41:19.924326 containerd[1634]: time="2025-10-29T20:41:19.924290719Z" level=info msg="TaskExit event in podsandbox handler container_id:\"44ea367c322c9388b2256c25892ac3832d0c921729c59d242c1dcee13ed97d64\" id:\"44ea367c322c9388b2256c25892ac3832d0c921729c59d242c1dcee13ed97d64\" pid:3631 exited_at:{seconds:1761770479 nanos:923287207}" Oct 29 20:41:19.950404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44ea367c322c9388b2256c25892ac3832d0c921729c59d242c1dcee13ed97d64-rootfs.mount: Deactivated successfully. Oct 29 20:41:19.990777 kubelet[2837]: I1029 20:41:19.990723 2837 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 29 20:41:20.202662 systemd[1]: Created slice kubepods-besteffort-pod6845a39b_fd17_44b6_8fca_090d62827c76.slice - libcontainer container kubepods-besteffort-pod6845a39b_fd17_44b6_8fca_090d62827c76.slice. Oct 29 20:41:20.212628 systemd[1]: Created slice kubepods-burstable-pode4d0c216_1667_4042_a5ea_053e211a3e91.slice - libcontainer container kubepods-burstable-pode4d0c216_1667_4042_a5ea_053e211a3e91.slice. Oct 29 20:41:20.222210 systemd[1]: Created slice kubepods-besteffort-poddeaef94c_fe21_40af_a068_2605dc363c66.slice - libcontainer container kubepods-besteffort-poddeaef94c_fe21_40af_a068_2605dc363c66.slice. 
Oct 29 20:41:20.229195 systemd[1]: Created slice kubepods-besteffort-podd50f9e75_9f55_47f4_8506_1ec4da0d8068.slice - libcontainer container kubepods-besteffort-podd50f9e75_9f55_47f4_8506_1ec4da0d8068.slice. Oct 29 20:41:20.238109 systemd[1]: Created slice kubepods-burstable-pod7ad68844_034b_4e62_b7d0_eacfdfd4f54d.slice - libcontainer container kubepods-burstable-pod7ad68844_034b_4e62_b7d0_eacfdfd4f54d.slice. Oct 29 20:41:20.240009 kubelet[2837]: E1029 20:41:20.239933 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:20.241603 containerd[1634]: time="2025-10-29T20:41:20.241508726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 29 20:41:20.247093 systemd[1]: Created slice kubepods-besteffort-pod4e297194_f7fa_4d00_92d1_15f2c61b7789.slice - libcontainer container kubepods-besteffort-pod4e297194_f7fa_4d00_92d1_15f2c61b7789.slice. Oct 29 20:41:20.253890 systemd[1]: Created slice kubepods-besteffort-podcbb5467a_783c_44e1_8c0d_4dca5f58eee2.slice - libcontainer container kubepods-besteffort-podcbb5467a_783c_44e1_8c0d_4dca5f58eee2.slice. 
Oct 29 20:41:20.262790 kubelet[2837]: I1029 20:41:20.262743 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d50f9e75-9f55-47f4-8506-1ec4da0d8068-config\") pod \"goldmane-666569f655-bqbs9\" (UID: \"d50f9e75-9f55-47f4-8506-1ec4da0d8068\") " pod="calico-system/goldmane-666569f655-bqbs9" Oct 29 20:41:20.262948 kubelet[2837]: I1029 20:41:20.262796 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w4qz\" (UniqueName: \"kubernetes.io/projected/d50f9e75-9f55-47f4-8506-1ec4da0d8068-kube-api-access-7w4qz\") pod \"goldmane-666569f655-bqbs9\" (UID: \"d50f9e75-9f55-47f4-8506-1ec4da0d8068\") " pod="calico-system/goldmane-666569f655-bqbs9" Oct 29 20:41:20.262948 kubelet[2837]: I1029 20:41:20.262820 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbb5467a-783c-44e1-8c0d-4dca5f58eee2-tigera-ca-bundle\") pod \"calico-kube-controllers-854f888fbc-vrqqc\" (UID: \"cbb5467a-783c-44e1-8c0d-4dca5f58eee2\") " pod="calico-system/calico-kube-controllers-854f888fbc-vrqqc" Oct 29 20:41:20.262948 kubelet[2837]: I1029 20:41:20.262836 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/deaef94c-fe21-40af-a068-2605dc363c66-calico-apiserver-certs\") pod \"calico-apiserver-8b8476c65-tgp87\" (UID: \"deaef94c-fe21-40af-a068-2605dc363c66\") " pod="calico-apiserver/calico-apiserver-8b8476c65-tgp87" Oct 29 20:41:20.262948 kubelet[2837]: I1029 20:41:20.262852 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgd5p\" (UniqueName: \"kubernetes.io/projected/6845a39b-fd17-44b6-8fca-090d62827c76-kube-api-access-lgd5p\") pod \"whisker-575f945464-kjzp9\" (UID: 
\"6845a39b-fd17-44b6-8fca-090d62827c76\") " pod="calico-system/whisker-575f945464-kjzp9" Oct 29 20:41:20.262948 kubelet[2837]: I1029 20:41:20.262868 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4d0c216-1667-4042-a5ea-053e211a3e91-config-volume\") pod \"coredns-674b8bbfcf-wmmxw\" (UID: \"e4d0c216-1667-4042-a5ea-053e211a3e91\") " pod="kube-system/coredns-674b8bbfcf-wmmxw" Oct 29 20:41:20.263077 kubelet[2837]: I1029 20:41:20.262888 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6845a39b-fd17-44b6-8fca-090d62827c76-whisker-ca-bundle\") pod \"whisker-575f945464-kjzp9\" (UID: \"6845a39b-fd17-44b6-8fca-090d62827c76\") " pod="calico-system/whisker-575f945464-kjzp9" Oct 29 20:41:20.263077 kubelet[2837]: I1029 20:41:20.262906 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46lh6\" (UniqueName: \"kubernetes.io/projected/7ad68844-034b-4e62-b7d0-eacfdfd4f54d-kube-api-access-46lh6\") pod \"coredns-674b8bbfcf-t99tj\" (UID: \"7ad68844-034b-4e62-b7d0-eacfdfd4f54d\") " pod="kube-system/coredns-674b8bbfcf-t99tj" Oct 29 20:41:20.263077 kubelet[2837]: I1029 20:41:20.262927 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgmxv\" (UniqueName: \"kubernetes.io/projected/4e297194-f7fa-4d00-92d1-15f2c61b7789-kube-api-access-bgmxv\") pod \"calico-apiserver-8b8476c65-tgszf\" (UID: \"4e297194-f7fa-4d00-92d1-15f2c61b7789\") " pod="calico-apiserver/calico-apiserver-8b8476c65-tgszf" Oct 29 20:41:20.263077 kubelet[2837]: I1029 20:41:20.262942 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8n98\" (UniqueName: 
\"kubernetes.io/projected/e4d0c216-1667-4042-a5ea-053e211a3e91-kube-api-access-d8n98\") pod \"coredns-674b8bbfcf-wmmxw\" (UID: \"e4d0c216-1667-4042-a5ea-053e211a3e91\") " pod="kube-system/coredns-674b8bbfcf-wmmxw" Oct 29 20:41:20.263077 kubelet[2837]: I1029 20:41:20.262957 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d50f9e75-9f55-47f4-8506-1ec4da0d8068-goldmane-key-pair\") pod \"goldmane-666569f655-bqbs9\" (UID: \"d50f9e75-9f55-47f4-8506-1ec4da0d8068\") " pod="calico-system/goldmane-666569f655-bqbs9" Oct 29 20:41:20.263185 kubelet[2837]: I1029 20:41:20.262974 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps94f\" (UniqueName: \"kubernetes.io/projected/deaef94c-fe21-40af-a068-2605dc363c66-kube-api-access-ps94f\") pod \"calico-apiserver-8b8476c65-tgp87\" (UID: \"deaef94c-fe21-40af-a068-2605dc363c66\") " pod="calico-apiserver/calico-apiserver-8b8476c65-tgp87" Oct 29 20:41:20.263185 kubelet[2837]: I1029 20:41:20.262990 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-789st\" (UniqueName: \"kubernetes.io/projected/cbb5467a-783c-44e1-8c0d-4dca5f58eee2-kube-api-access-789st\") pod \"calico-kube-controllers-854f888fbc-vrqqc\" (UID: \"cbb5467a-783c-44e1-8c0d-4dca5f58eee2\") " pod="calico-system/calico-kube-controllers-854f888fbc-vrqqc" Oct 29 20:41:20.263185 kubelet[2837]: I1029 20:41:20.263011 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d50f9e75-9f55-47f4-8506-1ec4da0d8068-goldmane-ca-bundle\") pod \"goldmane-666569f655-bqbs9\" (UID: \"d50f9e75-9f55-47f4-8506-1ec4da0d8068\") " pod="calico-system/goldmane-666569f655-bqbs9" Oct 29 20:41:20.263185 kubelet[2837]: I1029 20:41:20.263028 2837 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6845a39b-fd17-44b6-8fca-090d62827c76-whisker-backend-key-pair\") pod \"whisker-575f945464-kjzp9\" (UID: \"6845a39b-fd17-44b6-8fca-090d62827c76\") " pod="calico-system/whisker-575f945464-kjzp9" Oct 29 20:41:20.263185 kubelet[2837]: I1029 20:41:20.263047 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4e297194-f7fa-4d00-92d1-15f2c61b7789-calico-apiserver-certs\") pod \"calico-apiserver-8b8476c65-tgszf\" (UID: \"4e297194-f7fa-4d00-92d1-15f2c61b7789\") " pod="calico-apiserver/calico-apiserver-8b8476c65-tgszf" Oct 29 20:41:20.263306 kubelet[2837]: I1029 20:41:20.263064 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ad68844-034b-4e62-b7d0-eacfdfd4f54d-config-volume\") pod \"coredns-674b8bbfcf-t99tj\" (UID: \"7ad68844-034b-4e62-b7d0-eacfdfd4f54d\") " pod="kube-system/coredns-674b8bbfcf-t99tj" Oct 29 20:41:20.508734 containerd[1634]: time="2025-10-29T20:41:20.508598372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-575f945464-kjzp9,Uid:6845a39b-fd17-44b6-8fca-090d62827c76,Namespace:calico-system,Attempt:0,}" Oct 29 20:41:20.517976 kubelet[2837]: E1029 20:41:20.517946 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:20.519610 containerd[1634]: time="2025-10-29T20:41:20.519561416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wmmxw,Uid:e4d0c216-1667-4042-a5ea-053e211a3e91,Namespace:kube-system,Attempt:0,}" Oct 29 20:41:20.526215 containerd[1634]: time="2025-10-29T20:41:20.526176314Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8b8476c65-tgp87,Uid:deaef94c-fe21-40af-a068-2605dc363c66,Namespace:calico-apiserver,Attempt:0,}" Oct 29 20:41:20.536131 containerd[1634]: time="2025-10-29T20:41:20.536097895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bqbs9,Uid:d50f9e75-9f55-47f4-8506-1ec4da0d8068,Namespace:calico-system,Attempt:0,}" Oct 29 20:41:20.542854 kubelet[2837]: E1029 20:41:20.542414 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:20.566610 containerd[1634]: time="2025-10-29T20:41:20.566564376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-854f888fbc-vrqqc,Uid:cbb5467a-783c-44e1-8c0d-4dca5f58eee2,Namespace:calico-system,Attempt:0,}" Oct 29 20:41:20.570979 containerd[1634]: time="2025-10-29T20:41:20.570937301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t99tj,Uid:7ad68844-034b-4e62-b7d0-eacfdfd4f54d,Namespace:kube-system,Attempt:0,}" Oct 29 20:41:20.572038 containerd[1634]: time="2025-10-29T20:41:20.571982602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8b8476c65-tgszf,Uid:4e297194-f7fa-4d00-92d1-15f2c61b7789,Namespace:calico-apiserver,Attempt:0,}" Oct 29 20:41:20.725218 containerd[1634]: time="2025-10-29T20:41:20.725144847Z" level=error msg="Failed to destroy network for sandbox \"ee26eb212c5a61170c8bfe89224ea58959074c01c160031af0a58e7c2f436fe0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 20:41:20.727686 containerd[1634]: time="2025-10-29T20:41:20.727565334Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-575f945464-kjzp9,Uid:6845a39b-fd17-44b6-8fca-090d62827c76,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee26eb212c5a61170c8bfe89224ea58959074c01c160031af0a58e7c2f436fe0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 20:41:20.727950 kubelet[2837]: E1029 20:41:20.727900 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee26eb212c5a61170c8bfe89224ea58959074c01c160031af0a58e7c2f436fe0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 20:41:20.728035 kubelet[2837]: E1029 20:41:20.727990 2837 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee26eb212c5a61170c8bfe89224ea58959074c01c160031af0a58e7c2f436fe0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-575f945464-kjzp9" Oct 29 20:41:20.728035 kubelet[2837]: E1029 20:41:20.728014 2837 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee26eb212c5a61170c8bfe89224ea58959074c01c160031af0a58e7c2f436fe0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-575f945464-kjzp9" Oct 29 20:41:20.728097 kubelet[2837]: E1029 20:41:20.728071 2837 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-575f945464-kjzp9_calico-system(6845a39b-fd17-44b6-8fca-090d62827c76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-575f945464-kjzp9_calico-system(6845a39b-fd17-44b6-8fca-090d62827c76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee26eb212c5a61170c8bfe89224ea58959074c01c160031af0a58e7c2f436fe0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-575f945464-kjzp9" podUID="6845a39b-fd17-44b6-8fca-090d62827c76" Oct 29 20:41:20.729844 containerd[1634]: time="2025-10-29T20:41:20.729807358Z" level=error msg="Failed to destroy network for sandbox \"e11b51c91f4a0196f398c5c272b7ef8f79c621fe9ecf181df9f2a438693c076a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 20:41:20.735486 containerd[1634]: time="2025-10-29T20:41:20.735378357Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8b8476c65-tgp87,Uid:deaef94c-fe21-40af-a068-2605dc363c66,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e11b51c91f4a0196f398c5c272b7ef8f79c621fe9ecf181df9f2a438693c076a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 20:41:20.736054 kubelet[2837]: E1029 20:41:20.735846 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e11b51c91f4a0196f398c5c272b7ef8f79c621fe9ecf181df9f2a438693c076a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 20:41:20.736054 kubelet[2837]: E1029 20:41:20.735917 2837 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e11b51c91f4a0196f398c5c272b7ef8f79c621fe9ecf181df9f2a438693c076a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8b8476c65-tgp87" Oct 29 20:41:20.736054 kubelet[2837]: E1029 20:41:20.735939 2837 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e11b51c91f4a0196f398c5c272b7ef8f79c621fe9ecf181df9f2a438693c076a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8b8476c65-tgp87" Oct 29 20:41:20.736168 kubelet[2837]: E1029 20:41:20.735990 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8b8476c65-tgp87_calico-apiserver(deaef94c-fe21-40af-a068-2605dc363c66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8b8476c65-tgp87_calico-apiserver(deaef94c-fe21-40af-a068-2605dc363c66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e11b51c91f4a0196f398c5c272b7ef8f79c621fe9ecf181df9f2a438693c076a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8b8476c65-tgp87" podUID="deaef94c-fe21-40af-a068-2605dc363c66" Oct 29 
20:41:20.749406 containerd[1634]: time="2025-10-29T20:41:20.749342506Z" level=error msg="Failed to destroy network for sandbox \"c492388ef2362f2b56843a2cb9f71fea00215ba8330f1bea9755538e14bfd84c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 20:41:20.750837 containerd[1634]: time="2025-10-29T20:41:20.750756737Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8b8476c65-tgszf,Uid:4e297194-f7fa-4d00-92d1-15f2c61b7789,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c492388ef2362f2b56843a2cb9f71fea00215ba8330f1bea9755538e14bfd84c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 20:41:20.751300 kubelet[2837]: E1029 20:41:20.751232 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c492388ef2362f2b56843a2cb9f71fea00215ba8330f1bea9755538e14bfd84c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 20:41:20.751377 kubelet[2837]: E1029 20:41:20.751354 2837 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c492388ef2362f2b56843a2cb9f71fea00215ba8330f1bea9755538e14bfd84c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8b8476c65-tgszf" Oct 29 20:41:20.751422 kubelet[2837]: E1029 20:41:20.751382 2837 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c492388ef2362f2b56843a2cb9f71fea00215ba8330f1bea9755538e14bfd84c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8b8476c65-tgszf" Oct 29 20:41:20.751485 kubelet[2837]: E1029 20:41:20.751441 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8b8476c65-tgszf_calico-apiserver(4e297194-f7fa-4d00-92d1-15f2c61b7789)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8b8476c65-tgszf_calico-apiserver(4e297194-f7fa-4d00-92d1-15f2c61b7789)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c492388ef2362f2b56843a2cb9f71fea00215ba8330f1bea9755538e14bfd84c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8b8476c65-tgszf" podUID="4e297194-f7fa-4d00-92d1-15f2c61b7789" Oct 29 20:41:20.753846 containerd[1634]: time="2025-10-29T20:41:20.753773622Z" level=error msg="Failed to destroy network for sandbox \"b9f096e9177756f950c7a3dfbfee49ea4c8db3ab2958525cc6d870894c569e39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 20:41:20.759725 containerd[1634]: time="2025-10-29T20:41:20.759614120Z" level=error msg="Failed to destroy network for sandbox \"8069eba83c6946fc596953fd24f5e1c8e38bdf963f64c0085a0a565abf4054f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Oct 29 20:41:20.760353 containerd[1634]: time="2025-10-29T20:41:20.759828162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t99tj,Uid:7ad68844-034b-4e62-b7d0-eacfdfd4f54d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f096e9177756f950c7a3dfbfee49ea4c8db3ab2958525cc6d870894c569e39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 20:41:20.760431 kubelet[2837]: E1029 20:41:20.760071 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f096e9177756f950c7a3dfbfee49ea4c8db3ab2958525cc6d870894c569e39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 20:41:20.760431 kubelet[2837]: E1029 20:41:20.760134 2837 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f096e9177756f950c7a3dfbfee49ea4c8db3ab2958525cc6d870894c569e39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t99tj" Oct 29 20:41:20.760431 kubelet[2837]: E1029 20:41:20.760157 2837 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f096e9177756f950c7a3dfbfee49ea4c8db3ab2958525cc6d870894c569e39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t99tj" Oct 29 20:41:20.760588 kubelet[2837]: E1029 20:41:20.760211 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-t99tj_kube-system(7ad68844-034b-4e62-b7d0-eacfdfd4f54d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-t99tj_kube-system(7ad68844-034b-4e62-b7d0-eacfdfd4f54d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9f096e9177756f950c7a3dfbfee49ea4c8db3ab2958525cc6d870894c569e39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-t99tj" podUID="7ad68844-034b-4e62-b7d0-eacfdfd4f54d" Oct 29 20:41:20.761270 containerd[1634]: time="2025-10-29T20:41:20.761213467Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-854f888fbc-vrqqc,Uid:cbb5467a-783c-44e1-8c0d-4dca5f58eee2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8069eba83c6946fc596953fd24f5e1c8e38bdf963f64c0085a0a565abf4054f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 20:41:20.761657 kubelet[2837]: E1029 20:41:20.761619 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8069eba83c6946fc596953fd24f5e1c8e38bdf963f64c0085a0a565abf4054f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 20:41:20.761734 kubelet[2837]: E1029 20:41:20.761700 2837 kuberuntime_sandbox.go:70] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8069eba83c6946fc596953fd24f5e1c8e38bdf963f64c0085a0a565abf4054f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-854f888fbc-vrqqc" Oct 29 20:41:20.761734 kubelet[2837]: E1029 20:41:20.761717 2837 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8069eba83c6946fc596953fd24f5e1c8e38bdf963f64c0085a0a565abf4054f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-854f888fbc-vrqqc" Oct 29 20:41:20.761850 kubelet[2837]: E1029 20:41:20.761778 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-854f888fbc-vrqqc_calico-system(cbb5467a-783c-44e1-8c0d-4dca5f58eee2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-854f888fbc-vrqqc_calico-system(cbb5467a-783c-44e1-8c0d-4dca5f58eee2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8069eba83c6946fc596953fd24f5e1c8e38bdf963f64c0085a0a565abf4054f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-854f888fbc-vrqqc" podUID="cbb5467a-783c-44e1-8c0d-4dca5f58eee2" Oct 29 20:41:20.762433 containerd[1634]: time="2025-10-29T20:41:20.762404460Z" level=error msg="Failed to destroy network for sandbox \"e41897f16db1ba7fc92fb487456eb20b238cfe00faa038d6c9e39e17baebfe02\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 20:41:20.763652 containerd[1634]: time="2025-10-29T20:41:20.763619206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bqbs9,Uid:d50f9e75-9f55-47f4-8506-1ec4da0d8068,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e41897f16db1ba7fc92fb487456eb20b238cfe00faa038d6c9e39e17baebfe02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 20:41:20.763935 kubelet[2837]: E1029 20:41:20.763879 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e41897f16db1ba7fc92fb487456eb20b238cfe00faa038d6c9e39e17baebfe02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 20:41:20.763984 kubelet[2837]: E1029 20:41:20.763965 2837 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e41897f16db1ba7fc92fb487456eb20b238cfe00faa038d6c9e39e17baebfe02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-bqbs9" Oct 29 20:41:20.764138 kubelet[2837]: E1029 20:41:20.763994 2837 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e41897f16db1ba7fc92fb487456eb20b238cfe00faa038d6c9e39e17baebfe02\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-bqbs9" Oct 29 20:41:20.764138 kubelet[2837]: E1029 20:41:20.764046 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-bqbs9_calico-system(d50f9e75-9f55-47f4-8506-1ec4da0d8068)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-bqbs9_calico-system(d50f9e75-9f55-47f4-8506-1ec4da0d8068)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e41897f16db1ba7fc92fb487456eb20b238cfe00faa038d6c9e39e17baebfe02\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-bqbs9" podUID="d50f9e75-9f55-47f4-8506-1ec4da0d8068" Oct 29 20:41:20.764823 containerd[1634]: time="2025-10-29T20:41:20.764766434Z" level=error msg="Failed to destroy network for sandbox \"19a01bec04f995da96d90723d0e3e54917d82ee1e5dff719d6b18a3c8d7887ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 20:41:20.766131 containerd[1634]: time="2025-10-29T20:41:20.766005649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wmmxw,Uid:e4d0c216-1667-4042-a5ea-053e211a3e91,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"19a01bec04f995da96d90723d0e3e54917d82ee1e5dff719d6b18a3c8d7887ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 20:41:20.766327 kubelet[2837]: 
E1029 20:41:20.766296 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19a01bec04f995da96d90723d0e3e54917d82ee1e5dff719d6b18a3c8d7887ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 20:41:20.766380 kubelet[2837]: E1029 20:41:20.766349 2837 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19a01bec04f995da96d90723d0e3e54917d82ee1e5dff719d6b18a3c8d7887ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wmmxw" Oct 29 20:41:20.766380 kubelet[2837]: E1029 20:41:20.766368 2837 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19a01bec04f995da96d90723d0e3e54917d82ee1e5dff719d6b18a3c8d7887ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wmmxw" Oct 29 20:41:20.766430 kubelet[2837]: E1029 20:41:20.766412 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-wmmxw_kube-system(e4d0c216-1667-4042-a5ea-053e211a3e91)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-wmmxw_kube-system(e4d0c216-1667-4042-a5ea-053e211a3e91)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19a01bec04f995da96d90723d0e3e54917d82ee1e5dff719d6b18a3c8d7887ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-wmmxw" podUID="e4d0c216-1667-4042-a5ea-053e211a3e91" Oct 29 20:41:21.139856 systemd[1]: Created slice kubepods-besteffort-pod9b38e512_971c_4a46_9cd5_db7a73bc8089.slice - libcontainer container kubepods-besteffort-pod9b38e512_971c_4a46_9cd5_db7a73bc8089.slice. Oct 29 20:41:21.142563 containerd[1634]: time="2025-10-29T20:41:21.142502528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hphmg,Uid:9b38e512-971c-4a46-9cd5-db7a73bc8089,Namespace:calico-system,Attempt:0,}" Oct 29 20:41:21.198221 containerd[1634]: time="2025-10-29T20:41:21.198162834Z" level=error msg="Failed to destroy network for sandbox \"1dcea85a74cdd7b37da26ce8acf117290113c3fef49e2eb25a55d8d4a765af53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 20:41:21.199635 containerd[1634]: time="2025-10-29T20:41:21.199541115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hphmg,Uid:9b38e512-971c-4a46-9cd5-db7a73bc8089,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dcea85a74cdd7b37da26ce8acf117290113c3fef49e2eb25a55d8d4a765af53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 20:41:21.199931 kubelet[2837]: E1029 20:41:21.199884 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dcea85a74cdd7b37da26ce8acf117290113c3fef49e2eb25a55d8d4a765af53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Oct 29 20:41:21.200003 kubelet[2837]: E1029 20:41:21.199969 2837 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dcea85a74cdd7b37da26ce8acf117290113c3fef49e2eb25a55d8d4a765af53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hphmg" Oct 29 20:41:21.200028 kubelet[2837]: E1029 20:41:21.200001 2837 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dcea85a74cdd7b37da26ce8acf117290113c3fef49e2eb25a55d8d4a765af53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hphmg" Oct 29 20:41:21.200099 kubelet[2837]: E1029 20:41:21.200065 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hphmg_calico-system(9b38e512-971c-4a46-9cd5-db7a73bc8089)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hphmg_calico-system(9b38e512-971c-4a46-9cd5-db7a73bc8089)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1dcea85a74cdd7b37da26ce8acf117290113c3fef49e2eb25a55d8d4a765af53\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hphmg" podUID="9b38e512-971c-4a46-9cd5-db7a73bc8089" Oct 29 20:41:21.200678 systemd[1]: run-netns-cni\x2db092b66a\x2ddd5a\x2dfc34\x2d8c8a\x2d38599a6e4996.mount: Deactivated successfully. 
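
[editor's note, not part of the log] The repeated CreatePodSandbox failures above all trace to one missing file, /var/lib/calico/nodename, which the Calico CNI plugin stats before attempting any add or delete. As a hedged sketch (the helper name is invented for illustration, not taken from Calico's source), the pre-flight check those errors describe behaves roughly like:

```python
import os

# Path quoted verbatim in the errors above.
CALICO_NODENAME = "/var/lib/calico/nodename"

def nodename_or_error(path: str = CALICO_NODENAME) -> str:
    """Mimic the CNI plugin's pre-flight stat: return the node name,
    or fail with the same guidance the log messages carry."""
    if not os.path.isfile(path):
        raise FileNotFoundError(
            f"stat {path}: no such file or directory: "
            "check that the calico/node container is running "
            "and has mounted /var/lib/calico/"
        )
    with open(path) as f:
        return f.read().strip()
```

Consistent with this, once calico-node starts at 20:41:31 below, the subsequent sandbox retries at 20:41:33 succeed.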
Oct 29 20:41:22.427477 kubelet[2837]: I1029 20:41:22.427408 2837 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 20:41:22.428082 kubelet[2837]: E1029 20:41:22.427960 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:23.245398 kubelet[2837]: E1029 20:41:23.245347 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:27.410633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1683772511.mount: Deactivated successfully. Oct 29 20:41:31.029941 containerd[1634]: time="2025-10-29T20:41:31.029780154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 20:41:31.030959 containerd[1634]: time="2025-10-29T20:41:31.030909528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 29 20:41:31.032328 containerd[1634]: time="2025-10-29T20:41:31.032236692Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 20:41:31.034478 containerd[1634]: time="2025-10-29T20:41:31.034392464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 20:41:31.034717 containerd[1634]: time="2025-10-29T20:41:31.034643375Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.793090495s" Oct 29 20:41:31.034717 containerd[1634]: time="2025-10-29T20:41:31.034680376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 29 20:41:31.051714 containerd[1634]: time="2025-10-29T20:41:31.051661120Z" level=info msg="CreateContainer within sandbox \"c166c96962dac8367919e200c70a478da20e0d51aa7d3dcaa14aa0dc0cbebee1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 29 20:41:31.068482 containerd[1634]: time="2025-10-29T20:41:31.066520600Z" level=info msg="Container 98380300b27e99bacf36a2b5bce92fce2760bf1ef2b817246811c6a86823cd7c: CDI devices from CRI Config.CDIDevices: []" Oct 29 20:41:31.078091 containerd[1634]: time="2025-10-29T20:41:31.078029397Z" level=info msg="CreateContainer within sandbox \"c166c96962dac8367919e200c70a478da20e0d51aa7d3dcaa14aa0dc0cbebee1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"98380300b27e99bacf36a2b5bce92fce2760bf1ef2b817246811c6a86823cd7c\"" Oct 29 20:41:31.080890 containerd[1634]: time="2025-10-29T20:41:31.078730501Z" level=info msg="StartContainer for \"98380300b27e99bacf36a2b5bce92fce2760bf1ef2b817246811c6a86823cd7c\"" Oct 29 20:41:31.080890 containerd[1634]: time="2025-10-29T20:41:31.080227039Z" level=info msg="connecting to shim 98380300b27e99bacf36a2b5bce92fce2760bf1ef2b817246811c6a86823cd7c" address="unix:///run/containerd/s/10a543e40149436cb441c8bae19650b31e45ff49712b6047250fe020f82a2b63" protocol=ttrpc version=3 Oct 29 20:41:31.113622 systemd[1]: Started cri-containerd-98380300b27e99bacf36a2b5bce92fce2760bf1ef2b817246811c6a86823cd7c.scope - libcontainer container 98380300b27e99bacf36a2b5bce92fce2760bf1ef2b817246811c6a86823cd7c. Oct 29 20:41:31.244735 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Oct 29 20:41:31.245643 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 29 20:41:31.295806 containerd[1634]: time="2025-10-29T20:41:31.295678930Z" level=info msg="StartContainer for \"98380300b27e99bacf36a2b5bce92fce2760bf1ef2b817246811c6a86823cd7c\" returns successfully" Oct 29 20:41:31.940044 kubelet[2837]: I1029 20:41:31.939974 2837 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6845a39b-fd17-44b6-8fca-090d62827c76-whisker-backend-key-pair\") pod \"6845a39b-fd17-44b6-8fca-090d62827c76\" (UID: \"6845a39b-fd17-44b6-8fca-090d62827c76\") " Oct 29 20:41:31.940665 kubelet[2837]: I1029 20:41:31.940079 2837 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6845a39b-fd17-44b6-8fca-090d62827c76-whisker-ca-bundle\") pod \"6845a39b-fd17-44b6-8fca-090d62827c76\" (UID: \"6845a39b-fd17-44b6-8fca-090d62827c76\") " Oct 29 20:41:31.940665 kubelet[2837]: I1029 20:41:31.940114 2837 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgd5p\" (UniqueName: \"kubernetes.io/projected/6845a39b-fd17-44b6-8fca-090d62827c76-kube-api-access-lgd5p\") pod \"6845a39b-fd17-44b6-8fca-090d62827c76\" (UID: \"6845a39b-fd17-44b6-8fca-090d62827c76\") " Oct 29 20:41:31.940965 kubelet[2837]: I1029 20:41:31.940918 2837 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6845a39b-fd17-44b6-8fca-090d62827c76-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6845a39b-fd17-44b6-8fca-090d62827c76" (UID: "6845a39b-fd17-44b6-8fca-090d62827c76"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 29 20:41:31.944867 kubelet[2837]: I1029 20:41:31.944804 2837 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6845a39b-fd17-44b6-8fca-090d62827c76-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6845a39b-fd17-44b6-8fca-090d62827c76" (UID: "6845a39b-fd17-44b6-8fca-090d62827c76"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 29 20:41:31.944867 kubelet[2837]: I1029 20:41:31.944811 2837 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6845a39b-fd17-44b6-8fca-090d62827c76-kube-api-access-lgd5p" (OuterVolumeSpecName: "kube-api-access-lgd5p") pod "6845a39b-fd17-44b6-8fca-090d62827c76" (UID: "6845a39b-fd17-44b6-8fca-090d62827c76"). InnerVolumeSpecName "kube-api-access-lgd5p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 29 20:41:32.041487 kubelet[2837]: I1029 20:41:32.041368 2837 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6845a39b-fd17-44b6-8fca-090d62827c76-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 29 20:41:32.041487 kubelet[2837]: I1029 20:41:32.041426 2837 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lgd5p\" (UniqueName: \"kubernetes.io/projected/6845a39b-fd17-44b6-8fca-090d62827c76-kube-api-access-lgd5p\") on node \"localhost\" DevicePath \"\"" Oct 29 20:41:32.041487 kubelet[2837]: I1029 20:41:32.041472 2837 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6845a39b-fd17-44b6-8fca-090d62827c76-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 29 20:41:32.044788 systemd[1]: 
var-lib-kubelet-pods-6845a39b\x2dfd17\x2d44b6\x2d8fca\x2d090d62827c76-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlgd5p.mount: Deactivated successfully. Oct 29 20:41:32.044910 systemd[1]: var-lib-kubelet-pods-6845a39b\x2dfd17\x2d44b6\x2d8fca\x2d090d62827c76-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 29 20:41:32.134203 kubelet[2837]: E1029 20:41:32.134160 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:32.134364 kubelet[2837]: E1029 20:41:32.134277 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:32.135029 containerd[1634]: time="2025-10-29T20:41:32.134744795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t99tj,Uid:7ad68844-034b-4e62-b7d0-eacfdfd4f54d,Namespace:kube-system,Attempt:0,}" Oct 29 20:41:32.135029 containerd[1634]: time="2025-10-29T20:41:32.134884983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8b8476c65-tgp87,Uid:deaef94c-fe21-40af-a068-2605dc363c66,Namespace:calico-apiserver,Attempt:0,}" Oct 29 20:41:32.135029 containerd[1634]: time="2025-10-29T20:41:32.134913769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wmmxw,Uid:e4d0c216-1667-4042-a5ea-053e211a3e91,Namespace:kube-system,Attempt:0,}" Oct 29 20:41:32.311553 kubelet[2837]: E1029 20:41:32.311424 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:32.317599 systemd[1]: Removed slice kubepods-besteffort-pod6845a39b_fd17_44b6_8fca_090d62827c76.slice - libcontainer container 
kubepods-besteffort-pod6845a39b_fd17_44b6_8fca_090d62827c76.slice. Oct 29 20:41:32.516566 containerd[1634]: time="2025-10-29T20:41:32.516515234Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98380300b27e99bacf36a2b5bce92fce2760bf1ef2b817246811c6a86823cd7c\" id:\"370b58e2f7939539dc82f4bdcf449de733d2163b3584fe2529d78b72373c926a\" pid:4051 exit_status:1 exited_at:{seconds:1761770492 nanos:516037698}" Oct 29 20:41:32.639311 kubelet[2837]: I1029 20:41:32.638990 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fnl8t" podStartSLOduration=3.409507563 podStartE2EDuration="24.63897388s" podCreationTimestamp="2025-10-29 20:41:08 +0000 UTC" firstStartedPulling="2025-10-29 20:41:09.806465958 +0000 UTC m=+22.765216666" lastFinishedPulling="2025-10-29 20:41:31.035932285 +0000 UTC m=+43.994682983" observedRunningTime="2025-10-29 20:41:32.63680305 +0000 UTC m=+45.595553768" watchObservedRunningTime="2025-10-29 20:41:32.63897388 +0000 UTC m=+45.597724578" Oct 29 20:41:33.310954 kubelet[2837]: E1029 20:41:33.310876 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:33.402775 containerd[1634]: time="2025-10-29T20:41:33.402720339Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98380300b27e99bacf36a2b5bce92fce2760bf1ef2b817246811c6a86823cd7c\" id:\"7251595ea889c6f9c718d946931b3a6fb8e3050b65261277974209c3a0199737\" pid:4113 exit_status:1 exited_at:{seconds:1761770493 nanos:402362443}" Oct 29 20:41:33.798891 systemd-networkd[1536]: cali73798279673: Link UP Oct 29 20:41:33.803361 systemd-networkd[1536]: cali73798279673: Gained carrier Oct 29 20:41:33.848704 containerd[1634]: 2025-10-29 20:41:32.442 [INFO][4026] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 20:41:33.848704 containerd[1634]: 2025-10-29 20:41:32.641 [INFO][4026] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--wmmxw-eth0 coredns-674b8bbfcf- kube-system e4d0c216-1667-4042-a5ea-053e211a3e91 847 0 2025-10-29 20:40:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-wmmxw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali73798279673 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34" Namespace="kube-system" Pod="coredns-674b8bbfcf-wmmxw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wmmxw-" Oct 29 20:41:33.848704 containerd[1634]: 2025-10-29 20:41:32.641 [INFO][4026] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34" Namespace="kube-system" Pod="coredns-674b8bbfcf-wmmxw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wmmxw-eth0" Oct 29 20:41:33.848704 containerd[1634]: 2025-10-29 20:41:32.691 [INFO][4081] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34" HandleID="k8s-pod-network.14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34" Workload="localhost-k8s-coredns--674b8bbfcf--wmmxw-eth0" Oct 29 20:41:33.849009 containerd[1634]: 2025-10-29 20:41:32.692 [INFO][4081] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34" HandleID="k8s-pod-network.14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34" Workload="localhost-k8s-coredns--674b8bbfcf--wmmxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000502570), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-wmmxw", "timestamp":"2025-10-29 20:41:32.691828906 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 20:41:33.849009 containerd[1634]: 2025-10-29 20:41:32.692 [INFO][4081] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 20:41:33.849009 containerd[1634]: 2025-10-29 20:41:32.692 [INFO][4081] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 20:41:33.849009 containerd[1634]: 2025-10-29 20:41:32.693 [INFO][4081] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 20:41:33.849009 containerd[1634]: 2025-10-29 20:41:32.826 [INFO][4081] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34" host="localhost" Oct 29 20:41:33.849009 containerd[1634]: 2025-10-29 20:41:32.833 [INFO][4081] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 20:41:33.849009 containerd[1634]: 2025-10-29 20:41:33.323 [INFO][4081] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 20:41:33.849009 containerd[1634]: 2025-10-29 20:41:33.565 [INFO][4081] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 20:41:33.849009 containerd[1634]: 2025-10-29 20:41:33.672 [INFO][4081] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 20:41:33.849009 containerd[1634]: 2025-10-29 20:41:33.672 [INFO][4081] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34" host="localhost" Oct 29 20:41:33.849236 
containerd[1634]: 2025-10-29 20:41:33.675 [INFO][4081] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34 Oct 29 20:41:33.849236 containerd[1634]: 2025-10-29 20:41:33.719 [INFO][4081] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34" host="localhost" Oct 29 20:41:33.849236 containerd[1634]: 2025-10-29 20:41:33.764 [INFO][4081] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34" host="localhost" Oct 29 20:41:33.849236 containerd[1634]: 2025-10-29 20:41:33.764 [INFO][4081] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34" host="localhost" Oct 29 20:41:33.849236 containerd[1634]: 2025-10-29 20:41:33.765 [INFO][4081] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 20:41:33.849236 containerd[1634]: 2025-10-29 20:41:33.765 [INFO][4081] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34" HandleID="k8s-pod-network.14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34" Workload="localhost-k8s-coredns--674b8bbfcf--wmmxw-eth0" Oct 29 20:41:33.849359 containerd[1634]: 2025-10-29 20:41:33.776 [INFO][4026] cni-plugin/k8s.go 418: Populated endpoint ContainerID="14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34" Namespace="kube-system" Pod="coredns-674b8bbfcf-wmmxw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wmmxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wmmxw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e4d0c216-1667-4042-a5ea-053e211a3e91", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 20, 40, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-wmmxw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73798279673", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 20:41:33.851428 containerd[1634]: 2025-10-29 20:41:33.777 [INFO][4026] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34" Namespace="kube-system" Pod="coredns-674b8bbfcf-wmmxw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wmmxw-eth0" Oct 29 20:41:33.851428 containerd[1634]: 2025-10-29 20:41:33.777 [INFO][4026] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73798279673 ContainerID="14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34" Namespace="kube-system" Pod="coredns-674b8bbfcf-wmmxw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wmmxw-eth0" Oct 29 20:41:33.851428 containerd[1634]: 2025-10-29 20:41:33.805 [INFO][4026] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34" Namespace="kube-system" Pod="coredns-674b8bbfcf-wmmxw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wmmxw-eth0" Oct 29 20:41:33.851556 containerd[1634]: 2025-10-29 20:41:33.806 [INFO][4026] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34" Namespace="kube-system" Pod="coredns-674b8bbfcf-wmmxw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wmmxw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wmmxw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e4d0c216-1667-4042-a5ea-053e211a3e91", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 20, 40, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34", Pod:"coredns-674b8bbfcf-wmmxw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73798279673", MAC:"0a:c5:4a:ed:db:55", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 20:41:33.851556 containerd[1634]: 2025-10-29 20:41:33.842 [INFO][4026] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34" Namespace="kube-system" Pod="coredns-674b8bbfcf-wmmxw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wmmxw-eth0" Oct 29 20:41:33.941428 systemd[1]: Created slice kubepods-besteffort-pode137508a_f04b_4511_9c9c_1f63f8deba2a.slice - libcontainer container kubepods-besteffort-pode137508a_f04b_4511_9c9c_1f63f8deba2a.slice. Oct 29 20:41:33.988713 systemd-networkd[1536]: cali06d7acee0de: Link UP Oct 29 20:41:33.992563 systemd-networkd[1536]: cali06d7acee0de: Gained carrier Oct 29 20:41:34.028823 containerd[1634]: 2025-10-29 20:41:32.308 [INFO][4013] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 20:41:34.028823 containerd[1634]: 2025-10-29 20:41:32.434 [INFO][4013] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8b8476c65--tgp87-eth0 calico-apiserver-8b8476c65- calico-apiserver deaef94c-fe21-40af-a068-2605dc363c66 844 0 2025-10-29 20:41:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8b8476c65 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8b8476c65-tgp87 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali06d7acee0de [] [] }} ContainerID="aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941" Namespace="calico-apiserver" Pod="calico-apiserver-8b8476c65-tgp87" WorkloadEndpoint="localhost-k8s-calico--apiserver--8b8476c65--tgp87-" Oct 29 20:41:34.028823 containerd[1634]: 2025-10-29 20:41:32.435 [INFO][4013] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941" Namespace="calico-apiserver" Pod="calico-apiserver-8b8476c65-tgp87" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--8b8476c65--tgp87-eth0" Oct 29 20:41:34.028823 containerd[1634]: 2025-10-29 20:41:32.691 [INFO][4079] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941" HandleID="k8s-pod-network.aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941" Workload="localhost-k8s-calico--apiserver--8b8476c65--tgp87-eth0" Oct 29 20:41:34.028823 containerd[1634]: 2025-10-29 20:41:32.692 [INFO][4079] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941" HandleID="k8s-pod-network.aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941" Workload="localhost-k8s-calico--apiserver--8b8476c65--tgp87-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031f450), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8b8476c65-tgp87", "timestamp":"2025-10-29 20:41:32.691835067 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 20:41:34.028823 containerd[1634]: 2025-10-29 20:41:32.693 [INFO][4079] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 20:41:34.028823 containerd[1634]: 2025-10-29 20:41:33.765 [INFO][4079] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 20:41:34.028823 containerd[1634]: 2025-10-29 20:41:33.765 [INFO][4079] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 20:41:34.028823 containerd[1634]: 2025-10-29 20:41:33.781 [INFO][4079] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941" host="localhost" Oct 29 20:41:34.028823 containerd[1634]: 2025-10-29 20:41:33.800 [INFO][4079] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 20:41:34.028823 containerd[1634]: 2025-10-29 20:41:33.846 [INFO][4079] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 20:41:34.028823 containerd[1634]: 2025-10-29 20:41:33.850 [INFO][4079] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 20:41:34.028823 containerd[1634]: 2025-10-29 20:41:33.854 [INFO][4079] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 20:41:34.028823 containerd[1634]: 2025-10-29 20:41:33.854 [INFO][4079] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941" host="localhost" Oct 29 20:41:34.028823 containerd[1634]: 2025-10-29 20:41:33.857 [INFO][4079] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941 Oct 29 20:41:34.028823 containerd[1634]: 2025-10-29 20:41:33.938 [INFO][4079] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941" host="localhost" Oct 29 20:41:34.028823 containerd[1634]: 2025-10-29 20:41:33.960 [INFO][4079] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941" host="localhost" Oct 29 20:41:34.028823 containerd[1634]: 2025-10-29 20:41:33.960 [INFO][4079] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941" host="localhost" Oct 29 20:41:34.028823 containerd[1634]: 2025-10-29 20:41:33.960 [INFO][4079] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 20:41:34.028823 containerd[1634]: 2025-10-29 20:41:33.960 [INFO][4079] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941" HandleID="k8s-pod-network.aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941" Workload="localhost-k8s-calico--apiserver--8b8476c65--tgp87-eth0" Oct 29 20:41:34.030886 containerd[1634]: 2025-10-29 20:41:33.973 [INFO][4013] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941" Namespace="calico-apiserver" Pod="calico-apiserver-8b8476c65-tgp87" WorkloadEndpoint="localhost-k8s-calico--apiserver--8b8476c65--tgp87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8b8476c65--tgp87-eth0", GenerateName:"calico-apiserver-8b8476c65-", Namespace:"calico-apiserver", SelfLink:"", UID:"deaef94c-fe21-40af-a068-2605dc363c66", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 20, 41, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8b8476c65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8b8476c65-tgp87", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06d7acee0de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 20:41:34.030886 containerd[1634]: 2025-10-29 20:41:33.976 [INFO][4013] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941" Namespace="calico-apiserver" Pod="calico-apiserver-8b8476c65-tgp87" WorkloadEndpoint="localhost-k8s-calico--apiserver--8b8476c65--tgp87-eth0" Oct 29 20:41:34.030886 containerd[1634]: 2025-10-29 20:41:33.977 [INFO][4013] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06d7acee0de ContainerID="aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941" Namespace="calico-apiserver" Pod="calico-apiserver-8b8476c65-tgp87" WorkloadEndpoint="localhost-k8s-calico--apiserver--8b8476c65--tgp87-eth0" Oct 29 20:41:34.030886 containerd[1634]: 2025-10-29 20:41:33.993 [INFO][4013] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941" Namespace="calico-apiserver" Pod="calico-apiserver-8b8476c65-tgp87" WorkloadEndpoint="localhost-k8s-calico--apiserver--8b8476c65--tgp87-eth0" Oct 29 20:41:34.030886 containerd[1634]: 2025-10-29 20:41:33.994 [INFO][4013] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941" Namespace="calico-apiserver" Pod="calico-apiserver-8b8476c65-tgp87" WorkloadEndpoint="localhost-k8s-calico--apiserver--8b8476c65--tgp87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8b8476c65--tgp87-eth0", GenerateName:"calico-apiserver-8b8476c65-", Namespace:"calico-apiserver", SelfLink:"", UID:"deaef94c-fe21-40af-a068-2605dc363c66", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 20, 41, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8b8476c65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941", Pod:"calico-apiserver-8b8476c65-tgp87", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06d7acee0de", MAC:"66:f6:16:fd:a4:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 20:41:34.030886 containerd[1634]: 2025-10-29 20:41:34.012 [INFO][4013] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941" Namespace="calico-apiserver" Pod="calico-apiserver-8b8476c65-tgp87" WorkloadEndpoint="localhost-k8s-calico--apiserver--8b8476c65--tgp87-eth0" Oct 29 20:41:34.056947 kubelet[2837]: I1029 20:41:34.056797 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e137508a-f04b-4511-9c9c-1f63f8deba2a-whisker-ca-bundle\") pod \"whisker-6747bb86c8-ccg9x\" (UID: \"e137508a-f04b-4511-9c9c-1f63f8deba2a\") " pod="calico-system/whisker-6747bb86c8-ccg9x" Oct 29 20:41:34.056947 kubelet[2837]: I1029 20:41:34.056860 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkpd9\" (UniqueName: \"kubernetes.io/projected/e137508a-f04b-4511-9c9c-1f63f8deba2a-kube-api-access-nkpd9\") pod \"whisker-6747bb86c8-ccg9x\" (UID: \"e137508a-f04b-4511-9c9c-1f63f8deba2a\") " pod="calico-system/whisker-6747bb86c8-ccg9x" Oct 29 20:41:34.056947 kubelet[2837]: I1029 20:41:34.056891 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e137508a-f04b-4511-9c9c-1f63f8deba2a-whisker-backend-key-pair\") pod \"whisker-6747bb86c8-ccg9x\" (UID: \"e137508a-f04b-4511-9c9c-1f63f8deba2a\") " pod="calico-system/whisker-6747bb86c8-ccg9x" Oct 29 20:41:34.075491 containerd[1634]: time="2025-10-29T20:41:34.074801281Z" level=info msg="connecting to shim 14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34" address="unix:///run/containerd/s/310a45afb806c1f04e027b9cd2d937d7683b62902ba629c3501d0517ea5e2b6e" namespace=k8s.io protocol=ttrpc version=3 Oct 29 20:41:34.089303 containerd[1634]: time="2025-10-29T20:41:34.089252549Z" level=info msg="connecting to shim aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941" 
address="unix:///run/containerd/s/f855449c8816cf186411500b98f200f5112ebc9624625841cc7d1ff8513da9fa" namespace=k8s.io protocol=ttrpc version=3 Oct 29 20:41:34.128942 systemd-networkd[1536]: cali6af0a8c7503: Link UP Oct 29 20:41:34.129183 systemd-networkd[1536]: cali6af0a8c7503: Gained carrier Oct 29 20:41:34.129749 systemd[1]: Started cri-containerd-14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34.scope - libcontainer container 14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34. Oct 29 20:41:34.134372 systemd[1]: Started cri-containerd-aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941.scope - libcontainer container aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941. Oct 29 20:41:34.146883 containerd[1634]: time="2025-10-29T20:41:34.146816040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bqbs9,Uid:d50f9e75-9f55-47f4-8506-1ec4da0d8068,Namespace:calico-system,Attempt:0,}" Oct 29 20:41:34.147861 containerd[1634]: time="2025-10-29T20:41:34.147825152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8b8476c65-tgszf,Uid:4e297194-f7fa-4d00-92d1-15f2c61b7789,Namespace:calico-apiserver,Attempt:0,}" Oct 29 20:41:34.167764 containerd[1634]: 2025-10-29 20:41:32.262 [INFO][4000] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 20:41:34.167764 containerd[1634]: 2025-10-29 20:41:32.424 [INFO][4000] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--t99tj-eth0 coredns-674b8bbfcf- kube-system 7ad68844-034b-4e62-b7d0-eacfdfd4f54d 850 0 2025-10-29 20:40:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-t99tj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] 
cali6af0a8c7503 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4" Namespace="kube-system" Pod="coredns-674b8bbfcf-t99tj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t99tj-" Oct 29 20:41:34.167764 containerd[1634]: 2025-10-29 20:41:32.426 [INFO][4000] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4" Namespace="kube-system" Pod="coredns-674b8bbfcf-t99tj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t99tj-eth0" Oct 29 20:41:34.167764 containerd[1634]: 2025-10-29 20:41:32.692 [INFO][4077] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4" HandleID="k8s-pod-network.330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4" Workload="localhost-k8s-coredns--674b8bbfcf--t99tj-eth0" Oct 29 20:41:34.167764 containerd[1634]: 2025-10-29 20:41:32.694 [INFO][4077] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4" HandleID="k8s-pod-network.330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4" Workload="localhost-k8s-coredns--674b8bbfcf--t99tj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bf700), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-t99tj", "timestamp":"2025-10-29 20:41:32.692011366 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 20:41:34.167764 containerd[1634]: 2025-10-29 20:41:32.694 [INFO][4077] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 29 20:41:34.167764 containerd[1634]: 2025-10-29 20:41:33.960 [INFO][4077] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 20:41:34.167764 containerd[1634]: 2025-10-29 20:41:33.960 [INFO][4077] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 20:41:34.167764 containerd[1634]: 2025-10-29 20:41:33.975 [INFO][4077] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4" host="localhost" Oct 29 20:41:34.167764 containerd[1634]: 2025-10-29 20:41:34.048 [INFO][4077] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 20:41:34.167764 containerd[1634]: 2025-10-29 20:41:34.072 [INFO][4077] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 20:41:34.167764 containerd[1634]: 2025-10-29 20:41:34.079 [INFO][4077] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 20:41:34.167764 containerd[1634]: 2025-10-29 20:41:34.084 [INFO][4077] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 20:41:34.167764 containerd[1634]: 2025-10-29 20:41:34.084 [INFO][4077] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4" host="localhost" Oct 29 20:41:34.167764 containerd[1634]: 2025-10-29 20:41:34.087 [INFO][4077] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4 Oct 29 20:41:34.167764 containerd[1634]: 2025-10-29 20:41:34.102 [INFO][4077] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4" host="localhost" Oct 29 20:41:34.167764 containerd[1634]: 2025-10-29 20:41:34.116 [INFO][4077] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4" host="localhost" Oct 29 20:41:34.167764 containerd[1634]: 2025-10-29 20:41:34.116 [INFO][4077] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4" host="localhost" Oct 29 20:41:34.167764 containerd[1634]: 2025-10-29 20:41:34.116 [INFO][4077] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 20:41:34.167764 containerd[1634]: 2025-10-29 20:41:34.116 [INFO][4077] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4" HandleID="k8s-pod-network.330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4" Workload="localhost-k8s-coredns--674b8bbfcf--t99tj-eth0" Oct 29 20:41:34.168353 containerd[1634]: 2025-10-29 20:41:34.121 [INFO][4000] cni-plugin/k8s.go 418: Populated endpoint ContainerID="330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4" Namespace="kube-system" Pod="coredns-674b8bbfcf-t99tj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t99tj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--t99tj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7ad68844-034b-4e62-b7d0-eacfdfd4f54d", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 20, 40, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-t99tj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6af0a8c7503", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 20:41:34.168353 containerd[1634]: 2025-10-29 20:41:34.122 [INFO][4000] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4" Namespace="kube-system" Pod="coredns-674b8bbfcf-t99tj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t99tj-eth0" Oct 29 20:41:34.168353 containerd[1634]: 2025-10-29 20:41:34.126 [INFO][4000] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6af0a8c7503 ContainerID="330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4" Namespace="kube-system" Pod="coredns-674b8bbfcf-t99tj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t99tj-eth0" Oct 29 20:41:34.168353 containerd[1634]: 2025-10-29 20:41:34.129 [INFO][4000] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4" Namespace="kube-system" Pod="coredns-674b8bbfcf-t99tj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t99tj-eth0" Oct 29 20:41:34.168353 containerd[1634]: 2025-10-29 20:41:34.134 [INFO][4000] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4" Namespace="kube-system" Pod="coredns-674b8bbfcf-t99tj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t99tj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--t99tj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7ad68844-034b-4e62-b7d0-eacfdfd4f54d", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 20, 40, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4", Pod:"coredns-674b8bbfcf-t99tj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6af0a8c7503", MAC:"ba:ca:a2:1a:2b:79", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 20:41:34.168353 containerd[1634]: 2025-10-29 20:41:34.162 [INFO][4000] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4" Namespace="kube-system" Pod="coredns-674b8bbfcf-t99tj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t99tj-eth0" Oct 29 20:41:34.173753 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 20:41:34.198335 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 20:41:34.203574 containerd[1634]: time="2025-10-29T20:41:34.203509915Z" level=info msg="connecting to shim 330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4" address="unix:///run/containerd/s/05893cc4fd4316a10786dbeb9cff2cfa52fac295d9c494721dfbe22a213e09af" namespace=k8s.io protocol=ttrpc version=3 Oct 29 20:41:34.243504 containerd[1634]: time="2025-10-29T20:41:34.242293487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wmmxw,Uid:e4d0c216-1667-4042-a5ea-053e211a3e91,Namespace:kube-system,Attempt:0,} returns sandbox id \"14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34\"" Oct 29 20:41:34.247300 kubelet[2837]: E1029 20:41:34.247254 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:34.254337 containerd[1634]: 
time="2025-10-29T20:41:34.254295314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6747bb86c8-ccg9x,Uid:e137508a-f04b-4511-9c9c-1f63f8deba2a,Namespace:calico-system,Attempt:0,}" Oct 29 20:41:34.255901 systemd[1]: Started cri-containerd-330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4.scope - libcontainer container 330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4. Oct 29 20:41:34.256439 containerd[1634]: time="2025-10-29T20:41:34.256203189Z" level=info msg="CreateContainer within sandbox \"14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 20:41:34.267314 containerd[1634]: time="2025-10-29T20:41:34.267266347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8b8476c65-tgp87,Uid:deaef94c-fe21-40af-a068-2605dc363c66,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"aaea7189687df795e21c644e163963d8595934176b798d9a9393d003243c7941\"" Oct 29 20:41:34.271563 containerd[1634]: time="2025-10-29T20:41:34.271508472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 20:41:34.293242 containerd[1634]: time="2025-10-29T20:41:34.293191491Z" level=info msg="Container 5322b34bf930f9cabacc6f3a9cb040afd3d305a77bdaeb40109288abc0b1d18b: CDI devices from CRI Config.CDIDevices: []" Oct 29 20:41:34.299736 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 20:41:34.306296 containerd[1634]: time="2025-10-29T20:41:34.304588099Z" level=info msg="CreateContainer within sandbox \"14e52c6f10c93134695981ebb092d37b2dc42bda5466c0b7fc5fb851e3b3bc34\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5322b34bf930f9cabacc6f3a9cb040afd3d305a77bdaeb40109288abc0b1d18b\"" Oct 29 20:41:34.311200 containerd[1634]: time="2025-10-29T20:41:34.311042872Z" level=info msg="StartContainer for 
\"5322b34bf930f9cabacc6f3a9cb040afd3d305a77bdaeb40109288abc0b1d18b\"" Oct 29 20:41:34.312023 containerd[1634]: time="2025-10-29T20:41:34.311921655Z" level=info msg="connecting to shim 5322b34bf930f9cabacc6f3a9cb040afd3d305a77bdaeb40109288abc0b1d18b" address="unix:///run/containerd/s/310a45afb806c1f04e027b9cd2d937d7683b62902ba629c3501d0517ea5e2b6e" protocol=ttrpc version=3 Oct 29 20:41:34.341729 systemd[1]: Started cri-containerd-5322b34bf930f9cabacc6f3a9cb040afd3d305a77bdaeb40109288abc0b1d18b.scope - libcontainer container 5322b34bf930f9cabacc6f3a9cb040afd3d305a77bdaeb40109288abc0b1d18b. Oct 29 20:41:34.398638 containerd[1634]: time="2025-10-29T20:41:34.398574862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t99tj,Uid:7ad68844-034b-4e62-b7d0-eacfdfd4f54d,Namespace:kube-system,Attempt:0,} returns sandbox id \"330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4\"" Oct 29 20:41:34.401710 kubelet[2837]: E1029 20:41:34.401676 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:34.408222 containerd[1634]: time="2025-10-29T20:41:34.408176682Z" level=info msg="CreateContainer within sandbox \"330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 20:41:34.416957 containerd[1634]: time="2025-10-29T20:41:34.416795249Z" level=info msg="StartContainer for \"5322b34bf930f9cabacc6f3a9cb040afd3d305a77bdaeb40109288abc0b1d18b\" returns successfully" Oct 29 20:41:34.624969 containerd[1634]: time="2025-10-29T20:41:34.623288143Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 20:41:34.658658 systemd-networkd[1536]: vxlan.calico: Link UP Oct 29 20:41:34.658672 systemd-networkd[1536]: vxlan.calico: Gained carrier Oct 29 20:41:34.676960 containerd[1634]: time="2025-10-29T20:41:34.675688244Z" 
level=info msg="Container 9c24c04cf47f0d89e86a034e01acf12783d2e46b7ec3635122b7a9641768304b: CDI devices from CRI Config.CDIDevices: []" Oct 29 20:41:34.678288 containerd[1634]: time="2025-10-29T20:41:34.678246064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 20:41:34.687973 containerd[1634]: time="2025-10-29T20:41:34.687926665Z" level=info msg="CreateContainer within sandbox \"330c14e06ffa37f3b376c2d89a278c5140b3538421ed8ea3570c637519406ae4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9c24c04cf47f0d89e86a034e01acf12783d2e46b7ec3635122b7a9641768304b\"" Oct 29 20:41:34.688407 containerd[1634]: time="2025-10-29T20:41:34.688237110Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 20:41:34.690369 kubelet[2837]: E1029 20:41:34.689295 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 20:41:34.690369 kubelet[2837]: E1029 20:41:34.690048 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 20:41:34.691118 kubelet[2837]: E1029 20:41:34.690785 2837 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ps94f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8b8476c65-tgp87_calico-apiserver(deaef94c-fe21-40af-a068-2605dc363c66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 20:41:34.693461 kubelet[2837]: E1029 20:41:34.693388 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b8476c65-tgp87" podUID="deaef94c-fe21-40af-a068-2605dc363c66" Oct 29 20:41:34.695147 containerd[1634]: time="2025-10-29T20:41:34.694760324Z" level=info msg="StartContainer for \"9c24c04cf47f0d89e86a034e01acf12783d2e46b7ec3635122b7a9641768304b\"" Oct 29 20:41:34.695934 
containerd[1634]: time="2025-10-29T20:41:34.695760970Z" level=info msg="connecting to shim 9c24c04cf47f0d89e86a034e01acf12783d2e46b7ec3635122b7a9641768304b" address="unix:///run/containerd/s/05893cc4fd4316a10786dbeb9cff2cfa52fac295d9c494721dfbe22a213e09af" protocol=ttrpc version=3 Oct 29 20:41:34.708929 systemd-networkd[1536]: cali9eae99f7ff1: Link UP Oct 29 20:41:34.710745 systemd-networkd[1536]: cali9eae99f7ff1: Gained carrier Oct 29 20:41:34.724806 systemd[1]: Started cri-containerd-9c24c04cf47f0d89e86a034e01acf12783d2e46b7ec3635122b7a9641768304b.scope - libcontainer container 9c24c04cf47f0d89e86a034e01acf12783d2e46b7ec3635122b7a9641768304b. Oct 29 20:41:34.745237 containerd[1634]: 2025-10-29 20:41:34.304 [INFO][4357] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--bqbs9-eth0 goldmane-666569f655- calico-system d50f9e75-9f55-47f4-8506-1ec4da0d8068 851 0 2025-10-29 20:41:06 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-bqbs9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9eae99f7ff1 [] [] }} ContainerID="768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748" Namespace="calico-system" Pod="goldmane-666569f655-bqbs9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bqbs9-" Oct 29 20:41:34.745237 containerd[1634]: 2025-10-29 20:41:34.304 [INFO][4357] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748" Namespace="calico-system" Pod="goldmane-666569f655-bqbs9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bqbs9-eth0" Oct 29 20:41:34.745237 containerd[1634]: 2025-10-29 20:41:34.373 [INFO][4459] ipam/ipam_plugin.go 227: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748" HandleID="k8s-pod-network.768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748" Workload="localhost-k8s-goldmane--666569f655--bqbs9-eth0" Oct 29 20:41:34.745237 containerd[1634]: 2025-10-29 20:41:34.375 [INFO][4459] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748" HandleID="k8s-pod-network.768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748" Workload="localhost-k8s-goldmane--666569f655--bqbs9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-bqbs9", "timestamp":"2025-10-29 20:41:34.373755916 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 20:41:34.745237 containerd[1634]: 2025-10-29 20:41:34.375 [INFO][4459] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 20:41:34.745237 containerd[1634]: 2025-10-29 20:41:34.375 [INFO][4459] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 20:41:34.745237 containerd[1634]: 2025-10-29 20:41:34.375 [INFO][4459] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 20:41:34.745237 containerd[1634]: 2025-10-29 20:41:34.385 [INFO][4459] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748" host="localhost" Oct 29 20:41:34.745237 containerd[1634]: 2025-10-29 20:41:34.407 [INFO][4459] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 20:41:34.745237 containerd[1634]: 2025-10-29 20:41:34.418 [INFO][4459] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 20:41:34.745237 containerd[1634]: 2025-10-29 20:41:34.425 [INFO][4459] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 20:41:34.745237 containerd[1634]: 2025-10-29 20:41:34.429 [INFO][4459] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 20:41:34.745237 containerd[1634]: 2025-10-29 20:41:34.429 [INFO][4459] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748" host="localhost" Oct 29 20:41:34.745237 containerd[1634]: 2025-10-29 20:41:34.430 [INFO][4459] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748 Oct 29 20:41:34.745237 containerd[1634]: 2025-10-29 20:41:34.494 [INFO][4459] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748" host="localhost" Oct 29 20:41:34.745237 containerd[1634]: 2025-10-29 20:41:34.671 [INFO][4459] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748" host="localhost" Oct 29 20:41:34.745237 containerd[1634]: 2025-10-29 20:41:34.671 [INFO][4459] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748" host="localhost" Oct 29 20:41:34.745237 containerd[1634]: 2025-10-29 20:41:34.672 [INFO][4459] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 20:41:34.745237 containerd[1634]: 2025-10-29 20:41:34.672 [INFO][4459] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748" HandleID="k8s-pod-network.768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748" Workload="localhost-k8s-goldmane--666569f655--bqbs9-eth0" Oct 29 20:41:34.746050 containerd[1634]: 2025-10-29 20:41:34.679 [INFO][4357] cni-plugin/k8s.go 418: Populated endpoint ContainerID="768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748" Namespace="calico-system" Pod="goldmane-666569f655-bqbs9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bqbs9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--bqbs9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"d50f9e75-9f55-47f4-8506-1ec4da0d8068", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 20, 41, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-bqbs9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9eae99f7ff1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 20:41:34.746050 containerd[1634]: 2025-10-29 20:41:34.680 [INFO][4357] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748" Namespace="calico-system" Pod="goldmane-666569f655-bqbs9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bqbs9-eth0" Oct 29 20:41:34.746050 containerd[1634]: 2025-10-29 20:41:34.680 [INFO][4357] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9eae99f7ff1 ContainerID="768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748" Namespace="calico-system" Pod="goldmane-666569f655-bqbs9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bqbs9-eth0" Oct 29 20:41:34.746050 containerd[1634]: 2025-10-29 20:41:34.710 [INFO][4357] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748" Namespace="calico-system" Pod="goldmane-666569f655-bqbs9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bqbs9-eth0" Oct 29 20:41:34.746050 containerd[1634]: 2025-10-29 20:41:34.712 [INFO][4357] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748" Namespace="calico-system" Pod="goldmane-666569f655-bqbs9" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bqbs9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--bqbs9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"d50f9e75-9f55-47f4-8506-1ec4da0d8068", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 20, 41, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748", Pod:"goldmane-666569f655-bqbs9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9eae99f7ff1", MAC:"86:3c:e1:05:5d:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 20:41:34.746050 containerd[1634]: 2025-10-29 20:41:34.730 [INFO][4357] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748" Namespace="calico-system" Pod="goldmane-666569f655-bqbs9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bqbs9-eth0" Oct 29 20:41:34.773842 systemd-networkd[1536]: calif6550db751a: Link UP Oct 29 20:41:34.775528 
systemd-networkd[1536]: calif6550db751a: Gained carrier Oct 29 20:41:34.797248 containerd[1634]: 2025-10-29 20:41:34.298 [INFO][4358] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8b8476c65--tgszf-eth0 calico-apiserver-8b8476c65- calico-apiserver 4e297194-f7fa-4d00-92d1-15f2c61b7789 852 0 2025-10-29 20:41:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8b8476c65 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8b8476c65-tgszf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif6550db751a [] [] }} ContainerID="df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95" Namespace="calico-apiserver" Pod="calico-apiserver-8b8476c65-tgszf" WorkloadEndpoint="localhost-k8s-calico--apiserver--8b8476c65--tgszf-" Oct 29 20:41:34.797248 containerd[1634]: 2025-10-29 20:41:34.299 [INFO][4358] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95" Namespace="calico-apiserver" Pod="calico-apiserver-8b8476c65-tgszf" WorkloadEndpoint="localhost-k8s-calico--apiserver--8b8476c65--tgszf-eth0" Oct 29 20:41:34.797248 containerd[1634]: 2025-10-29 20:41:34.404 [INFO][4457] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95" HandleID="k8s-pod-network.df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95" Workload="localhost-k8s-calico--apiserver--8b8476c65--tgszf-eth0" Oct 29 20:41:34.797248 containerd[1634]: 2025-10-29 20:41:34.405 [INFO][4457] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95" 
HandleID="k8s-pod-network.df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95" Workload="localhost-k8s-calico--apiserver--8b8476c65--tgszf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bf960), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8b8476c65-tgszf", "timestamp":"2025-10-29 20:41:34.404786739 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 20:41:34.797248 containerd[1634]: 2025-10-29 20:41:34.405 [INFO][4457] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 20:41:34.797248 containerd[1634]: 2025-10-29 20:41:34.672 [INFO][4457] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 20:41:34.797248 containerd[1634]: 2025-10-29 20:41:34.673 [INFO][4457] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 20:41:34.797248 containerd[1634]: 2025-10-29 20:41:34.688 [INFO][4457] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95" host="localhost" Oct 29 20:41:34.797248 containerd[1634]: 2025-10-29 20:41:34.701 [INFO][4457] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 20:41:34.797248 containerd[1634]: 2025-10-29 20:41:34.708 [INFO][4457] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 20:41:34.797248 containerd[1634]: 2025-10-29 20:41:34.711 [INFO][4457] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 20:41:34.797248 containerd[1634]: 2025-10-29 20:41:34.727 [INFO][4457] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 20:41:34.797248 containerd[1634]: 
2025-10-29 20:41:34.727 [INFO][4457] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95" host="localhost" Oct 29 20:41:34.797248 containerd[1634]: 2025-10-29 20:41:34.735 [INFO][4457] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95 Oct 29 20:41:34.797248 containerd[1634]: 2025-10-29 20:41:34.749 [INFO][4457] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95" host="localhost" Oct 29 20:41:34.797248 containerd[1634]: 2025-10-29 20:41:34.760 [INFO][4457] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95" host="localhost" Oct 29 20:41:34.797248 containerd[1634]: 2025-10-29 20:41:34.760 [INFO][4457] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95" host="localhost" Oct 29 20:41:34.797248 containerd[1634]: 2025-10-29 20:41:34.760 [INFO][4457] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 20:41:34.797248 containerd[1634]: 2025-10-29 20:41:34.761 [INFO][4457] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95" HandleID="k8s-pod-network.df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95" Workload="localhost-k8s-calico--apiserver--8b8476c65--tgszf-eth0" Oct 29 20:41:34.797945 containerd[1634]: 2025-10-29 20:41:34.767 [INFO][4358] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95" Namespace="calico-apiserver" Pod="calico-apiserver-8b8476c65-tgszf" WorkloadEndpoint="localhost-k8s-calico--apiserver--8b8476c65--tgszf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8b8476c65--tgszf-eth0", GenerateName:"calico-apiserver-8b8476c65-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e297194-f7fa-4d00-92d1-15f2c61b7789", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 20, 41, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8b8476c65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8b8476c65-tgszf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif6550db751a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 20:41:34.797945 containerd[1634]: 2025-10-29 20:41:34.767 [INFO][4358] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95" Namespace="calico-apiserver" Pod="calico-apiserver-8b8476c65-tgszf" WorkloadEndpoint="localhost-k8s-calico--apiserver--8b8476c65--tgszf-eth0" Oct 29 20:41:34.797945 containerd[1634]: 2025-10-29 20:41:34.767 [INFO][4358] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6550db751a ContainerID="df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95" Namespace="calico-apiserver" Pod="calico-apiserver-8b8476c65-tgszf" WorkloadEndpoint="localhost-k8s-calico--apiserver--8b8476c65--tgszf-eth0" Oct 29 20:41:34.797945 containerd[1634]: 2025-10-29 20:41:34.776 [INFO][4358] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95" Namespace="calico-apiserver" Pod="calico-apiserver-8b8476c65-tgszf" WorkloadEndpoint="localhost-k8s-calico--apiserver--8b8476c65--tgszf-eth0" Oct 29 20:41:34.797945 containerd[1634]: 2025-10-29 20:41:34.776 [INFO][4358] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95" Namespace="calico-apiserver" Pod="calico-apiserver-8b8476c65-tgszf" WorkloadEndpoint="localhost-k8s-calico--apiserver--8b8476c65--tgszf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8b8476c65--tgszf-eth0", GenerateName:"calico-apiserver-8b8476c65-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"4e297194-f7fa-4d00-92d1-15f2c61b7789", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 20, 41, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8b8476c65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95", Pod:"calico-apiserver-8b8476c65-tgszf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif6550db751a", MAC:"aa:b4:bc:a2:b4:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 20:41:34.797945 containerd[1634]: 2025-10-29 20:41:34.788 [INFO][4358] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95" Namespace="calico-apiserver" Pod="calico-apiserver-8b8476c65-tgszf" WorkloadEndpoint="localhost-k8s-calico--apiserver--8b8476c65--tgszf-eth0" Oct 29 20:41:34.804942 containerd[1634]: time="2025-10-29T20:41:34.804778360Z" level=info msg="connecting to shim 768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748" address="unix:///run/containerd/s/f22ff607d8124b03e2a46a16760d473d322e38227dcbfc4594a2055ce22683ea" namespace=k8s.io protocol=ttrpc 
version=3 Oct 29 20:41:34.816284 containerd[1634]: time="2025-10-29T20:41:34.815423027Z" level=info msg="StartContainer for \"9c24c04cf47f0d89e86a034e01acf12783d2e46b7ec3635122b7a9641768304b\" returns successfully" Oct 29 20:41:34.860792 systemd-networkd[1536]: cali3d146bd8746: Link UP Oct 29 20:41:34.861781 systemd[1]: Started cri-containerd-768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748.scope - libcontainer container 768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748. Oct 29 20:41:34.865777 systemd-networkd[1536]: cali3d146bd8746: Gained carrier Oct 29 20:41:34.867086 containerd[1634]: time="2025-10-29T20:41:34.866655332Z" level=info msg="connecting to shim df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95" address="unix:///run/containerd/s/d9225ace892ead1c17bafac4d108b218fcc454df92d5be236a6698aed5ece623" namespace=k8s.io protocol=ttrpc version=3 Oct 29 20:41:34.894841 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 20:41:34.907426 containerd[1634]: 2025-10-29 20:41:34.365 [INFO][4440] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6747bb86c8--ccg9x-eth0 whisker-6747bb86c8- calico-system e137508a-f04b-4511-9c9c-1f63f8deba2a 944 0 2025-10-29 20:41:33 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6747bb86c8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6747bb86c8-ccg9x eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3d146bd8746 [] [] }} ContainerID="2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2" Namespace="calico-system" Pod="whisker-6747bb86c8-ccg9x" WorkloadEndpoint="localhost-k8s-whisker--6747bb86c8--ccg9x-" Oct 29 20:41:34.907426 containerd[1634]: 2025-10-29 20:41:34.366 
[INFO][4440] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2" Namespace="calico-system" Pod="whisker-6747bb86c8-ccg9x" WorkloadEndpoint="localhost-k8s-whisker--6747bb86c8--ccg9x-eth0" Oct 29 20:41:34.907426 containerd[1634]: 2025-10-29 20:41:34.427 [INFO][4498] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2" HandleID="k8s-pod-network.2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2" Workload="localhost-k8s-whisker--6747bb86c8--ccg9x-eth0" Oct 29 20:41:34.907426 containerd[1634]: 2025-10-29 20:41:34.428 [INFO][4498] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2" HandleID="k8s-pod-network.2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2" Workload="localhost-k8s-whisker--6747bb86c8--ccg9x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139900), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6747bb86c8-ccg9x", "timestamp":"2025-10-29 20:41:34.427849411 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 20:41:34.907426 containerd[1634]: 2025-10-29 20:41:34.428 [INFO][4498] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 20:41:34.907426 containerd[1634]: 2025-10-29 20:41:34.760 [INFO][4498] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 20:41:34.907426 containerd[1634]: 2025-10-29 20:41:34.761 [INFO][4498] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 20:41:34.907426 containerd[1634]: 2025-10-29 20:41:34.792 [INFO][4498] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2" host="localhost" Oct 29 20:41:34.907426 containerd[1634]: 2025-10-29 20:41:34.800 [INFO][4498] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 20:41:34.907426 containerd[1634]: 2025-10-29 20:41:34.810 [INFO][4498] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 20:41:34.907426 containerd[1634]: 2025-10-29 20:41:34.814 [INFO][4498] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 20:41:34.907426 containerd[1634]: 2025-10-29 20:41:34.822 [INFO][4498] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 20:41:34.907426 containerd[1634]: 2025-10-29 20:41:34.822 [INFO][4498] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2" host="localhost" Oct 29 20:41:34.907426 containerd[1634]: 2025-10-29 20:41:34.825 [INFO][4498] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2 Oct 29 20:41:34.907426 containerd[1634]: 2025-10-29 20:41:34.833 [INFO][4498] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2" host="localhost" Oct 29 20:41:34.907426 containerd[1634]: 2025-10-29 20:41:34.841 [INFO][4498] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2" host="localhost" Oct 29 20:41:34.907426 containerd[1634]: 2025-10-29 20:41:34.842 [INFO][4498] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2" host="localhost" Oct 29 20:41:34.907426 containerd[1634]: 2025-10-29 20:41:34.842 [INFO][4498] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 20:41:34.907426 containerd[1634]: 2025-10-29 20:41:34.842 [INFO][4498] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2" HandleID="k8s-pod-network.2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2" Workload="localhost-k8s-whisker--6747bb86c8--ccg9x-eth0" Oct 29 20:41:34.907955 containerd[1634]: 2025-10-29 20:41:34.849 [INFO][4440] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2" Namespace="calico-system" Pod="whisker-6747bb86c8-ccg9x" WorkloadEndpoint="localhost-k8s-whisker--6747bb86c8--ccg9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6747bb86c8--ccg9x-eth0", GenerateName:"whisker-6747bb86c8-", Namespace:"calico-system", SelfLink:"", UID:"e137508a-f04b-4511-9c9c-1f63f8deba2a", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 20, 41, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6747bb86c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6747bb86c8-ccg9x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3d146bd8746", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 20:41:34.907955 containerd[1634]: 2025-10-29 20:41:34.851 [INFO][4440] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2" Namespace="calico-system" Pod="whisker-6747bb86c8-ccg9x" WorkloadEndpoint="localhost-k8s-whisker--6747bb86c8--ccg9x-eth0" Oct 29 20:41:34.907955 containerd[1634]: 2025-10-29 20:41:34.851 [INFO][4440] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d146bd8746 ContainerID="2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2" Namespace="calico-system" Pod="whisker-6747bb86c8-ccg9x" WorkloadEndpoint="localhost-k8s-whisker--6747bb86c8--ccg9x-eth0" Oct 29 20:41:34.907955 containerd[1634]: 2025-10-29 20:41:34.872 [INFO][4440] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2" Namespace="calico-system" Pod="whisker-6747bb86c8-ccg9x" WorkloadEndpoint="localhost-k8s-whisker--6747bb86c8--ccg9x-eth0" Oct 29 20:41:34.907955 containerd[1634]: 2025-10-29 20:41:34.873 [INFO][4440] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2" Namespace="calico-system" Pod="whisker-6747bb86c8-ccg9x" 
WorkloadEndpoint="localhost-k8s-whisker--6747bb86c8--ccg9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6747bb86c8--ccg9x-eth0", GenerateName:"whisker-6747bb86c8-", Namespace:"calico-system", SelfLink:"", UID:"e137508a-f04b-4511-9c9c-1f63f8deba2a", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 20, 41, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6747bb86c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2", Pod:"whisker-6747bb86c8-ccg9x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3d146bd8746", MAC:"02:6b:f4:45:69:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 20:41:34.907955 containerd[1634]: 2025-10-29 20:41:34.894 [INFO][4440] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2" Namespace="calico-system" Pod="whisker-6747bb86c8-ccg9x" WorkloadEndpoint="localhost-k8s-whisker--6747bb86c8--ccg9x-eth0" Oct 29 20:41:34.912981 systemd[1]: Started 
cri-containerd-df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95.scope - libcontainer container df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95. Oct 29 20:41:34.943760 containerd[1634]: time="2025-10-29T20:41:34.943698523Z" level=info msg="connecting to shim 2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2" address="unix:///run/containerd/s/8fecc6ed00121dd69db42ba66747c3ab5bc3d23d08435e8980faa289fa78e345" namespace=k8s.io protocol=ttrpc version=3 Oct 29 20:41:34.960623 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 20:41:34.961415 containerd[1634]: time="2025-10-29T20:41:34.961351203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bqbs9,Uid:d50f9e75-9f55-47f4-8506-1ec4da0d8068,Namespace:calico-system,Attempt:0,} returns sandbox id \"768287b34c7a591cde7e2782c56e2c6f1ba7cf9ff35819985d88cd8a7eebc748\"" Oct 29 20:41:34.967649 containerd[1634]: time="2025-10-29T20:41:34.967539365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 20:41:35.011873 systemd[1]: Started cri-containerd-2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2.scope - libcontainer container 2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2. 
Oct 29 20:41:35.048882 containerd[1634]: time="2025-10-29T20:41:35.048836134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8b8476c65-tgszf,Uid:4e297194-f7fa-4d00-92d1-15f2c61b7789,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"df16ce4474a36394f2bafe514f45672793c4a2344840bc1c0d8af5b31047ca95\"" Oct 29 20:41:35.064655 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 20:41:35.120422 containerd[1634]: time="2025-10-29T20:41:35.120282438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6747bb86c8-ccg9x,Uid:e137508a-f04b-4511-9c9c-1f63f8deba2a,Namespace:calico-system,Attempt:0,} returns sandbox id \"2c25a368abba64fca76c2a9bdd16eb04f440c82db1e017594748cc216b0edac2\"" Oct 29 20:41:35.136500 containerd[1634]: time="2025-10-29T20:41:35.135701193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hphmg,Uid:9b38e512-971c-4a46-9cd5-db7a73bc8089,Namespace:calico-system,Attempt:0,}" Oct 29 20:41:35.136500 containerd[1634]: time="2025-10-29T20:41:35.135750868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-854f888fbc-vrqqc,Uid:cbb5467a-783c-44e1-8c0d-4dca5f58eee2,Namespace:calico-system,Attempt:0,}" Oct 29 20:41:35.138869 kubelet[2837]: I1029 20:41:35.138731 2837 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6845a39b-fd17-44b6-8fca-090d62827c76" path="/var/lib/kubelet/pods/6845a39b-fd17-44b6-8fca-090d62827c76/volumes" Oct 29 20:41:35.298160 systemd-networkd[1536]: cali23db52d0235: Link UP Oct 29 20:41:35.300996 systemd-networkd[1536]: cali23db52d0235: Gained carrier Oct 29 20:41:35.306532 containerd[1634]: time="2025-10-29T20:41:35.306498475Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 20:41:35.308497 containerd[1634]: time="2025-10-29T20:41:35.308371652Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 20:41:35.308497 containerd[1634]: time="2025-10-29T20:41:35.308498536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 20:41:35.308833 kubelet[2837]: E1029 20:41:35.308782 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 20:41:35.308879 kubelet[2837]: E1029 20:41:35.308845 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 20:41:35.309224 kubelet[2837]: E1029 20:41:35.309160 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7w4qz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bqbs9_calico-system(d50f9e75-9f55-47f4-8506-1ec4da0d8068): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 20:41:35.309355 containerd[1634]: time="2025-10-29T20:41:35.309319608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 20:41:35.310516 kubelet[2837]: E1029 20:41:35.310472 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bqbs9" podUID="d50f9e75-9f55-47f4-8506-1ec4da0d8068" Oct 29 20:41:35.317896 containerd[1634]: 2025-10-29 20:41:35.210 [INFO][4787] cni-plugin/plugin.go 340: Calico CNI found 
existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--854f888fbc--vrqqc-eth0 calico-kube-controllers-854f888fbc- calico-system cbb5467a-783c-44e1-8c0d-4dca5f58eee2 846 0 2025-10-29 20:41:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:854f888fbc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-854f888fbc-vrqqc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali23db52d0235 [] [] }} ContainerID="dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184" Namespace="calico-system" Pod="calico-kube-controllers-854f888fbc-vrqqc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--854f888fbc--vrqqc-" Oct 29 20:41:35.317896 containerd[1634]: 2025-10-29 20:41:35.211 [INFO][4787] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184" Namespace="calico-system" Pod="calico-kube-controllers-854f888fbc-vrqqc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--854f888fbc--vrqqc-eth0" Oct 29 20:41:35.317896 containerd[1634]: 2025-10-29 20:41:35.253 [INFO][4830] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184" HandleID="k8s-pod-network.dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184" Workload="localhost-k8s-calico--kube--controllers--854f888fbc--vrqqc-eth0" Oct 29 20:41:35.317896 containerd[1634]: 2025-10-29 20:41:35.253 [INFO][4830] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184" HandleID="k8s-pod-network.dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184" 
Workload="localhost-k8s-calico--kube--controllers--854f888fbc--vrqqc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6140), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-854f888fbc-vrqqc", "timestamp":"2025-10-29 20:41:35.25338646 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 20:41:35.317896 containerd[1634]: 2025-10-29 20:41:35.253 [INFO][4830] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 20:41:35.317896 containerd[1634]: 2025-10-29 20:41:35.253 [INFO][4830] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 20:41:35.317896 containerd[1634]: 2025-10-29 20:41:35.253 [INFO][4830] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 20:41:35.317896 containerd[1634]: 2025-10-29 20:41:35.260 [INFO][4830] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184" host="localhost" Oct 29 20:41:35.317896 containerd[1634]: 2025-10-29 20:41:35.265 [INFO][4830] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 20:41:35.317896 containerd[1634]: 2025-10-29 20:41:35.271 [INFO][4830] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 20:41:35.317896 containerd[1634]: 2025-10-29 20:41:35.273 [INFO][4830] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 20:41:35.317896 containerd[1634]: 2025-10-29 20:41:35.276 [INFO][4830] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 20:41:35.317896 containerd[1634]: 2025-10-29 20:41:35.276 [INFO][4830] ipam/ipam.go 1219: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184" host="localhost" Oct 29 20:41:35.317896 containerd[1634]: 2025-10-29 20:41:35.278 [INFO][4830] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184 Oct 29 20:41:35.317896 containerd[1634]: 2025-10-29 20:41:35.282 [INFO][4830] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184" host="localhost" Oct 29 20:41:35.317896 containerd[1634]: 2025-10-29 20:41:35.291 [INFO][4830] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184" host="localhost" Oct 29 20:41:35.317896 containerd[1634]: 2025-10-29 20:41:35.291 [INFO][4830] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184" host="localhost" Oct 29 20:41:35.317896 containerd[1634]: 2025-10-29 20:41:35.292 [INFO][4830] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 20:41:35.317896 containerd[1634]: 2025-10-29 20:41:35.292 [INFO][4830] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184" HandleID="k8s-pod-network.dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184" Workload="localhost-k8s-calico--kube--controllers--854f888fbc--vrqqc-eth0" Oct 29 20:41:35.319636 containerd[1634]: 2025-10-29 20:41:35.295 [INFO][4787] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184" Namespace="calico-system" Pod="calico-kube-controllers-854f888fbc-vrqqc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--854f888fbc--vrqqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--854f888fbc--vrqqc-eth0", GenerateName:"calico-kube-controllers-854f888fbc-", Namespace:"calico-system", SelfLink:"", UID:"cbb5467a-783c-44e1-8c0d-4dca5f58eee2", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 20, 41, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"854f888fbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-854f888fbc-vrqqc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali23db52d0235", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 20:41:35.319636 containerd[1634]: 2025-10-29 20:41:35.295 [INFO][4787] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184" Namespace="calico-system" Pod="calico-kube-controllers-854f888fbc-vrqqc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--854f888fbc--vrqqc-eth0" Oct 29 20:41:35.319636 containerd[1634]: 2025-10-29 20:41:35.295 [INFO][4787] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali23db52d0235 ContainerID="dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184" Namespace="calico-system" Pod="calico-kube-controllers-854f888fbc-vrqqc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--854f888fbc--vrqqc-eth0" Oct 29 20:41:35.319636 containerd[1634]: 2025-10-29 20:41:35.302 [INFO][4787] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184" Namespace="calico-system" Pod="calico-kube-controllers-854f888fbc-vrqqc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--854f888fbc--vrqqc-eth0" Oct 29 20:41:35.319636 containerd[1634]: 2025-10-29 20:41:35.302 [INFO][4787] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184" Namespace="calico-system" Pod="calico-kube-controllers-854f888fbc-vrqqc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--854f888fbc--vrqqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--854f888fbc--vrqqc-eth0", GenerateName:"calico-kube-controllers-854f888fbc-", Namespace:"calico-system", SelfLink:"", UID:"cbb5467a-783c-44e1-8c0d-4dca5f58eee2", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 20, 41, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"854f888fbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184", Pod:"calico-kube-controllers-854f888fbc-vrqqc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali23db52d0235", MAC:"12:cb:fe:c3:05:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 20:41:35.319636 containerd[1634]: 2025-10-29 20:41:35.314 [INFO][4787] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184" Namespace="calico-system" Pod="calico-kube-controllers-854f888fbc-vrqqc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--854f888fbc--vrqqc-eth0" Oct 29 20:41:35.334229 kubelet[2837]: E1029 20:41:35.334168 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bqbs9" podUID="d50f9e75-9f55-47f4-8506-1ec4da0d8068" Oct 29 20:41:35.338391 kubelet[2837]: E1029 20:41:35.338344 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:35.342792 kubelet[2837]: E1029 20:41:35.342750 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:35.348154 containerd[1634]: time="2025-10-29T20:41:35.347633465Z" level=info msg="connecting to shim dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184" address="unix:///run/containerd/s/2f95feda2ccb6a55475db3336dc98b5d1bb311b788ff6b8f20887d23ae4bc302" namespace=k8s.io protocol=ttrpc version=3 Oct 29 20:41:35.348379 kubelet[2837]: E1029 20:41:35.348330 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b8476c65-tgp87" podUID="deaef94c-fe21-40af-a068-2605dc363c66" Oct 29 20:41:35.372871 kubelet[2837]: I1029 20:41:35.372797 2837 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wmmxw" podStartSLOduration=41.37277523 podStartE2EDuration="41.37277523s" podCreationTimestamp="2025-10-29 20:40:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 20:41:35.370885171 +0000 UTC m=+48.329635879" watchObservedRunningTime="2025-10-29 20:41:35.37277523 +0000 UTC m=+48.331525948" Oct 29 20:41:35.394725 systemd[1]: Started cri-containerd-dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184.scope - libcontainer container dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184. Oct 29 20:41:35.402566 systemd-networkd[1536]: cali06d7acee0de: Gained IPv6LL Oct 29 20:41:35.409228 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 20:41:35.534748 containerd[1634]: time="2025-10-29T20:41:35.534688761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-854f888fbc-vrqqc,Uid:cbb5467a-783c-44e1-8c0d-4dca5f58eee2,Namespace:calico-system,Attempt:0,} returns sandbox id \"dc2dc29a6f76d3d11e72c79b4f4c872a57274e25a77fda6fa86fef2c613f1184\"" Oct 29 20:41:35.656634 systemd-networkd[1536]: cali73798279673: Gained IPv6LL Oct 29 20:41:35.716869 containerd[1634]: time="2025-10-29T20:41:35.716796492Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 20:41:35.840294 containerd[1634]: time="2025-10-29T20:41:35.840226460Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 20:41:35.840420 containerd[1634]: time="2025-10-29T20:41:35.840268811Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 20:41:35.840534 kubelet[2837]: E1029 20:41:35.840498 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 20:41:35.840917 kubelet[2837]: E1029 20:41:35.840547 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 20:41:35.840917 kubelet[2837]: E1029 20:41:35.840804 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bgmxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8b8476c65-tgszf_calico-apiserver(4e297194-f7fa-4d00-92d1-15f2c61b7789): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 20:41:35.841596 containerd[1634]: time="2025-10-29T20:41:35.841568400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 20:41:35.842735 kubelet[2837]: E1029 20:41:35.842685 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b8476c65-tgszf" podUID="4e297194-f7fa-4d00-92d1-15f2c61b7789" Oct 29 20:41:35.848654 systemd-networkd[1536]: cali9eae99f7ff1: Gained IPv6LL Oct 29 20:41:35.882897 kubelet[2837]: I1029 20:41:35.882807 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-t99tj" podStartSLOduration=41.882784855 podStartE2EDuration="41.882784855s" podCreationTimestamp="2025-10-29 20:40:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 20:41:35.833799162 +0000 UTC m=+48.792549900" watchObservedRunningTime="2025-10-29 20:41:35.882784855 +0000 UTC m=+48.841535563" Oct 29 20:41:35.907171 systemd-networkd[1536]: calif77e2081c73: Link UP Oct 29 20:41:35.908116 systemd-networkd[1536]: calif77e2081c73: Gained carrier Oct 29 20:41:35.925476 containerd[1634]: 2025-10-29 20:41:35.212 [INFO][4780] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--hphmg-eth0 csi-node-driver- calico-system 
9b38e512-971c-4a46-9cd5-db7a73bc8089 733 0 2025-10-29 20:41:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-hphmg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif77e2081c73 [] [] }} ContainerID="c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4" Namespace="calico-system" Pod="csi-node-driver-hphmg" WorkloadEndpoint="localhost-k8s-csi--node--driver--hphmg-" Oct 29 20:41:35.925476 containerd[1634]: 2025-10-29 20:41:35.213 [INFO][4780] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4" Namespace="calico-system" Pod="csi-node-driver-hphmg" WorkloadEndpoint="localhost-k8s-csi--node--driver--hphmg-eth0" Oct 29 20:41:35.925476 containerd[1634]: 2025-10-29 20:41:35.263 [INFO][4828] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4" HandleID="k8s-pod-network.c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4" Workload="localhost-k8s-csi--node--driver--hphmg-eth0" Oct 29 20:41:35.925476 containerd[1634]: 2025-10-29 20:41:35.263 [INFO][4828] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4" HandleID="k8s-pod-network.c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4" Workload="localhost-k8s-csi--node--driver--hphmg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf730), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-hphmg", "timestamp":"2025-10-29 20:41:35.263392932 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 20:41:35.925476 containerd[1634]: 2025-10-29 20:41:35.263 [INFO][4828] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 20:41:35.925476 containerd[1634]: 2025-10-29 20:41:35.292 [INFO][4828] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 20:41:35.925476 containerd[1634]: 2025-10-29 20:41:35.292 [INFO][4828] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 20:41:35.925476 containerd[1634]: 2025-10-29 20:41:35.364 [INFO][4828] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4" host="localhost" Oct 29 20:41:35.925476 containerd[1634]: 2025-10-29 20:41:35.379 [INFO][4828] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 20:41:35.925476 containerd[1634]: 2025-10-29 20:41:35.841 [INFO][4828] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 20:41:35.925476 containerd[1634]: 2025-10-29 20:41:35.869 [INFO][4828] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 20:41:35.925476 containerd[1634]: 2025-10-29 20:41:35.874 [INFO][4828] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 20:41:35.925476 containerd[1634]: 2025-10-29 20:41:35.874 [INFO][4828] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4" host="localhost" Oct 29 20:41:35.925476 containerd[1634]: 2025-10-29 20:41:35.880 [INFO][4828] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4 Oct 29 20:41:35.925476 containerd[1634]: 2025-10-29 20:41:35.893 [INFO][4828] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4" host="localhost" Oct 29 20:41:35.925476 containerd[1634]: 2025-10-29 20:41:35.900 [INFO][4828] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4" host="localhost" Oct 29 20:41:35.925476 containerd[1634]: 2025-10-29 20:41:35.900 [INFO][4828] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4" host="localhost" Oct 29 20:41:35.925476 containerd[1634]: 2025-10-29 20:41:35.900 [INFO][4828] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 20:41:35.925476 containerd[1634]: 2025-10-29 20:41:35.900 [INFO][4828] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4" HandleID="k8s-pod-network.c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4" Workload="localhost-k8s-csi--node--driver--hphmg-eth0" Oct 29 20:41:35.926201 containerd[1634]: 2025-10-29 20:41:35.904 [INFO][4780] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4" Namespace="calico-system" Pod="csi-node-driver-hphmg" WorkloadEndpoint="localhost-k8s-csi--node--driver--hphmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hphmg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9b38e512-971c-4a46-9cd5-db7a73bc8089", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 20, 41, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-hphmg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif77e2081c73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 20:41:35.926201 containerd[1634]: 2025-10-29 20:41:35.904 [INFO][4780] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4" Namespace="calico-system" Pod="csi-node-driver-hphmg" WorkloadEndpoint="localhost-k8s-csi--node--driver--hphmg-eth0" Oct 29 20:41:35.926201 containerd[1634]: 2025-10-29 20:41:35.904 [INFO][4780] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif77e2081c73 ContainerID="c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4" Namespace="calico-system" Pod="csi-node-driver-hphmg" WorkloadEndpoint="localhost-k8s-csi--node--driver--hphmg-eth0" Oct 29 20:41:35.926201 containerd[1634]: 2025-10-29 20:41:35.908 [INFO][4780] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4" Namespace="calico-system" Pod="csi-node-driver-hphmg" WorkloadEndpoint="localhost-k8s-csi--node--driver--hphmg-eth0" Oct 29 20:41:35.926201 containerd[1634]: 2025-10-29 20:41:35.908 [INFO][4780] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4" Namespace="calico-system" Pod="csi-node-driver-hphmg" WorkloadEndpoint="localhost-k8s-csi--node--driver--hphmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hphmg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9b38e512-971c-4a46-9cd5-db7a73bc8089", ResourceVersion:"733", Generation:0, 
CreationTimestamp:time.Date(2025, time.October, 29, 20, 41, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4", Pod:"csi-node-driver-hphmg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif77e2081c73", MAC:"2e:91:30:84:34:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 20:41:35.926201 containerd[1634]: 2025-10-29 20:41:35.918 [INFO][4780] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4" Namespace="calico-system" Pod="csi-node-driver-hphmg" WorkloadEndpoint="localhost-k8s-csi--node--driver--hphmg-eth0" Oct 29 20:41:35.954499 containerd[1634]: time="2025-10-29T20:41:35.954421063Z" level=info msg="connecting to shim c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4" address="unix:///run/containerd/s/b765782e6656e1ed8f5809e1ad3a32391c6868853ad9ce2166548db3f9d84c9c" namespace=k8s.io protocol=ttrpc version=3 Oct 29 20:41:35.990781 systemd[1]: Started 
cri-containerd-c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4.scope - libcontainer container c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4. Oct 29 20:41:36.010097 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 20:41:36.027227 containerd[1634]: time="2025-10-29T20:41:36.027177917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hphmg,Uid:9b38e512-971c-4a46-9cd5-db7a73bc8089,Namespace:calico-system,Attempt:0,} returns sandbox id \"c45ea82ec7b100dd39c96eab42ac1611908dcffefe9a1f879cb81b8cd301f8b4\"" Oct 29 20:41:36.040696 systemd-networkd[1536]: vxlan.calico: Gained IPv6LL Oct 29 20:41:36.104732 systemd-networkd[1536]: cali6af0a8c7503: Gained IPv6LL Oct 29 20:41:36.227751 containerd[1634]: time="2025-10-29T20:41:36.227623964Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 20:41:36.228872 containerd[1634]: time="2025-10-29T20:41:36.228826597Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 20:41:36.228934 containerd[1634]: time="2025-10-29T20:41:36.228904295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 20:41:36.229156 kubelet[2837]: E1029 20:41:36.229104 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 20:41:36.229227 kubelet[2837]: E1029 20:41:36.229166 2837 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 20:41:36.229531 kubelet[2837]: E1029 20:41:36.229412 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:0fa70a9d9e23480ead0c039f42fa6ebe,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nkpd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-6747bb86c8-ccg9x_calico-system(e137508a-f04b-4511-9c9c-1f63f8deba2a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 20:41:36.229695 containerd[1634]: time="2025-10-29T20:41:36.229515986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 20:41:36.352735 kubelet[2837]: E1029 20:41:36.352277 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:36.352898 kubelet[2837]: E1029 20:41:36.352819 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:36.352973 kubelet[2837]: E1029 20:41:36.352944 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b8476c65-tgszf" podUID="4e297194-f7fa-4d00-92d1-15f2c61b7789" Oct 29 20:41:36.352973 kubelet[2837]: E1029 20:41:36.352916 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bqbs9" podUID="d50f9e75-9f55-47f4-8506-1ec4da0d8068" Oct 29 20:41:36.488687 systemd-networkd[1536]: cali3d146bd8746: Gained IPv6LL Oct 29 20:41:36.561693 containerd[1634]: time="2025-10-29T20:41:36.561621123Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 20:41:36.562997 containerd[1634]: time="2025-10-29T20:41:36.562940418Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 20:41:36.563275 kubelet[2837]: E1029 20:41:36.563195 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 20:41:36.563275 kubelet[2837]: E1029 20:41:36.563256 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 20:41:36.563736 kubelet[2837]: E1029 20:41:36.563593 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-789st,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-854f888fbc-vrqqc_calico-system(cbb5467a-783c-44e1-8c0d-4dca5f58eee2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 20:41:36.565702 kubelet[2837]: E1029 20:41:36.565664 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-854f888fbc-vrqqc" podUID="cbb5467a-783c-44e1-8c0d-4dca5f58eee2" Oct 29 20:41:36.571834 containerd[1634]: time="2025-10-29T20:41:36.563038796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes 
read=85" Oct 29 20:41:36.571925 containerd[1634]: time="2025-10-29T20:41:36.563666428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 20:41:36.616635 systemd-networkd[1536]: calif6550db751a: Gained IPv6LL Oct 29 20:41:36.910485 containerd[1634]: time="2025-10-29T20:41:36.909952675Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 20:41:36.911478 containerd[1634]: time="2025-10-29T20:41:36.911273925Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 20:41:36.911478 containerd[1634]: time="2025-10-29T20:41:36.911380610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 20:41:36.911962 kubelet[2837]: E1029 20:41:36.911844 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 20:41:36.911962 kubelet[2837]: E1029 20:41:36.911929 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 20:41:36.912855 kubelet[2837]: E1029 20:41:36.912690 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-klbvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hphmg_calico-system(9b38e512-971c-4a46-9cd5-db7a73bc8089): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 20:41:36.913247 containerd[1634]: time="2025-10-29T20:41:36.913184893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 20:41:37.064667 systemd-networkd[1536]: calif77e2081c73: Gained IPv6LL Oct 29 20:41:37.250844 containerd[1634]: time="2025-10-29T20:41:37.250794867Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 20:41:37.320629 systemd-networkd[1536]: cali23db52d0235: Gained IPv6LL Oct 29 20:41:37.354205 kubelet[2837]: E1029 20:41:37.354164 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:37.354327 kubelet[2837]: E1029 20:41:37.354279 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:41:37.355396 kubelet[2837]: E1029 20:41:37.355364 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-854f888fbc-vrqqc" podUID="cbb5467a-783c-44e1-8c0d-4dca5f58eee2" Oct 29 20:41:37.525530 containerd[1634]: time="2025-10-29T20:41:37.525333577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 20:41:37.525530 containerd[1634]: time="2025-10-29T20:41:37.525389835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 20:41:37.525711 kubelet[2837]: E1029 20:41:37.525645 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 20:41:37.525711 kubelet[2837]: E1029 20:41:37.525690 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 20:41:37.525957 kubelet[2837]: E1029 20:41:37.525907 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkpd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6747bb86c8-ccg9x_calico-system(e137508a-f04b-4511-9c9c-1f63f8deba2a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 20:41:37.526246 containerd[1634]: time="2025-10-29T20:41:37.526096638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 20:41:37.527368 kubelet[2837]: E1029 20:41:37.527295 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6747bb86c8-ccg9x" podUID="e137508a-f04b-4511-9c9c-1f63f8deba2a" Oct 29 20:41:38.142550 containerd[1634]: time="2025-10-29T20:41:38.142442593Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 20:41:38.356870 kubelet[2837]: E1029 20:41:38.356794 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6747bb86c8-ccg9x" podUID="e137508a-f04b-4511-9c9c-1f63f8deba2a" Oct 29 20:41:38.374157 containerd[1634]: time="2025-10-29T20:41:38.374101850Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 20:41:38.374245 containerd[1634]: time="2025-10-29T20:41:38.374171805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 20:41:38.374360 kubelet[2837]: E1029 20:41:38.374308 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 20:41:38.374401 kubelet[2837]: E1029 20:41:38.374349 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 20:41:38.374967 kubelet[2837]: E1029 20:41:38.374523 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-klbvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Conta
inerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hphmg_calico-system(9b38e512-971c-4a46-9cd5-db7a73bc8089): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 20:41:38.375761 kubelet[2837]: E1029 20:41:38.375714 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hphmg" podUID="9b38e512-971c-4a46-9cd5-db7a73bc8089" Oct 29 20:41:38.818387 systemd[1]: Started sshd@9-10.0.0.139:22-10.0.0.1:38584.service - OpenSSH per-connection server daemon (10.0.0.1:38584). Oct 29 20:41:38.972113 sshd[4979]: Accepted publickey for core from 10.0.0.1 port 38584 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM Oct 29 20:41:38.974952 sshd-session[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 20:41:38.987256 systemd-logind[1604]: New session 11 of user core. Oct 29 20:41:38.995608 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 29 20:41:39.197089 sshd[4987]: Connection closed by 10.0.0.1 port 38584 Oct 29 20:41:39.197579 sshd-session[4979]: pam_unix(sshd:session): session closed for user core Oct 29 20:41:39.201812 systemd[1]: sshd@9-10.0.0.139:22-10.0.0.1:38584.service: Deactivated successfully. Oct 29 20:41:39.204142 systemd[1]: session-11.scope: Deactivated successfully. Oct 29 20:41:39.205031 systemd-logind[1604]: Session 11 logged out. Waiting for processes to exit. Oct 29 20:41:39.206254 systemd-logind[1604]: Removed session 11. Oct 29 20:41:39.358833 kubelet[2837]: E1029 20:41:39.358760 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hphmg" podUID="9b38e512-971c-4a46-9cd5-db7a73bc8089" Oct 29 20:41:44.214428 systemd[1]: Started sshd@10-10.0.0.139:22-10.0.0.1:43498.service - OpenSSH per-connection server daemon (10.0.0.1:43498). 
Oct 29 20:41:44.276743 sshd[5011]: Accepted publickey for core from 10.0.0.1 port 43498 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM Oct 29 20:41:44.279059 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 20:41:44.283361 systemd-logind[1604]: New session 12 of user core. Oct 29 20:41:44.291583 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 29 20:41:44.364040 sshd[5015]: Connection closed by 10.0.0.1 port 43498 Oct 29 20:41:44.364402 sshd-session[5011]: pam_unix(sshd:session): session closed for user core Oct 29 20:41:44.369052 systemd[1]: sshd@10-10.0.0.139:22-10.0.0.1:43498.service: Deactivated successfully. Oct 29 20:41:44.371055 systemd[1]: session-12.scope: Deactivated successfully. Oct 29 20:41:44.371807 systemd-logind[1604]: Session 12 logged out. Waiting for processes to exit. Oct 29 20:41:44.372837 systemd-logind[1604]: Removed session 12. Oct 29 20:41:47.135877 containerd[1634]: time="2025-10-29T20:41:47.135790851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 20:41:47.668107 containerd[1634]: time="2025-10-29T20:41:47.668011813Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 20:41:47.669596 containerd[1634]: time="2025-10-29T20:41:47.669556769Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 20:41:47.669676 containerd[1634]: time="2025-10-29T20:41:47.669644947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 20:41:47.669890 kubelet[2837]: E1029 20:41:47.669810 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 20:41:47.670265 kubelet[2837]: E1029 20:41:47.669914 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 20:41:47.670265 kubelet[2837]: E1029 20:41:47.670102 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:ni
l,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7w4qz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bqbs9_calico-system(d50f9e75-9f55-47f4-8506-1ec4da0d8068): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 20:41:47.671445 kubelet[2837]: E1029 20:41:47.671379 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bqbs9" podUID="d50f9e75-9f55-47f4-8506-1ec4da0d8068" Oct 29 20:41:48.136024 containerd[1634]: time="2025-10-29T20:41:48.135972016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 20:41:48.475716 containerd[1634]: time="2025-10-29T20:41:48.475653976Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 20:41:48.518563 containerd[1634]: time="2025-10-29T20:41:48.518445903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 20:41:48.518563 containerd[1634]: time="2025-10-29T20:41:48.518518722Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 20:41:48.518841 kubelet[2837]: E1029 20:41:48.518786 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 20:41:48.518889 kubelet[2837]: E1029 20:41:48.518845 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 
29 20:41:48.519036 kubelet[2837]: E1029 20:41:48.518994 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bgmxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8b8476c65-tgszf_calico-apiserver(4e297194-f7fa-4d00-92d1-15f2c61b7789): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 20:41:48.520200 kubelet[2837]: E1029 20:41:48.520162 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b8476c65-tgszf" podUID="4e297194-f7fa-4d00-92d1-15f2c61b7789" Oct 29 20:41:49.136075 containerd[1634]: time="2025-10-29T20:41:49.135742440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 20:41:49.382869 systemd[1]: Started 
sshd@11-10.0.0.139:22-10.0.0.1:43506.service - OpenSSH per-connection server daemon (10.0.0.1:43506). Oct 29 20:41:49.444897 sshd[5040]: Accepted publickey for core from 10.0.0.1 port 43506 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM Oct 29 20:41:49.447052 sshd-session[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 20:41:49.452261 systemd-logind[1604]: New session 13 of user core. Oct 29 20:41:49.462668 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 29 20:41:49.497661 containerd[1634]: time="2025-10-29T20:41:49.497579945Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 20:41:49.585740 containerd[1634]: time="2025-10-29T20:41:49.585656058Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 20:41:49.585908 containerd[1634]: time="2025-10-29T20:41:49.585688520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 20:41:49.586054 kubelet[2837]: E1029 20:41:49.586003 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 20:41:49.586054 kubelet[2837]: E1029 20:41:49.586056 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 20:41:49.586622 kubelet[2837]: E1029 20:41:49.586319 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ps94f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8b8476c65-tgp87_calico-apiserver(deaef94c-fe21-40af-a068-2605dc363c66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 20:41:49.586742 containerd[1634]: time="2025-10-29T20:41:49.586419285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 20:41:49.587552 kubelet[2837]: E1029 20:41:49.587516 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b8476c65-tgp87" podUID="deaef94c-fe21-40af-a068-2605dc363c66" Oct 29 20:41:49.589530 sshd[5044]: Connection closed by 
10.0.0.1 port 43506 Oct 29 20:41:49.589856 sshd-session[5040]: pam_unix(sshd:session): session closed for user core Oct 29 20:41:49.594973 systemd[1]: sshd@11-10.0.0.139:22-10.0.0.1:43506.service: Deactivated successfully. Oct 29 20:41:49.597234 systemd[1]: session-13.scope: Deactivated successfully. Oct 29 20:41:49.598239 systemd-logind[1604]: Session 13 logged out. Waiting for processes to exit. Oct 29 20:41:49.600337 systemd-logind[1604]: Removed session 13. Oct 29 20:41:49.960655 containerd[1634]: time="2025-10-29T20:41:49.960575904Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 20:41:49.961901 containerd[1634]: time="2025-10-29T20:41:49.961855819Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 20:41:49.961965 containerd[1634]: time="2025-10-29T20:41:49.961905103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 20:41:49.962237 kubelet[2837]: E1029 20:41:49.962161 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 20:41:49.962323 kubelet[2837]: E1029 20:41:49.962242 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 20:41:49.962482 kubelet[2837]: E1029 20:41:49.962417 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-789st,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-854f888fbc-vrqqc_calico-system(cbb5467a-783c-44e1-8c0d-4dca5f58eee2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 20:41:49.963786 kubelet[2837]: E1029 20:41:49.963684 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-854f888fbc-vrqqc" podUID="cbb5467a-783c-44e1-8c0d-4dca5f58eee2" Oct 29 20:41:51.137824 containerd[1634]: time="2025-10-29T20:41:51.137774422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 20:41:51.525915 containerd[1634]: 
time="2025-10-29T20:41:51.525851111Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 20:41:51.634199 containerd[1634]: time="2025-10-29T20:41:51.634131772Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 20:41:51.634343 containerd[1634]: time="2025-10-29T20:41:51.634192006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 20:41:51.634417 kubelet[2837]: E1029 20:41:51.634359 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 20:41:51.634790 kubelet[2837]: E1029 20:41:51.634424 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 20:41:51.634790 kubelet[2837]: E1029 20:41:51.634731 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-klbvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hphmg_calico-system(9b38e512-971c-4a46-9cd5-db7a73bc8089): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 20:41:51.634908 containerd[1634]: time="2025-10-29T20:41:51.634855533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 20:41:52.091098 containerd[1634]: time="2025-10-29T20:41:52.091027348Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 20:41:52.103394 containerd[1634]: time="2025-10-29T20:41:52.103290027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 20:41:52.103585 containerd[1634]: time="2025-10-29T20:41:52.103398222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 20:41:52.103635 kubelet[2837]: E1029 20:41:52.103604 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 20:41:52.103689 kubelet[2837]: E1029 20:41:52.103656 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 20:41:52.104027 containerd[1634]: time="2025-10-29T20:41:52.103996697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 20:41:52.104165 kubelet[2837]: E1029 20:41:52.103963 2837 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:0fa70a9d9e23480ead0c039f42fa6ebe,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nkpd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6747bb86c8-ccg9x_calico-system(e137508a-f04b-4511-9c9c-1f63f8deba2a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 20:41:52.485639 containerd[1634]: time="2025-10-29T20:41:52.485570186Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Oct 29 20:41:52.504049 containerd[1634]: time="2025-10-29T20:41:52.503986543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 20:41:52.504189 containerd[1634]: time="2025-10-29T20:41:52.504033762Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 20:41:52.504387 kubelet[2837]: E1029 20:41:52.504312 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 20:41:52.504446 kubelet[2837]: E1029 20:41:52.504402 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 20:41:52.504835 containerd[1634]: time="2025-10-29T20:41:52.504801979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 20:41:52.504927 kubelet[2837]: E1029 20:41:52.504749 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-klbvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hphmg_calico-system(9b38e512-971c-4a46-9cd5-db7a73bc8089): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 20:41:52.506369 kubelet[2837]: E1029 20:41:52.506318 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hphmg" podUID="9b38e512-971c-4a46-9cd5-db7a73bc8089" Oct 29 20:41:52.844730 containerd[1634]: time="2025-10-29T20:41:52.844557942Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 20:41:52.846032 containerd[1634]: time="2025-10-29T20:41:52.845941987Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 20:41:52.846239 containerd[1634]: time="2025-10-29T20:41:52.845944692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 20:41:52.846285 kubelet[2837]: E1029 20:41:52.846221 2837 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 20:41:52.846757 kubelet[2837]: E1029 20:41:52.846283 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 20:41:52.846757 kubelet[2837]: E1029 20:41:52.846483 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkpd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6747bb86c8-ccg9x_calico-system(e137508a-f04b-4511-9c9c-1f63f8deba2a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 20:41:52.847753 kubelet[2837]: E1029 20:41:52.847667 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6747bb86c8-ccg9x" podUID="e137508a-f04b-4511-9c9c-1f63f8deba2a" Oct 29 20:41:54.602937 
systemd[1]: Started sshd@12-10.0.0.139:22-10.0.0.1:34798.service - OpenSSH per-connection server daemon (10.0.0.1:34798). Oct 29 20:41:54.666056 sshd[5058]: Accepted publickey for core from 10.0.0.1 port 34798 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM Oct 29 20:41:54.669050 sshd-session[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 20:41:54.673857 systemd-logind[1604]: New session 14 of user core. Oct 29 20:41:54.680621 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 29 20:41:54.766481 sshd[5062]: Connection closed by 10.0.0.1 port 34798 Oct 29 20:41:54.766808 sshd-session[5058]: pam_unix(sshd:session): session closed for user core Oct 29 20:41:54.776837 systemd[1]: sshd@12-10.0.0.139:22-10.0.0.1:34798.service: Deactivated successfully. Oct 29 20:41:54.778922 systemd[1]: session-14.scope: Deactivated successfully. Oct 29 20:41:54.779860 systemd-logind[1604]: Session 14 logged out. Waiting for processes to exit. Oct 29 20:41:54.782739 systemd[1]: Started sshd@13-10.0.0.139:22-10.0.0.1:34800.service - OpenSSH per-connection server daemon (10.0.0.1:34800). Oct 29 20:41:54.783695 systemd-logind[1604]: Removed session 14. Oct 29 20:41:54.858552 sshd[5077]: Accepted publickey for core from 10.0.0.1 port 34800 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM Oct 29 20:41:54.861023 sshd-session[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 20:41:54.865577 systemd-logind[1604]: New session 15 of user core. Oct 29 20:41:54.875665 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 29 20:41:55.005668 sshd[5081]: Connection closed by 10.0.0.1 port 34800 Oct 29 20:41:55.006199 sshd-session[5077]: pam_unix(sshd:session): session closed for user core Oct 29 20:41:55.015693 systemd[1]: sshd@13-10.0.0.139:22-10.0.0.1:34800.service: Deactivated successfully. 
Oct 29 20:41:55.020012 systemd[1]: session-15.scope: Deactivated successfully. Oct 29 20:41:55.021344 systemd-logind[1604]: Session 15 logged out. Waiting for processes to exit. Oct 29 20:41:55.027878 systemd[1]: Started sshd@14-10.0.0.139:22-10.0.0.1:34808.service - OpenSSH per-connection server daemon (10.0.0.1:34808). Oct 29 20:41:55.029919 systemd-logind[1604]: Removed session 15. Oct 29 20:41:55.118001 sshd[5095]: Accepted publickey for core from 10.0.0.1 port 34808 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM Oct 29 20:41:55.120372 sshd-session[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 20:41:55.126671 systemd-logind[1604]: New session 16 of user core. Oct 29 20:41:55.135645 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 29 20:41:55.223148 sshd[5099]: Connection closed by 10.0.0.1 port 34808 Oct 29 20:41:55.223675 sshd-session[5095]: pam_unix(sshd:session): session closed for user core Oct 29 20:41:55.229090 systemd[1]: sshd@14-10.0.0.139:22-10.0.0.1:34808.service: Deactivated successfully. Oct 29 20:41:55.231496 systemd[1]: session-16.scope: Deactivated successfully. Oct 29 20:41:55.232403 systemd-logind[1604]: Session 16 logged out. Waiting for processes to exit. Oct 29 20:41:55.233825 systemd-logind[1604]: Removed session 16. 
Oct 29 20:42:00.135111 kubelet[2837]: E1029 20:42:00.135045 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bqbs9" podUID="d50f9e75-9f55-47f4-8506-1ec4da0d8068" Oct 29 20:42:00.135740 kubelet[2837]: E1029 20:42:00.135164 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b8476c65-tgszf" podUID="4e297194-f7fa-4d00-92d1-15f2c61b7789" Oct 29 20:42:00.238222 systemd[1]: Started sshd@15-10.0.0.139:22-10.0.0.1:58178.service - OpenSSH per-connection server daemon (10.0.0.1:58178). Oct 29 20:42:00.300213 sshd[5120]: Accepted publickey for core from 10.0.0.1 port 58178 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM Oct 29 20:42:00.302828 sshd-session[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 20:42:00.308444 systemd-logind[1604]: New session 17 of user core. Oct 29 20:42:00.321634 systemd[1]: Started session-17.scope - Session 17 of User core. 
Oct 29 20:42:00.399704 sshd[5124]: Connection closed by 10.0.0.1 port 58178 Oct 29 20:42:00.399934 sshd-session[5120]: pam_unix(sshd:session): session closed for user core Oct 29 20:42:00.404934 systemd[1]: sshd@15-10.0.0.139:22-10.0.0.1:58178.service: Deactivated successfully. Oct 29 20:42:00.407268 systemd[1]: session-17.scope: Deactivated successfully. Oct 29 20:42:00.408178 systemd-logind[1604]: Session 17 logged out. Waiting for processes to exit. Oct 29 20:42:00.409236 systemd-logind[1604]: Removed session 17. Oct 29 20:42:01.142645 kubelet[2837]: E1029 20:42:01.142067 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:42:01.143348 kubelet[2837]: E1029 20:42:01.142682 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-854f888fbc-vrqqc" podUID="cbb5467a-783c-44e1-8c0d-4dca5f58eee2" Oct 29 20:42:01.143348 kubelet[2837]: E1029 20:42:01.143297 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-8b8476c65-tgp87" podUID="deaef94c-fe21-40af-a068-2605dc363c66" Oct 29 20:42:03.386842 containerd[1634]: time="2025-10-29T20:42:03.386798636Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98380300b27e99bacf36a2b5bce92fce2760bf1ef2b817246811c6a86823cd7c\" id:\"e64481921f39b987a2c98190003761db3fb2c0ee9bbe30c06e15ff8e30159e7a\" pid:5148 exited_at:{seconds:1761770523 nanos:386495180}" Oct 29 20:42:03.388762 kubelet[2837]: E1029 20:42:03.388725 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:42:05.421431 systemd[1]: Started sshd@16-10.0.0.139:22-10.0.0.1:58194.service - OpenSSH per-connection server daemon (10.0.0.1:58194). Oct 29 20:42:05.509600 sshd[5167]: Accepted publickey for core from 10.0.0.1 port 58194 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM Oct 29 20:42:05.512479 sshd-session[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 20:42:05.517749 systemd-logind[1604]: New session 18 of user core. Oct 29 20:42:05.532664 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 29 20:42:05.624493 sshd[5171]: Connection closed by 10.0.0.1 port 58194 Oct 29 20:42:05.624879 sshd-session[5167]: pam_unix(sshd:session): session closed for user core Oct 29 20:42:05.631058 systemd[1]: sshd@16-10.0.0.139:22-10.0.0.1:58194.service: Deactivated successfully. Oct 29 20:42:05.633309 systemd[1]: session-18.scope: Deactivated successfully. Oct 29 20:42:05.634221 systemd-logind[1604]: Session 18 logged out. Waiting for processes to exit. Oct 29 20:42:05.635795 systemd-logind[1604]: Removed session 18. 
Oct 29 20:42:06.136958 kubelet[2837]: E1029 20:42:06.136870 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hphmg" podUID="9b38e512-971c-4a46-9cd5-db7a73bc8089" Oct 29 20:42:06.137794 kubelet[2837]: E1029 20:42:06.137033 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6747bb86c8-ccg9x" podUID="e137508a-f04b-4511-9c9c-1f63f8deba2a" Oct 29 20:42:10.134138 kubelet[2837]: E1029 20:42:10.134094 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:42:10.640491 systemd[1]: Started sshd@17-10.0.0.139:22-10.0.0.1:36438.service - OpenSSH per-connection server daemon (10.0.0.1:36438). Oct 29 20:42:10.707964 sshd[5185]: Accepted publickey for core from 10.0.0.1 port 36438 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM Oct 29 20:42:10.710698 sshd-session[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 20:42:10.715974 systemd-logind[1604]: New session 19 of user core. Oct 29 20:42:10.729651 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 29 20:42:10.817053 sshd[5189]: Connection closed by 10.0.0.1 port 36438 Oct 29 20:42:10.817399 sshd-session[5185]: pam_unix(sshd:session): session closed for user core Oct 29 20:42:10.822492 systemd[1]: sshd@17-10.0.0.139:22-10.0.0.1:36438.service: Deactivated successfully. Oct 29 20:42:10.824526 systemd[1]: session-19.scope: Deactivated successfully. Oct 29 20:42:10.825291 systemd-logind[1604]: Session 19 logged out. Waiting for processes to exit. Oct 29 20:42:10.826444 systemd-logind[1604]: Removed session 19. 
Oct 29 20:42:13.136790 containerd[1634]: time="2025-10-29T20:42:13.136609781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 20:42:13.519233 containerd[1634]: time="2025-10-29T20:42:13.519172104Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 20:42:13.520500 containerd[1634]: time="2025-10-29T20:42:13.520437580Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 20:42:13.520566 containerd[1634]: time="2025-10-29T20:42:13.520509988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 20:42:13.520806 kubelet[2837]: E1029 20:42:13.520739 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 20:42:13.521167 kubelet[2837]: E1029 20:42:13.520821 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 20:42:13.521167 kubelet[2837]: E1029 20:42:13.521013 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bgmxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-8b8476c65-tgszf_calico-apiserver(4e297194-f7fa-4d00-92d1-15f2c61b7789): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 20:42:13.522260 kubelet[2837]: E1029 20:42:13.522224 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b8476c65-tgszf" podUID="4e297194-f7fa-4d00-92d1-15f2c61b7789" Oct 29 20:42:14.135552 containerd[1634]: time="2025-10-29T20:42:14.135485782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 20:42:14.498104 containerd[1634]: time="2025-10-29T20:42:14.498049955Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 20:42:14.508261 containerd[1634]: time="2025-10-29T20:42:14.508216268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 20:42:14.508261 containerd[1634]: time="2025-10-29T20:42:14.508253569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 20:42:14.508512 kubelet[2837]: E1029 20:42:14.508446 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 20:42:14.508570 kubelet[2837]: E1029 20:42:14.508517 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 20:42:14.508799 containerd[1634]: time="2025-10-29T20:42:14.508763418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 20:42:14.508986 kubelet[2837]: E1029 20:42:14.508771 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ps94f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8b8476c65-tgp87_calico-apiserver(deaef94c-fe21-40af-a068-2605dc363c66): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 20:42:14.510046 kubelet[2837]: E1029 20:42:14.509998 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b8476c65-tgp87" podUID="deaef94c-fe21-40af-a068-2605dc363c66" Oct 29 20:42:14.889614 containerd[1634]: time="2025-10-29T20:42:14.889437539Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 20:42:14.890864 containerd[1634]: time="2025-10-29T20:42:14.890815399Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 20:42:14.890934 containerd[1634]: time="2025-10-29T20:42:14.890862398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 20:42:14.891166 kubelet[2837]: E1029 20:42:14.891098 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 
20:42:14.891522 kubelet[2837]: E1029 20:42:14.891166 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 20:42:14.891522 kubelet[2837]: E1029 20:42:14.891343 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-789st,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-854f888fbc-vrqqc_calico-system(cbb5467a-783c-44e1-8c0d-4dca5f58eee2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 20:42:14.892648 kubelet[2837]: E1029 20:42:14.892607 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-854f888fbc-vrqqc" podUID="cbb5467a-783c-44e1-8c0d-4dca5f58eee2" Oct 29 20:42:15.136472 containerd[1634]: time="2025-10-29T20:42:15.136084358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 20:42:15.463739 containerd[1634]: time="2025-10-29T20:42:15.463678679Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 20:42:15.464848 containerd[1634]: time="2025-10-29T20:42:15.464772429Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 20:42:15.464848 containerd[1634]: time="2025-10-29T20:42:15.464805182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 20:42:15.465054 kubelet[2837]: E1029 20:42:15.464947 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 20:42:15.465054 kubelet[2837]: E1029 20:42:15.464999 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 20:42:15.465221 kubelet[2837]: E1029 20:42:15.465147 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7w4qz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bqbs9_calico-system(d50f9e75-9f55-47f4-8506-1ec4da0d8068): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 20:42:15.466417 kubelet[2837]: E1029 20:42:15.466357 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bqbs9" podUID="d50f9e75-9f55-47f4-8506-1ec4da0d8068" Oct 29 20:42:15.830260 systemd[1]: Started sshd@18-10.0.0.139:22-10.0.0.1:36444.service - OpenSSH per-connection server daemon (10.0.0.1:36444). 
Oct 29 20:42:15.884738 sshd[5208]: Accepted publickey for core from 10.0.0.1 port 36444 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM Oct 29 20:42:15.887041 sshd-session[5208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 20:42:15.892159 systemd-logind[1604]: New session 20 of user core. Oct 29 20:42:15.903631 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 29 20:42:15.976351 sshd[5212]: Connection closed by 10.0.0.1 port 36444 Oct 29 20:42:15.976687 sshd-session[5208]: pam_unix(sshd:session): session closed for user core Oct 29 20:42:15.988465 systemd[1]: sshd@18-10.0.0.139:22-10.0.0.1:36444.service: Deactivated successfully. Oct 29 20:42:15.990722 systemd[1]: session-20.scope: Deactivated successfully. Oct 29 20:42:15.991846 systemd-logind[1604]: Session 20 logged out. Waiting for processes to exit. Oct 29 20:42:15.995344 systemd[1]: Started sshd@19-10.0.0.139:22-10.0.0.1:36446.service - OpenSSH per-connection server daemon (10.0.0.1:36446). Oct 29 20:42:15.996794 systemd-logind[1604]: Removed session 20. Oct 29 20:42:16.057823 sshd[5228]: Accepted publickey for core from 10.0.0.1 port 36446 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM Oct 29 20:42:16.060072 sshd-session[5228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 20:42:16.064691 systemd-logind[1604]: New session 21 of user core. Oct 29 20:42:16.075661 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 29 20:42:16.327694 sshd[5232]: Connection closed by 10.0.0.1 port 36446 Oct 29 20:42:16.328164 sshd-session[5228]: pam_unix(sshd:session): session closed for user core Oct 29 20:42:16.339099 systemd[1]: sshd@19-10.0.0.139:22-10.0.0.1:36446.service: Deactivated successfully. Oct 29 20:42:16.341056 systemd[1]: session-21.scope: Deactivated successfully. Oct 29 20:42:16.341781 systemd-logind[1604]: Session 21 logged out. Waiting for processes to exit. 
Oct 29 20:42:16.344938 systemd[1]: Started sshd@20-10.0.0.139:22-10.0.0.1:36452.service - OpenSSH per-connection server daemon (10.0.0.1:36452). Oct 29 20:42:16.345636 systemd-logind[1604]: Removed session 21. Oct 29 20:42:16.410717 sshd[5244]: Accepted publickey for core from 10.0.0.1 port 36452 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM Oct 29 20:42:16.413164 sshd-session[5244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 20:42:16.417645 systemd-logind[1604]: New session 22 of user core. Oct 29 20:42:16.431584 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 29 20:42:16.913417 sshd[5248]: Connection closed by 10.0.0.1 port 36452 Oct 29 20:42:16.910289 sshd-session[5244]: pam_unix(sshd:session): session closed for user core Oct 29 20:42:16.923425 systemd[1]: sshd@20-10.0.0.139:22-10.0.0.1:36452.service: Deactivated successfully. Oct 29 20:42:16.928280 systemd[1]: session-22.scope: Deactivated successfully. Oct 29 20:42:16.931688 systemd-logind[1604]: Session 22 logged out. Waiting for processes to exit. Oct 29 20:42:16.935538 systemd[1]: Started sshd@21-10.0.0.139:22-10.0.0.1:36458.service - OpenSSH per-connection server daemon (10.0.0.1:36458). Oct 29 20:42:16.937611 systemd-logind[1604]: Removed session 22. Oct 29 20:42:17.025669 sshd[5268]: Accepted publickey for core from 10.0.0.1 port 36458 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM Oct 29 20:42:17.028067 sshd-session[5268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 20:42:17.033366 systemd-logind[1604]: New session 23 of user core. Oct 29 20:42:17.044585 systemd[1]: Started session-23.scope - Session 23 of User core. 
Oct 29 20:42:17.239767 sshd[5272]: Connection closed by 10.0.0.1 port 36458 Oct 29 20:42:17.240609 sshd-session[5268]: pam_unix(sshd:session): session closed for user core Oct 29 20:42:17.251631 systemd[1]: sshd@21-10.0.0.139:22-10.0.0.1:36458.service: Deactivated successfully. Oct 29 20:42:17.254247 systemd[1]: session-23.scope: Deactivated successfully. Oct 29 20:42:17.255367 systemd-logind[1604]: Session 23 logged out. Waiting for processes to exit. Oct 29 20:42:17.258825 systemd-logind[1604]: Removed session 23. Oct 29 20:42:17.260044 systemd[1]: Started sshd@22-10.0.0.139:22-10.0.0.1:36462.service - OpenSSH per-connection server daemon (10.0.0.1:36462). Oct 29 20:42:17.319289 sshd[5284]: Accepted publickey for core from 10.0.0.1 port 36462 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM Oct 29 20:42:17.321499 sshd-session[5284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 20:42:17.326340 systemd-logind[1604]: New session 24 of user core. Oct 29 20:42:17.335619 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 29 20:42:17.416870 sshd[5288]: Connection closed by 10.0.0.1 port 36462 Oct 29 20:42:17.417183 sshd-session[5284]: pam_unix(sshd:session): session closed for user core Oct 29 20:42:17.422325 systemd[1]: sshd@22-10.0.0.139:22-10.0.0.1:36462.service: Deactivated successfully. Oct 29 20:42:17.424375 systemd[1]: session-24.scope: Deactivated successfully. Oct 29 20:42:17.425123 systemd-logind[1604]: Session 24 logged out. Waiting for processes to exit. Oct 29 20:42:17.426378 systemd-logind[1604]: Removed session 24. 
Oct 29 20:42:19.134910 kubelet[2837]: E1029 20:42:19.134810 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 20:42:19.136651 containerd[1634]: time="2025-10-29T20:42:19.136614914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 20:42:19.515598 containerd[1634]: time="2025-10-29T20:42:19.515527906Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 20:42:19.516944 containerd[1634]: time="2025-10-29T20:42:19.516908662Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 20:42:19.517012 containerd[1634]: time="2025-10-29T20:42:19.516947486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 20:42:19.517206 kubelet[2837]: E1029 20:42:19.517154 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 20:42:19.517268 kubelet[2837]: E1029 20:42:19.517216 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 20:42:19.517628 containerd[1634]: time="2025-10-29T20:42:19.517595610Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 20:42:19.517677 kubelet[2837]: E1029 20:42:19.517604 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:0fa70a9d9e23480ead0c039f42fa6ebe,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nkpd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6747bb86c8-ccg9x_calico-system(e137508a-f04b-4511-9c9c-1f63f8deba2a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
logger="UnhandledError" Oct 29 20:42:19.890522 containerd[1634]: time="2025-10-29T20:42:19.890374060Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 20:42:19.895404 containerd[1634]: time="2025-10-29T20:42:19.895349726Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 20:42:19.895479 containerd[1634]: time="2025-10-29T20:42:19.895428957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 20:42:19.895641 kubelet[2837]: E1029 20:42:19.895599 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 20:42:19.895704 kubelet[2837]: E1029 20:42:19.895665 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 20:42:19.895980 kubelet[2837]: E1029 20:42:19.895935 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-klbvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hphmg_calico-system(9b38e512-971c-4a46-9cd5-db7a73bc8089): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 20:42:19.896138 containerd[1634]: time="2025-10-29T20:42:19.895965037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 20:42:20.303609 containerd[1634]: time="2025-10-29T20:42:20.303525232Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 20:42:20.343673 containerd[1634]: time="2025-10-29T20:42:20.343600479Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 20:42:20.343823 containerd[1634]: time="2025-10-29T20:42:20.343626739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 20:42:20.343887 kubelet[2837]: E1029 20:42:20.343843 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 20:42:20.344272 kubelet[2837]: E1029 20:42:20.343902 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 20:42:20.344272 kubelet[2837]: E1029 20:42:20.344142 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkpd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6747bb86c8-ccg9x_calico-system(e137508a-f04b-4511-9c9c-1f63f8deba2a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 20:42:20.344381 containerd[1634]: time="2025-10-29T20:42:20.344314307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 20:42:20.345652 kubelet[2837]: E1029 20:42:20.345592 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6747bb86c8-ccg9x" podUID="e137508a-f04b-4511-9c9c-1f63f8deba2a" Oct 29 20:42:20.759908 containerd[1634]: time="2025-10-29T20:42:20.759840844Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 20:42:20.761109 containerd[1634]: time="2025-10-29T20:42:20.761061627Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 20:42:20.761208 containerd[1634]: time="2025-10-29T20:42:20.761153682Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Oct 29 20:42:20.761438 kubelet[2837]: E1029 20:42:20.761387 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 29 20:42:20.761521 kubelet[2837]: E1029 20:42:20.761479 2837 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 29 20:42:20.761709 kubelet[2837]: E1029 20:42:20.761651 2837 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-klbvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hphmg_calico-system(9b38e512-971c-4a46-9cd5-db7a73bc8089): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Oct 29 20:42:20.762921 kubelet[2837]: E1029 20:42:20.762859 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hphmg" podUID="9b38e512-971c-4a46-9cd5-db7a73bc8089"
Oct 29 20:42:22.429333 systemd[1]: Started sshd@23-10.0.0.139:22-10.0.0.1:51012.service - OpenSSH per-connection server daemon (10.0.0.1:51012).
Oct 29 20:42:22.486132 sshd[5303]: Accepted publickey for core from 10.0.0.1 port 51012 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM
Oct 29 20:42:22.489133 sshd-session[5303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 20:42:22.493683 systemd-logind[1604]: New session 25 of user core.
Oct 29 20:42:22.501612 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 29 20:42:22.572126 sshd[5307]: Connection closed by 10.0.0.1 port 51012
Oct 29 20:42:22.572530 sshd-session[5303]: pam_unix(sshd:session): session closed for user core
Oct 29 20:42:22.576817 systemd[1]: sshd@23-10.0.0.139:22-10.0.0.1:51012.service: Deactivated successfully.
Oct 29 20:42:22.578765 systemd[1]: session-25.scope: Deactivated successfully.
Oct 29 20:42:22.579589 systemd-logind[1604]: Session 25 logged out. Waiting for processes to exit.
Oct 29 20:42:22.580782 systemd-logind[1604]: Removed session 25.
Oct 29 20:42:26.135068 kubelet[2837]: E1029 20:42:26.134996 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b8476c65-tgp87" podUID="deaef94c-fe21-40af-a068-2605dc363c66"
Oct 29 20:42:27.588601 systemd[1]: Started sshd@24-10.0.0.139:22-10.0.0.1:51014.service - OpenSSH per-connection server daemon (10.0.0.1:51014).
Oct 29 20:42:27.645675 sshd[5322]: Accepted publickey for core from 10.0.0.1 port 51014 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM
Oct 29 20:42:27.648127 sshd-session[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 20:42:27.652655 systemd-logind[1604]: New session 26 of user core.
Oct 29 20:42:27.663588 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 29 20:42:27.737696 sshd[5326]: Connection closed by 10.0.0.1 port 51014
Oct 29 20:42:27.738044 sshd-session[5322]: pam_unix(sshd:session): session closed for user core
Oct 29 20:42:27.743220 systemd[1]: sshd@24-10.0.0.139:22-10.0.0.1:51014.service: Deactivated successfully.
Oct 29 20:42:27.745614 systemd[1]: session-26.scope: Deactivated successfully.
Oct 29 20:42:27.746476 systemd-logind[1604]: Session 26 logged out. Waiting for processes to exit.
Oct 29 20:42:27.749201 systemd-logind[1604]: Removed session 26.
Oct 29 20:42:28.134840 kubelet[2837]: E1029 20:42:28.134755 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bqbs9" podUID="d50f9e75-9f55-47f4-8506-1ec4da0d8068"
Oct 29 20:42:28.134840 kubelet[2837]: E1029 20:42:28.134811 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-854f888fbc-vrqqc" podUID="cbb5467a-783c-44e1-8c0d-4dca5f58eee2"
Oct 29 20:42:29.134620 kubelet[2837]: E1029 20:42:29.134565 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 20:42:29.135593 kubelet[2837]: E1029 20:42:29.135559 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b8476c65-tgszf" podUID="4e297194-f7fa-4d00-92d1-15f2c61b7789"
Oct 29 20:42:32.134268 kubelet[2837]: E1029 20:42:32.134205 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 20:42:32.135231 kubelet[2837]: E1029 20:42:32.135160 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6747bb86c8-ccg9x" podUID="e137508a-f04b-4511-9c9c-1f63f8deba2a"
Oct 29 20:42:32.135566 kubelet[2837]: E1029 20:42:32.135527 2837 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hphmg" podUID="9b38e512-971c-4a46-9cd5-db7a73bc8089"
Oct 29 20:42:32.762415 systemd[1]: Started sshd@25-10.0.0.139:22-10.0.0.1:55048.service - OpenSSH per-connection server daemon (10.0.0.1:55048).
Oct 29 20:42:32.822678 sshd[5339]: Accepted publickey for core from 10.0.0.1 port 55048 ssh2: RSA SHA256:dkIpYYqJN40baCcByDCDALN+VP6SH9Z/EqzJLuCXJOM
Oct 29 20:42:32.824830 sshd-session[5339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 20:42:32.829279 systemd-logind[1604]: New session 27 of user core.
Oct 29 20:42:32.834651 systemd[1]: Started session-27.scope - Session 27 of User core.
Oct 29 20:42:32.904735 sshd[5343]: Connection closed by 10.0.0.1 port 55048
Oct 29 20:42:32.905036 sshd-session[5339]: pam_unix(sshd:session): session closed for user core
Oct 29 20:42:32.909278 systemd[1]: sshd@25-10.0.0.139:22-10.0.0.1:55048.service: Deactivated successfully.
Oct 29 20:42:32.911522 systemd[1]: session-27.scope: Deactivated successfully.
Oct 29 20:42:32.912356 systemd-logind[1604]: Session 27 logged out. Waiting for processes to exit.
Oct 29 20:42:32.913676 systemd-logind[1604]: Removed session 27.
Oct 29 20:42:33.390232 containerd[1634]: time="2025-10-29T20:42:33.390169833Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98380300b27e99bacf36a2b5bce92fce2760bf1ef2b817246811c6a86823cd7c\" id:\"6461e0a759f82021555d3aab4c93db23471ca0273f1448ba24bea36d4b2b84ff\" pid:5367 exited_at:{seconds:1761770553 nanos:389842690}"