Nov 6 00:34:17.310880 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 22:11:41 -00 2025
Nov 6 00:34:17.310912 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5a467f58ff1d38830572ea713da04924778847a98299b0cfa25690713b346f38
Nov 6 00:34:17.310929 kernel: BIOS-provided physical RAM map:
Nov 6 00:34:17.310938 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 6 00:34:17.310947 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 6 00:34:17.310957 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 6 00:34:17.310968 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 6 00:34:17.310977 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 6 00:34:17.310990 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 6 00:34:17.311004 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 6 00:34:17.311013 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 6 00:34:17.311023 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 6 00:34:17.311032 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 6 00:34:17.311042 kernel: NX (Execute Disable) protection: active
Nov 6 00:34:17.311056 kernel: APIC: Static calls initialized
Nov 6 00:34:17.311067 kernel: SMBIOS 2.8 present.
Nov 6 00:34:17.311081 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 6 00:34:17.311091 kernel: DMI: Memory slots populated: 1/1
Nov 6 00:34:17.311101 kernel: Hypervisor detected: KVM
Nov 6 00:34:17.311110 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 6 00:34:17.311119 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 6 00:34:17.311129 kernel: kvm-clock: using sched offset of 4178882728 cycles
Nov 6 00:34:17.311139 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 6 00:34:17.311149 kernel: tsc: Detected 2794.748 MHz processor
Nov 6 00:34:17.311164 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 6 00:34:17.311175 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 6 00:34:17.311185 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 6 00:34:17.311196 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 6 00:34:17.311207 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 6 00:34:17.311217 kernel: Using GB pages for direct mapping
Nov 6 00:34:17.311228 kernel: ACPI: Early table checksum verification disabled
Nov 6 00:34:17.311252 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 6 00:34:17.311263 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:34:17.311273 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:34:17.311284 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:34:17.311294 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 6 00:34:17.311305 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:34:17.311315 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:34:17.311330 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:34:17.311340 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:34:17.311356 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Nov 6 00:34:17.311367 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Nov 6 00:34:17.311378 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 6 00:34:17.311392 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Nov 6 00:34:17.311403 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Nov 6 00:34:17.311414 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Nov 6 00:34:17.311424 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Nov 6 00:34:17.311435 kernel: No NUMA configuration found
Nov 6 00:34:17.311445 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 6 00:34:17.311459 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Nov 6 00:34:17.311470 kernel: Zone ranges:
Nov 6 00:34:17.311481 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 6 00:34:17.311491 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 6 00:34:17.311502 kernel: Normal empty
Nov 6 00:34:17.311512 kernel: Device empty
Nov 6 00:34:17.311523 kernel: Movable zone start for each node
Nov 6 00:34:17.311533 kernel: Early memory node ranges
Nov 6 00:34:17.311548 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 6 00:34:17.311559 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 6 00:34:17.311570 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 6 00:34:17.311581 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 6 00:34:17.311592 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 6 00:34:17.311603 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 6 00:34:17.311619 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 6 00:34:17.311630 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 6 00:34:17.311644 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 6 00:34:17.311655 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 6 00:34:17.311669 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 6 00:34:17.311680 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 6 00:34:17.311691 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 6 00:34:17.311702 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 6 00:34:17.311712 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 6 00:34:17.311725 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 6 00:34:17.311733 kernel: TSC deadline timer available
Nov 6 00:34:17.311741 kernel: CPU topo: Max. logical packages: 1
Nov 6 00:34:17.311749 kernel: CPU topo: Max. logical dies: 1
Nov 6 00:34:17.311757 kernel: CPU topo: Max. dies per package: 1
Nov 6 00:34:17.311766 kernel: CPU topo: Max. threads per core: 1
Nov 6 00:34:17.311776 kernel: CPU topo: Num. cores per package: 4
Nov 6 00:34:17.311788 kernel: CPU topo: Num. threads per package: 4
Nov 6 00:34:17.311798 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 6 00:34:17.311808 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 6 00:34:17.311818 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 6 00:34:17.311827 kernel: kvm-guest: setup PV sched yield
Nov 6 00:34:17.311837 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 6 00:34:17.311868 kernel: Booting paravirtualized kernel on KVM
Nov 6 00:34:17.311879 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 6 00:34:17.311893 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 6 00:34:17.311903 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 6 00:34:17.311913 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 6 00:34:17.311923 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 6 00:34:17.311933 kernel: kvm-guest: PV spinlocks enabled
Nov 6 00:34:17.311943 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 6 00:34:17.311955 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5a467f58ff1d38830572ea713da04924778847a98299b0cfa25690713b346f38
Nov 6 00:34:17.311967 kernel: random: crng init done
Nov 6 00:34:17.311975 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 6 00:34:17.311983 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 6 00:34:17.311991 kernel: Fallback order for Node 0: 0
Nov 6 00:34:17.311999 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Nov 6 00:34:17.312007 kernel: Policy zone: DMA32
Nov 6 00:34:17.312015 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 6 00:34:17.312026 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 6 00:34:17.312036 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 6 00:34:17.312046 kernel: ftrace: allocated 157 pages with 5 groups
Nov 6 00:34:17.312057 kernel: Dynamic Preempt: voluntary
Nov 6 00:34:17.312068 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 6 00:34:17.312080 kernel: rcu: RCU event tracing is enabled.
Nov 6 00:34:17.312091 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 6 00:34:17.312106 kernel: Trampoline variant of Tasks RCU enabled.
Nov 6 00:34:17.312121 kernel: Rude variant of Tasks RCU enabled.
Nov 6 00:34:17.312132 kernel: Tracing variant of Tasks RCU enabled.
Nov 6 00:34:17.312143 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 6 00:34:17.312154 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 6 00:34:17.312166 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 6 00:34:17.312177 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 6 00:34:17.312192 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 6 00:34:17.312203 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 6 00:34:17.312214 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 6 00:34:17.312234 kernel: Console: colour VGA+ 80x25
Nov 6 00:34:17.312257 kernel: printk: legacy console [ttyS0] enabled
Nov 6 00:34:17.312268 kernel: ACPI: Core revision 20240827
Nov 6 00:34:17.312280 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 6 00:34:17.312291 kernel: APIC: Switch to symmetric I/O mode setup
Nov 6 00:34:17.312302 kernel: x2apic enabled
Nov 6 00:34:17.312314 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 6 00:34:17.312333 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 6 00:34:17.312345 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 6 00:34:17.312356 kernel: kvm-guest: setup PV IPIs
Nov 6 00:34:17.312371 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 6 00:34:17.312380 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 6 00:34:17.312388 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 6 00:34:17.312397 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 6 00:34:17.312405 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 6 00:34:17.312414 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 6 00:34:17.312422 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 6 00:34:17.312433 kernel: Spectre V2 : Mitigation: Retpolines
Nov 6 00:34:17.312442 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 6 00:34:17.312450 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 6 00:34:17.312458 kernel: active return thunk: retbleed_return_thunk
Nov 6 00:34:17.312467 kernel: RETBleed: Mitigation: untrained return thunk
Nov 6 00:34:17.312475 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 6 00:34:17.312484 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 6 00:34:17.312495 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 6 00:34:17.312504 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 6 00:34:17.312512 kernel: active return thunk: srso_return_thunk
Nov 6 00:34:17.312521 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 6 00:34:17.312530 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 6 00:34:17.312541 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 6 00:34:17.312553 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 6 00:34:17.312567 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 6 00:34:17.312577 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 6 00:34:17.312585 kernel: Freeing SMP alternatives memory: 32K
Nov 6 00:34:17.312594 kernel: pid_max: default: 32768 minimum: 301
Nov 6 00:34:17.312602 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 6 00:34:17.312610 kernel: landlock: Up and running.
Nov 6 00:34:17.312618 kernel: SELinux: Initializing.
Nov 6 00:34:17.312632 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 6 00:34:17.312641 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 6 00:34:17.312650 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 6 00:34:17.312658 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 6 00:34:17.312668 kernel: ... version: 0
Nov 6 00:34:17.312679 kernel: ... bit width: 48
Nov 6 00:34:17.312689 kernel: ... generic registers: 6
Nov 6 00:34:17.312702 kernel: ... value mask: 0000ffffffffffff
Nov 6 00:34:17.312713 kernel: ... max period: 00007fffffffffff
Nov 6 00:34:17.312723 kernel: ... fixed-purpose events: 0
Nov 6 00:34:17.312733 kernel: ... event mask: 000000000000003f
Nov 6 00:34:17.312744 kernel: signal: max sigframe size: 1776
Nov 6 00:34:17.312754 kernel: rcu: Hierarchical SRCU implementation.
Nov 6 00:34:17.312765 kernel: rcu: Max phase no-delay instances is 400.
Nov 6 00:34:17.312776 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 6 00:34:17.312789 kernel: smp: Bringing up secondary CPUs ...
Nov 6 00:34:17.312799 kernel: smpboot: x86: Booting SMP configuration:
Nov 6 00:34:17.312810 kernel: .... node #0, CPUs: #1 #2 #3
Nov 6 00:34:17.312818 kernel: smp: Brought up 1 node, 4 CPUs
Nov 6 00:34:17.312827 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 6 00:34:17.312835 kernel: Memory: 2451440K/2571752K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15936K init, 2108K bss, 114376K reserved, 0K cma-reserved)
Nov 6 00:34:17.312844 kernel: devtmpfs: initialized
Nov 6 00:34:17.312890 kernel: x86/mm: Memory block size: 128MB
Nov 6 00:34:17.312898 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 6 00:34:17.312907 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 6 00:34:17.312915 kernel: pinctrl core: initialized pinctrl subsystem
Nov 6 00:34:17.312924 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 6 00:34:17.312932 kernel: audit: initializing netlink subsys (disabled)
Nov 6 00:34:17.312940 kernel: audit: type=2000 audit(1762389255.247:1): state=initialized audit_enabled=0 res=1
Nov 6 00:34:17.312951 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 6 00:34:17.312959 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 6 00:34:17.312968 kernel: cpuidle: using governor menu
Nov 6 00:34:17.312976 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 6 00:34:17.312985 kernel: dca service started, version 1.12.1
Nov 6 00:34:17.312993 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 6 00:34:17.313002 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 6 00:34:17.313012 kernel: PCI: Using configuration type 1 for base access
Nov 6 00:34:17.313020 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 6 00:34:17.313029 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 6 00:34:17.313037 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 6 00:34:17.313045 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 6 00:34:17.313053 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 6 00:34:17.313062 kernel: ACPI: Added _OSI(Module Device)
Nov 6 00:34:17.313072 kernel: ACPI: Added _OSI(Processor Device)
Nov 6 00:34:17.313080 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 6 00:34:17.313089 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 6 00:34:17.313097 kernel: ACPI: Interpreter enabled
Nov 6 00:34:17.313105 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 6 00:34:17.313114 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 6 00:34:17.313122 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 6 00:34:17.313132 kernel: PCI: Using E820 reservations for host bridge windows
Nov 6 00:34:17.313141 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 6 00:34:17.313149 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 6 00:34:17.313588 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 6 00:34:17.314233 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 6 00:34:17.314937 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 6 00:34:17.314958 kernel: PCI host bridge to bus 0000:00
Nov 6 00:34:17.315143 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 6 00:34:17.315317 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 6 00:34:17.315478 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 6 00:34:17.315650 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 6 00:34:17.315835 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 6 00:34:17.316111 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 6 00:34:17.316310 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 6 00:34:17.316511 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 6 00:34:17.316723 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 6 00:34:17.316941 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Nov 6 00:34:17.317153 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Nov 6 00:34:17.317401 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Nov 6 00:34:17.317648 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 6 00:34:17.317935 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 6 00:34:17.318121 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Nov 6 00:34:17.318311 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Nov 6 00:34:17.318493 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 6 00:34:17.318695 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 6 00:34:17.318911 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Nov 6 00:34:17.319093 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Nov 6 00:34:17.319292 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 6 00:34:17.319515 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 6 00:34:17.319754 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Nov 6 00:34:17.319973 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Nov 6 00:34:17.320151 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 6 00:34:17.320337 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Nov 6 00:34:17.320522 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 6 00:34:17.320716 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 6 00:34:17.320959 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 6 00:34:17.321162 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Nov 6 00:34:17.321404 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Nov 6 00:34:17.321641 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 6 00:34:17.321875 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 6 00:34:17.321888 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 6 00:34:17.321897 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 6 00:34:17.321906 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 6 00:34:17.321914 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 6 00:34:17.321923 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 6 00:34:17.321932 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 6 00:34:17.321944 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 6 00:34:17.321953 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 6 00:34:17.321961 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 6 00:34:17.321969 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 6 00:34:17.321978 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 6 00:34:17.321986 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 6 00:34:17.321994 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 6 00:34:17.322005 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 6 00:34:17.322013 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 6 00:34:17.322022 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 6 00:34:17.322031 kernel: iommu: Default domain type: Translated
Nov 6 00:34:17.322039 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 6 00:34:17.322047 kernel: PCI: Using ACPI for IRQ routing
Nov 6 00:34:17.322056 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 6 00:34:17.322064 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 6 00:34:17.322075 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 6 00:34:17.322288 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 6 00:34:17.322471 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 6 00:34:17.322670 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 6 00:34:17.322685 kernel: vgaarb: loaded
Nov 6 00:34:17.322694 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 6 00:34:17.322707 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 6 00:34:17.322715 kernel: clocksource: Switched to clocksource kvm-clock
Nov 6 00:34:17.322724 kernel: VFS: Disk quotas dquot_6.6.0
Nov 6 00:34:17.322732 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 6 00:34:17.322740 kernel: pnp: PnP ACPI init
Nov 6 00:34:17.322960 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 6 00:34:17.322973 kernel: pnp: PnP ACPI: found 6 devices
Nov 6 00:34:17.322986 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 6 00:34:17.322994 kernel: NET: Registered PF_INET protocol family
Nov 6 00:34:17.323003 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 6 00:34:17.323011 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 6 00:34:17.323020 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 6 00:34:17.323028 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 6 00:34:17.323037 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 6 00:34:17.323048 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 6 00:34:17.323057 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 6 00:34:17.323073 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 6 00:34:17.323087 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 6 00:34:17.323098 kernel: NET: Registered PF_XDP protocol family
Nov 6 00:34:17.323321 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 6 00:34:17.323529 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 6 00:34:17.323743 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 6 00:34:17.323986 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 6 00:34:17.324186 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 6 00:34:17.324377 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 6 00:34:17.324390 kernel: PCI: CLS 0 bytes, default 64
Nov 6 00:34:17.324399 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 6 00:34:17.324413 kernel: Initialise system trusted keyrings
Nov 6 00:34:17.324422 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 6 00:34:17.324431 kernel: Key type asymmetric registered
Nov 6 00:34:17.324439 kernel: Asymmetric key parser 'x509' registered
Nov 6 00:34:17.324448 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 6 00:34:17.324456 kernel: io scheduler mq-deadline registered
Nov 6 00:34:17.324465 kernel: io scheduler kyber registered
Nov 6 00:34:17.324476 kernel: io scheduler bfq registered
Nov 6 00:34:17.324484 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 6 00:34:17.324493 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 6 00:34:17.324502 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 6 00:34:17.324510 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 6 00:34:17.324519 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 6 00:34:17.324527 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 6 00:34:17.324536 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 6 00:34:17.324547 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 6 00:34:17.324555 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 6 00:34:17.324736 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 6 00:34:17.324749 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 6 00:34:17.324981 kernel: rtc_cmos 00:04: registered as rtc0
Nov 6 00:34:17.325156 kernel: rtc_cmos 00:04: setting system clock to 2025-11-06T00:34:15 UTC (1762389255)
Nov 6 00:34:17.325341 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 6 00:34:17.325353 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 6 00:34:17.325362 kernel: NET: Registered PF_INET6 protocol family
Nov 6 00:34:17.325370 kernel: Segment Routing with IPv6
Nov 6 00:34:17.325379 kernel: In-situ OAM (IOAM) with IPv6
Nov 6 00:34:17.325387 kernel: NET: Registered PF_PACKET protocol family
Nov 6 00:34:17.325396 kernel: Key type dns_resolver registered
Nov 6 00:34:17.325408 kernel: IPI shorthand broadcast: enabled
Nov 6 00:34:17.325417 kernel: sched_clock: Marking stable (1289003272, 209029858)->(1609632385, -111599255)
Nov 6 00:34:17.325425 kernel: registered taskstats version 1
Nov 6 00:34:17.325434 kernel: Loading compiled-in X.509 certificates
Nov 6 00:34:17.325442 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 92154d1aa04a8c1424f65981683e67110e07d121'
Nov 6 00:34:17.325451 kernel: Demotion targets for Node 0: null
Nov 6 00:34:17.325459 kernel: Key type .fscrypt registered
Nov 6 00:34:17.325470 kernel: Key type fscrypt-provisioning registered
Nov 6 00:34:17.325478 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 6 00:34:17.325487 kernel: ima: Allocated hash algorithm: sha1
Nov 6 00:34:17.325496 kernel: ima: No architecture policies found
Nov 6 00:34:17.325504 kernel: clk: Disabling unused clocks
Nov 6 00:34:17.325513 kernel: Freeing unused kernel image (initmem) memory: 15936K
Nov 6 00:34:17.325521 kernel: Write protecting the kernel read-only data: 40960k
Nov 6 00:34:17.325532 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 6 00:34:17.325541 kernel: Run /init as init process
Nov 6 00:34:17.325549 kernel: with arguments:
Nov 6 00:34:17.325558 kernel: /init
Nov 6 00:34:17.325566 kernel: with environment:
Nov 6 00:34:17.325574 kernel: HOME=/
Nov 6 00:34:17.325582 kernel: TERM=linux
Nov 6 00:34:17.325591 kernel: SCSI subsystem initialized
Nov 6 00:34:17.325601 kernel: libata version 3.00 loaded.
Nov 6 00:34:17.325803 kernel: ahci 0000:00:1f.2: version 3.0
Nov 6 00:34:17.325845 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 6 00:34:17.326058 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 6 00:34:17.326233 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 6 00:34:17.326456 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 6 00:34:17.326696 kernel: scsi host0: ahci
Nov 6 00:34:17.327016 kernel: scsi host1: ahci
Nov 6 00:34:17.327208 kernel: scsi host2: ahci
Nov 6 00:34:17.327421 kernel: scsi host3: ahci
Nov 6 00:34:17.327610 kernel: scsi host4: ahci
Nov 6 00:34:17.327845 kernel: scsi host5: ahci
Nov 6 00:34:17.327874 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Nov 6 00:34:17.327895 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Nov 6 00:34:17.327904 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Nov 6 00:34:17.327913 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Nov 6 00:34:17.327922 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Nov 6 00:34:17.327935 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Nov 6 00:34:17.327944 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 6 00:34:17.327953 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 6 00:34:17.327962 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 6 00:34:17.327971 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 6 00:34:17.327979 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 6 00:34:17.327989 kernel: ata3.00: LPM support broken, forcing max_power
Nov 6 00:34:17.327999 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 6 00:34:17.328008 kernel: ata3.00: applying bridge limits
Nov 6 00:34:17.328017 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 6 00:34:17.328026 kernel: ata3.00: LPM support broken, forcing max_power
Nov 6 00:34:17.328034 kernel: ata3.00: configured for UDMA/100
Nov 6 00:34:17.328258 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 6 00:34:17.328453 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 6 00:34:17.328632 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Nov 6 00:34:17.328645 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 6 00:34:17.328657 kernel: GPT:16515071 != 27000831
Nov 6 00:34:17.328666 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 6 00:34:17.328674 kernel: GPT:16515071 != 27000831
Nov 6 00:34:17.328683 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 6 00:34:17.328695 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 6 00:34:17.328938 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 6 00:34:17.328954 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 6 00:34:17.329157 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 6 00:34:17.329170 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 6 00:34:17.329179 kernel: device-mapper: uevent: version 1.0.3
Nov 6 00:34:17.329188 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 6 00:34:17.329201 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 6 00:34:17.329213 kernel: raid6: avx2x4 gen() 28842 MB/s
Nov 6 00:34:17.329221 kernel: raid6: avx2x2 gen() 28354 MB/s
Nov 6 00:34:17.329230 kernel: raid6: avx2x1 gen() 23494 MB/s
Nov 6 00:34:17.329239 kernel: raid6: using algorithm avx2x4 gen() 28842 MB/s
Nov 6 00:34:17.329260 kernel: raid6: .... xor() 7455 MB/s, rmw enabled
Nov 6 00:34:17.329270 kernel: raid6: using avx2x2 recovery algorithm
Nov 6 00:34:17.329279 kernel: xor: automatically using best checksumming function avx
Nov 6 00:34:17.329288 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 6 00:34:17.329297 kernel: BTRFS: device fsid 4dd99ff0-78f7-441c-acc1-7ff3d924a9b4 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (181)
Nov 6 00:34:17.329306 kernel: BTRFS info (device dm-0): first mount of filesystem 4dd99ff0-78f7-441c-acc1-7ff3d924a9b4
Nov 6 00:34:17.329315 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:34:17.329326 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 6 00:34:17.329335 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 6 00:34:17.329343 kernel: loop: module loaded
Nov 6 00:34:17.329352 kernel: loop0: detected capacity change from 0 to 100120
Nov 6 00:34:17.329361 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 6 00:34:17.329371 systemd[1]: Successfully made /usr/ read-only.
Nov 6 00:34:17.329385 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 6 00:34:17.329395 systemd[1]: Detected virtualization kvm.
Nov 6 00:34:17.329404 systemd[1]: Detected architecture x86-64.
Nov 6 00:34:17.329413 systemd[1]: Running in initrd.
Nov 6 00:34:17.329422 systemd[1]: No hostname configured, using default hostname.
Nov 6 00:34:17.329432 systemd[1]: Hostname set to .
Nov 6 00:34:17.329444 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 6 00:34:17.329453 systemd[1]: Queued start job for default target initrd.target.
Nov 6 00:34:17.329462 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 6 00:34:17.329472 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 00:34:17.329481 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 00:34:17.329492 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 6 00:34:17.329501 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 6 00:34:17.329513 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 6 00:34:17.329523 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 6 00:34:17.329533 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 00:34:17.329542 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 6 00:34:17.329551 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 6 00:34:17.329563 systemd[1]: Reached target paths.target - Path Units.
Nov 6 00:34:17.329572 systemd[1]: Reached target slices.target - Slice Units.
Nov 6 00:34:17.329581 systemd[1]: Reached target swap.target - Swaps.
Nov 6 00:34:17.329591 systemd[1]: Reached target timers.target - Timer Units.
Nov 6 00:34:17.329600 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 6 00:34:17.329609 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 6 00:34:17.329618 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 6 00:34:17.329630 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 6 00:34:17.329639 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 00:34:17.329648 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 6 00:34:17.329658 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 00:34:17.329667 systemd[1]: Reached target sockets.target - Socket Units.
Nov 6 00:34:17.329677 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 6 00:34:17.329686 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 6 00:34:17.329699 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 6 00:34:17.329708 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 6 00:34:17.329718 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 6 00:34:17.329729 systemd[1]: Starting systemd-fsck-usr.service...
Nov 6 00:34:17.329741 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 6 00:34:17.329752 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 6 00:34:17.329766 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:34:17.329782 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 6 00:34:17.329793 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 00:34:17.329805 systemd[1]: Finished systemd-fsck-usr.service.
Nov 6 00:34:17.329819 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 6 00:34:17.329938 systemd-journald[315]: Collecting audit messages is disabled.
Nov 6 00:34:17.329962 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 6 00:34:17.329974 kernel: Bridge firewalling registered
Nov 6 00:34:17.329983 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 6 00:34:17.329993 systemd-journald[315]: Journal started
Nov 6 00:34:17.330013 systemd-journald[315]: Runtime Journal (/run/log/journal/dee6121de0fe40e3b41d0697ab4cc09b) is 6M, max 48.3M, 42.2M free.
Nov 6 00:34:17.327920 systemd-modules-load[317]: Inserted module 'br_netfilter'
Nov 6 00:34:17.333890 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 6 00:34:17.341181 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 6 00:34:17.343982 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 6 00:34:17.349581 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 6 00:34:17.360013 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 6 00:34:17.368060 systemd-tmpfiles[336]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 6 00:34:17.434141 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:34:17.438031 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 6 00:34:17.441691 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 00:34:17.446061 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 6 00:34:17.450917 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 6 00:34:17.466178 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 00:34:17.485942 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 6 00:34:17.491960 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 6 00:34:17.522709 systemd-resolved[347]: Positive Trust Anchors:
Nov 6 00:34:17.522727 systemd-resolved[347]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 6 00:34:17.527646 dracut-cmdline[360]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5a467f58ff1d38830572ea713da04924778847a98299b0cfa25690713b346f38
Nov 6 00:34:17.522732 systemd-resolved[347]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 6 00:34:17.522776 systemd-resolved[347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 6 00:34:17.558330 systemd-resolved[347]: Defaulting to hostname 'linux'.
Nov 6 00:34:17.559631 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 6 00:34:17.561729 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 6 00:34:17.666896 kernel: Loading iSCSI transport class v2.0-870.
Nov 6 00:34:17.682890 kernel: iscsi: registered transport (tcp)
Nov 6 00:34:17.709096 kernel: iscsi: registered transport (qla4xxx)
Nov 6 00:34:17.709185 kernel: QLogic iSCSI HBA Driver
Nov 6 00:34:17.740256 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 6 00:34:17.772124 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 00:34:17.774822 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 6 00:34:17.840458 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 6 00:34:17.842679 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 6 00:34:17.847001 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 6 00:34:17.892527 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 6 00:34:17.895052 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 00:34:17.941401 systemd-udevd[597]: Using default interface naming scheme 'v257'.
Nov 6 00:34:17.960572 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 00:34:17.962026 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 6 00:34:17.981262 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 6 00:34:17.988153 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 6 00:34:17.999807 dracut-pre-trigger[679]: rd.md=0: removing MD RAID activation
Nov 6 00:34:18.037638 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 6 00:34:18.044027 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 6 00:34:18.062386 systemd-networkd[701]: lo: Link UP
Nov 6 00:34:18.062397 systemd-networkd[701]: lo: Gained carrier
Nov 6 00:34:18.064041 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 6 00:34:18.066182 systemd[1]: Reached target network.target - Network.
Nov 6 00:34:18.200262 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 00:34:18.204289 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 6 00:34:18.269955 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 6 00:34:18.285467 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 6 00:34:18.304759 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 6 00:34:18.336627 kernel: cryptd: max_cpu_qlen set to 1000
Nov 6 00:34:18.336655 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Nov 6 00:34:18.336670 kernel: AES CTR mode by8 optimization enabled
Nov 6 00:34:18.349457 systemd-networkd[701]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 6 00:34:18.349469 systemd-networkd[701]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 6 00:34:18.349977 systemd-networkd[701]: eth0: Link UP
Nov 6 00:34:18.350822 systemd-networkd[701]: eth0: Gained carrier
Nov 6 00:34:18.350834 systemd-networkd[701]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 6 00:34:18.351301 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 6 00:34:18.365307 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 6 00:34:18.400573 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 6 00:34:18.400732 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:34:18.401649 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:34:18.413895 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:34:18.415962 systemd-networkd[701]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 6 00:34:18.431658 disk-uuid[833]: Primary Header is updated.
Nov 6 00:34:18.431658 disk-uuid[833]: Secondary Entries is updated.
Nov 6 00:34:18.431658 disk-uuid[833]: Secondary Header is updated.
Nov 6 00:34:18.441982 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 6 00:34:18.445429 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 6 00:34:18.446707 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 00:34:18.448227 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 6 00:34:18.453132 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 6 00:34:18.573448 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:34:18.591696 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 6 00:34:19.477606 disk-uuid[835]: Warning: The kernel is still using the old partition table.
Nov 6 00:34:19.477606 disk-uuid[835]: The new table will be used at the next reboot or after you
Nov 6 00:34:19.477606 disk-uuid[835]: run partprobe(8) or kpartx(8)
Nov 6 00:34:19.477606 disk-uuid[835]: The operation has completed successfully.
Nov 6 00:34:19.491667 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 6 00:34:19.491831 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 6 00:34:19.495783 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 6 00:34:19.549236 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (862)
Nov 6 00:34:19.549339 kernel: BTRFS info (device vda6): first mount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552
Nov 6 00:34:19.549367 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:34:19.556112 kernel: BTRFS info (device vda6): turning on async discard
Nov 6 00:34:19.556176 kernel: BTRFS info (device vda6): enabling free space tree
Nov 6 00:34:19.567474 kernel: BTRFS info (device vda6): last unmount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552
Nov 6 00:34:19.573239 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 6 00:34:19.575038 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 6 00:34:19.824191 ignition[881]: Ignition 2.22.0
Nov 6 00:34:19.824205 ignition[881]: Stage: fetch-offline
Nov 6 00:34:19.824259 ignition[881]: no configs at "/usr/lib/ignition/base.d"
Nov 6 00:34:19.824271 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 00:34:19.824374 ignition[881]: parsed url from cmdline: ""
Nov 6 00:34:19.824379 ignition[881]: no config URL provided
Nov 6 00:34:19.824384 ignition[881]: reading system config file "/usr/lib/ignition/user.ign"
Nov 6 00:34:19.824395 ignition[881]: no config at "/usr/lib/ignition/user.ign"
Nov 6 00:34:19.824440 ignition[881]: op(1): [started] loading QEMU firmware config module
Nov 6 00:34:19.824446 ignition[881]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 6 00:34:19.845597 ignition[881]: op(1): [finished] loading QEMU firmware config module
Nov 6 00:34:19.930573 ignition[881]: parsing config with SHA512: 958466b494e56384c996cfca5492811ccfa7480adf466db2c6d7cd61fcdf473018d79c9ae2e27347d9491e2de070f472b73caf5a63c2c793450046cd0bb1c5da
Nov 6 00:34:19.937008 systemd-networkd[701]: eth0: Gained IPv6LL
Nov 6 00:34:19.938306 unknown[881]: fetched base config from "system"
Nov 6 00:34:19.938734 ignition[881]: fetch-offline: fetch-offline passed
Nov 6 00:34:19.938314 unknown[881]: fetched user config from "qemu"
Nov 6 00:34:19.938800 ignition[881]: Ignition finished successfully
Nov 6 00:34:19.942379 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 6 00:34:19.945362 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 6 00:34:19.946373 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 6 00:34:19.994961 ignition[891]: Ignition 2.22.0
Nov 6 00:34:19.994977 ignition[891]: Stage: kargs
Nov 6 00:34:19.995139 ignition[891]: no configs at "/usr/lib/ignition/base.d"
Nov 6 00:34:19.995150 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 00:34:19.998962 ignition[891]: kargs: kargs passed
Nov 6 00:34:19.999017 ignition[891]: Ignition finished successfully
Nov 6 00:34:20.005022 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 6 00:34:20.009626 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 6 00:34:20.055304 ignition[899]: Ignition 2.22.0
Nov 6 00:34:20.055321 ignition[899]: Stage: disks
Nov 6 00:34:20.055509 ignition[899]: no configs at "/usr/lib/ignition/base.d"
Nov 6 00:34:20.055524 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 00:34:20.056506 ignition[899]: disks: disks passed
Nov 6 00:34:20.056563 ignition[899]: Ignition finished successfully
Nov 6 00:34:20.063057 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 6 00:34:20.067553 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 6 00:34:20.069739 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 6 00:34:20.069841 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 6 00:34:20.075826 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 6 00:34:20.075937 systemd[1]: Reached target basic.target - Basic System.
Nov 6 00:34:20.085284 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 6 00:34:20.138617 systemd-fsck[909]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Nov 6 00:34:20.342919 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 6 00:34:20.348042 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 6 00:34:20.536921 kernel: EXT4-fs (vda9): mounted filesystem d1cfc077-cc9a-4d2c-97de-8a87792eb8cf r/w with ordered data mode. Quota mode: none.
Nov 6 00:34:20.538382 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 6 00:34:20.540642 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 6 00:34:20.545062 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 6 00:34:20.546335 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 6 00:34:20.551019 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 6 00:34:20.551093 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 6 00:34:20.551143 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 6 00:34:20.566561 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 6 00:34:20.568631 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 6 00:34:20.593886 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (917)
Nov 6 00:34:20.597941 kernel: BTRFS info (device vda6): first mount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552
Nov 6 00:34:20.597974 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:34:20.602066 kernel: BTRFS info (device vda6): turning on async discard
Nov 6 00:34:20.602163 kernel: BTRFS info (device vda6): enabling free space tree
Nov 6 00:34:20.603704 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 6 00:34:20.656331 initrd-setup-root[941]: cut: /sysroot/etc/passwd: No such file or directory
Nov 6 00:34:20.661261 initrd-setup-root[948]: cut: /sysroot/etc/group: No such file or directory
Nov 6 00:34:20.666722 initrd-setup-root[955]: cut: /sysroot/etc/shadow: No such file or directory
Nov 6 00:34:20.672725 initrd-setup-root[962]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 6 00:34:20.791777 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 6 00:34:20.797731 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 6 00:34:20.798755 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 6 00:34:20.828744 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 6 00:34:20.831304 kernel: BTRFS info (device vda6): last unmount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552
Nov 6 00:34:20.849035 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 6 00:34:20.882057 ignition[1031]: INFO : Ignition 2.22.0
Nov 6 00:34:20.882057 ignition[1031]: INFO : Stage: mount
Nov 6 00:34:20.885392 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 00:34:20.885392 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 00:34:20.885392 ignition[1031]: INFO : mount: mount passed
Nov 6 00:34:20.885392 ignition[1031]: INFO : Ignition finished successfully
Nov 6 00:34:20.887021 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 6 00:34:20.890599 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 6 00:34:21.540610 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 6 00:34:21.633889 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1043)
Nov 6 00:34:21.637183 kernel: BTRFS info (device vda6): first mount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552
Nov 6 00:34:21.637248 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:34:21.640784 kernel: BTRFS info (device vda6): turning on async discard
Nov 6 00:34:21.640820 kernel: BTRFS info (device vda6): enabling free space tree
Nov 6 00:34:21.642557 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 6 00:34:21.683527 ignition[1060]: INFO : Ignition 2.22.0
Nov 6 00:34:21.683527 ignition[1060]: INFO : Stage: files
Nov 6 00:34:21.686590 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 00:34:21.686590 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 00:34:21.686590 ignition[1060]: DEBUG : files: compiled without relabeling support, skipping
Nov 6 00:34:21.686590 ignition[1060]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 6 00:34:21.686590 ignition[1060]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 6 00:34:21.701927 ignition[1060]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 6 00:34:21.704323 ignition[1060]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 6 00:34:21.707364 unknown[1060]: wrote ssh authorized keys file for user: core
Nov 6 00:34:21.709273 ignition[1060]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 6 00:34:21.711875 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 6 00:34:21.711875 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 6 00:34:21.766912 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 6 00:34:21.845013 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 6 00:34:21.845013 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 6 00:34:21.852097 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 6 00:34:21.852097 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 6 00:34:21.852097 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 6 00:34:21.852097 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 6 00:34:21.852097 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 6 00:34:21.852097 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 6 00:34:21.852097 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 6 00:34:21.912979 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 6 00:34:21.924129 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 6 00:34:21.924129 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 6 00:34:21.965781 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 6 00:34:21.965781 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 6 00:34:21.973945 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Nov 6 00:34:22.274787 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 6 00:34:22.726611 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 6 00:34:22.726611 ignition[1060]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 6 00:34:22.732966 ignition[1060]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 6 00:34:22.835094 ignition[1060]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 6 00:34:22.835094 ignition[1060]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 6 00:34:22.835094 ignition[1060]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 6 00:34:22.835094 ignition[1060]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 6 00:34:22.847760 ignition[1060]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 6 00:34:22.847760 ignition[1060]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 6 00:34:22.847760 ignition[1060]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 6 00:34:22.885943 ignition[1060]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 6 00:34:22.895301 ignition[1060]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 6 00:34:22.898050 ignition[1060]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 6 00:34:22.898050 ignition[1060]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 6 00:34:22.898050 ignition[1060]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 6 00:34:22.898050 ignition[1060]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 6 00:34:22.898050 ignition[1060]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 6 00:34:22.898050 ignition[1060]: INFO : files: files passed
Nov 6 00:34:22.898050 ignition[1060]: INFO : Ignition finished successfully
Nov 6 00:34:22.904251 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 6 00:34:22.912758 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 6 00:34:22.921561 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 6 00:34:22.933494 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 6 00:34:22.933681 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 6 00:34:22.942431 initrd-setup-root-after-ignition[1092]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 6 00:34:22.948821 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 00:34:22.948821 initrd-setup-root-after-ignition[1094]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 00:34:22.954314 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 00:34:22.958945 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 6 00:34:22.961380 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 6 00:34:22.973306 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 6 00:34:23.034157 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 6 00:34:23.049540 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 6 00:34:23.052178 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 6 00:34:23.055592 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 6 00:34:23.059597 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 6 00:34:23.060774 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 6 00:34:23.109053 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 6 00:34:23.114562 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 6 00:34:23.153034 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 6 00:34:23.153218 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 6 00:34:23.155375 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 00:34:23.159136 systemd[1]: Stopped target timers.target - Timer Units.
Nov 6 00:34:23.162806 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 6 00:34:23.163034 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 6 00:34:23.171111 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 6 00:34:23.171317 systemd[1]: Stopped target basic.target - Basic System.
Nov 6 00:34:23.175987 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 6 00:34:23.177458 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 6 00:34:23.178299 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 6 00:34:23.184250 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 6 00:34:23.187713 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 6 00:34:23.192976 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 6 00:34:23.194667 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 6 00:34:23.198729 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 6 00:34:23.203659 systemd[1]: Stopped target swap.target - Swaps.
Nov 6 00:34:23.206669 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 6 00:34:23.206816 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 6 00:34:23.213090 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 6 00:34:23.215052 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 00:34:23.218793 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 6 00:34:23.219013 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 00:34:23.220720 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 6 00:34:23.220880 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 6 00:34:23.230132 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 6 00:34:23.230280 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 6 00:34:23.233872 systemd[1]: Stopped target paths.target - Path Units.
Nov 6 00:34:23.235480 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 6 00:34:23.241953 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 00:34:23.242151 systemd[1]: Stopped target slices.target - Slice Units.
Nov 6 00:34:23.246518 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 6 00:34:23.247408 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 6 00:34:23.247502 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 6 00:34:23.252227 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 6 00:34:23.252334 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 6 00:34:23.252777 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 6 00:34:23.252936 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 6 00:34:23.258323 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 6 00:34:23.258454 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 6 00:34:23.267087 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 6 00:34:23.270501 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 6 00:34:23.273611 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 6 00:34:23.273767 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 00:34:23.274579 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 6 00:34:23.274688 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 00:34:23.280449 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 6 00:34:23.280679 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 6 00:34:23.290391 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 6 00:34:23.301109 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 6 00:34:23.335300 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 6 00:34:23.346130 ignition[1118]: INFO : Ignition 2.22.0
Nov 6 00:34:23.348287 ignition[1118]: INFO : Stage: umount
Nov 6 00:34:23.348287 ignition[1118]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 00:34:23.348287 ignition[1118]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 00:34:23.348287 ignition[1118]: INFO : umount: umount passed
Nov 6 00:34:23.348287 ignition[1118]: INFO : Ignition finished successfully
Nov 6 00:34:23.351455 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 6 00:34:23.351596 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 6 00:34:23.353695 systemd[1]: Stopped target network.target - Network.
Nov 6 00:34:23.353774 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 6 00:34:23.353829 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 6 00:34:23.354381 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 6 00:34:23.354437 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 6 00:34:23.355233 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 6 00:34:23.355286 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 6 00:34:23.363819 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 6 00:34:23.363916 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 6 00:34:23.367168 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 6 00:34:23.368681 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 6 00:34:23.388993 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 6 00:34:23.389182 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 6 00:34:23.397941 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 6 00:34:23.398136 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 6 00:34:23.405737 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 6 00:34:23.407916 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 6 00:34:23.407970 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 00:34:23.422561 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 6 00:34:23.427717 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 6 00:34:23.427838 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 6 00:34:23.428583 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 6 00:34:23.428655 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 6 00:34:23.429380 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 6 00:34:23.429438 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 6 00:34:23.430557 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 00:34:23.448063 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 6 00:34:23.452154 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 6 00:34:23.454382 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 6 00:34:23.454557 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 00:34:23.459117 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 6 00:34:23.459208 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 6 00:34:23.462904 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 6 00:34:23.462948 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 00:34:23.465198 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 6 00:34:23.465260 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 6 00:34:23.469668 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 6 00:34:23.469725 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 6 00:34:23.476864 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 6 00:34:23.476930 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 6 00:34:23.486120 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 6 00:34:23.486205 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 6 00:34:23.490813 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 6 00:34:23.491883 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 6 00:34:23.491956 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 00:34:23.492582 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 6 00:34:23.492639 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 00:34:23.498793 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 6 00:34:23.498905 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 6 00:34:23.504375 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 6 00:34:23.504433 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 00:34:23.506128 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 6 00:34:23.506183 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:34:23.524866 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 6 00:34:23.525005 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 6 00:34:23.564886 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 6 00:34:23.565076 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 6 00:34:23.570291 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 6 00:34:23.571511 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 6 00:34:23.600407 systemd[1]: Switching root.
Nov 6 00:34:23.649525 systemd-journald[315]: Journal stopped
Nov 6 00:34:25.534600 systemd-journald[315]: Received SIGTERM from PID 1 (systemd).
Nov 6 00:34:25.534685 kernel: SELinux: policy capability network_peer_controls=1
Nov 6 00:34:25.534707 kernel: SELinux: policy capability open_perms=1
Nov 6 00:34:25.534719 kernel: SELinux: policy capability extended_socket_class=1
Nov 6 00:34:25.534740 kernel: SELinux: policy capability always_check_network=0
Nov 6 00:34:25.534753 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 6 00:34:25.534766 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 6 00:34:25.534777 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 6 00:34:25.534795 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 6 00:34:25.534807 kernel: SELinux: policy capability userspace_initial_context=0
Nov 6 00:34:25.534819 kernel: audit: type=1403 audit(1762389264.357:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 6 00:34:25.534840 systemd[1]: Successfully loaded SELinux policy in 80.412ms.
Nov 6 00:34:25.534906 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.168ms.
Nov 6 00:34:25.534936 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 6 00:34:25.534960 systemd[1]: Detected virtualization kvm.
Nov 6 00:34:25.534974 systemd[1]: Detected architecture x86-64.
Nov 6 00:34:25.535001 systemd[1]: Detected first boot.
Nov 6 00:34:25.535023 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 6 00:34:25.535048 zram_generator::config[1164]: No configuration found.
Nov 6 00:34:25.535062 kernel: Guest personality initialized and is inactive
Nov 6 00:34:25.535074 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 6 00:34:25.535086 kernel: Initialized host personality
Nov 6 00:34:25.535103 kernel: NET: Registered PF_VSOCK protocol family
Nov 6 00:34:25.535115 systemd[1]: Populated /etc with preset unit settings.
Nov 6 00:34:25.535128 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 6 00:34:25.535148 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 6 00:34:25.535162 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 6 00:34:25.535176 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 6 00:34:25.535189 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 6 00:34:25.535202 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 6 00:34:25.535215 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 6 00:34:25.535228 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 6 00:34:25.535253 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 6 00:34:25.535270 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 6 00:34:25.535286 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 6 00:34:25.535302 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 00:34:25.535317 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 00:34:25.535339 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 6 00:34:25.535355 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 6 00:34:25.535381 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 6 00:34:25.535398 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 6 00:34:25.535414 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 6 00:34:25.535430 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 00:34:25.535446 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 6 00:34:25.535461 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 6 00:34:25.535482 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 6 00:34:25.535494 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 6 00:34:25.535507 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 6 00:34:25.535520 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 00:34:25.535536 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 6 00:34:25.535549 systemd[1]: Reached target slices.target - Slice Units.
Nov 6 00:34:25.535562 systemd[1]: Reached target swap.target - Swaps.
Nov 6 00:34:25.535582 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 6 00:34:25.535595 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 6 00:34:25.535607 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 6 00:34:25.535620 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 00:34:25.535633 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 6 00:34:25.535646 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 00:34:25.535663 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 6 00:34:25.535683 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 6 00:34:25.535696 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 6 00:34:25.535709 systemd[1]: Mounting media.mount - External Media Directory...
Nov 6 00:34:25.535722 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:34:25.535735 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 6 00:34:25.535748 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 6 00:34:25.535761 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 6 00:34:25.535782 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 6 00:34:25.535795 systemd[1]: Reached target machines.target - Containers.
Nov 6 00:34:25.535808 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 6 00:34:25.535821 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 00:34:25.535834 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 6 00:34:25.535862 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 6 00:34:25.535876 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 6 00:34:25.535898 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 6 00:34:25.535911 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 6 00:34:25.535924 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 6 00:34:25.535937 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 6 00:34:25.535950 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 6 00:34:25.535962 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 6 00:34:25.535986 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 6 00:34:25.535999 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 6 00:34:25.536019 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 6 00:34:25.536033 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 00:34:25.536046 kernel: fuse: init (API version 7.41)
Nov 6 00:34:25.536058 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 6 00:34:25.536071 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 6 00:34:25.536092 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 6 00:34:25.536105 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 6 00:34:25.536118 kernel: ACPI: bus type drm_connector registered
Nov 6 00:34:25.536130 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 6 00:34:25.536144 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 6 00:34:25.536164 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:34:25.536177 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 6 00:34:25.536190 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 6 00:34:25.536204 systemd[1]: Mounted media.mount - External Media Directory.
Nov 6 00:34:25.536238 systemd-journald[1228]: Collecting audit messages is disabled.
Nov 6 00:34:25.536271 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 6 00:34:25.536285 systemd-journald[1228]: Journal started
Nov 6 00:34:25.536308 systemd-journald[1228]: Runtime Journal (/run/log/journal/dee6121de0fe40e3b41d0697ab4cc09b) is 6M, max 48.3M, 42.2M free.
Nov 6 00:34:24.991748 systemd[1]: Queued start job for default target multi-user.target.
Nov 6 00:34:25.015847 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 6 00:34:25.016570 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 6 00:34:25.544881 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 6 00:34:25.547898 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 6 00:34:25.550054 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 6 00:34:25.552284 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 00:34:25.554844 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 6 00:34:25.555125 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 6 00:34:25.557580 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 6 00:34:25.557805 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 6 00:34:25.560399 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 6 00:34:25.560658 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 6 00:34:25.563211 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 6 00:34:25.563627 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 6 00:34:25.566244 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 6 00:34:25.566515 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 6 00:34:25.568835 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 6 00:34:25.569116 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 6 00:34:25.571547 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 6 00:34:25.594105 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 00:34:25.598104 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 6 00:34:25.601044 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 6 00:34:25.617135 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 00:34:25.627831 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 6 00:34:25.630550 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 6 00:34:25.635002 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 6 00:34:25.665773 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 6 00:34:25.667745 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 6 00:34:25.667799 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 6 00:34:25.670828 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 6 00:34:25.673129 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 00:34:25.674593 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 6 00:34:25.677460 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 6 00:34:25.679579 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 6 00:34:25.680699 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 6 00:34:25.682779 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 6 00:34:25.690014 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 6 00:34:25.694677 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 6 00:34:25.697927 systemd-journald[1228]: Time spent on flushing to /var/log/journal/dee6121de0fe40e3b41d0697ab4cc09b is 15.958ms for 965 entries.
Nov 6 00:34:25.697927 systemd-journald[1228]: System Journal (/var/log/journal/dee6121de0fe40e3b41d0697ab4cc09b) is 8M, max 163.5M, 155.5M free.
Nov 6 00:34:25.883605 systemd-journald[1228]: Received client request to flush runtime journal.
Nov 6 00:34:25.883673 kernel: loop1: detected capacity change from 0 to 229808
Nov 6 00:34:25.883700 kernel: loop2: detected capacity change from 0 to 110976
Nov 6 00:34:25.723932 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 6 00:34:25.728675 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 6 00:34:25.731231 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 6 00:34:25.733647 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 6 00:34:25.817991 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 6 00:34:25.852300 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Nov 6 00:34:25.852317 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Nov 6 00:34:25.858577 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 6 00:34:25.861456 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 6 00:34:25.864843 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 6 00:34:25.869075 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 6 00:34:25.874663 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 6 00:34:25.892633 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 6 00:34:25.934705 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 6 00:34:25.941872 kernel: loop3: detected capacity change from 0 to 128048
Nov 6 00:34:25.947783 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 6 00:34:25.952429 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 6 00:34:25.955286 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 6 00:34:25.980934 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 6 00:34:25.989280 kernel: loop4: detected capacity change from 0 to 229808
Nov 6 00:34:25.994457 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Nov 6 00:34:25.994479 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Nov 6 00:34:25.999886 kernel: loop5: detected capacity change from 0 to 110976
Nov 6 00:34:25.999954 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 00:34:26.010879 kernel: loop6: detected capacity change from 0 to 128048
Nov 6 00:34:26.018458 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 6 00:34:26.019690 (sd-merge)[1308]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Nov 6 00:34:26.023685 (sd-merge)[1308]: Merged extensions into '/usr'.
Nov 6 00:34:26.029116 systemd[1]: Reload requested from client PID 1283 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 6 00:34:26.029132 systemd[1]: Reloading...
Nov 6 00:34:26.105947 zram_generator::config[1342]: No configuration found.
Nov 6 00:34:26.121075 systemd-resolved[1303]: Positive Trust Anchors:
Nov 6 00:34:26.121098 systemd-resolved[1303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 6 00:34:26.121103 systemd-resolved[1303]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 6 00:34:26.121148 systemd-resolved[1303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 6 00:34:26.126008 systemd-resolved[1303]: Defaulting to hostname 'linux'.
Nov 6 00:34:26.325123 systemd[1]: Reloading finished in 295 ms.
Nov 6 00:34:26.555619 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 6 00:34:26.558208 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 6 00:34:26.560667 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 6 00:34:26.566393 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 6 00:34:26.589832 systemd[1]: Starting ensure-sysext.service...
Nov 6 00:34:26.592735 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 6 00:34:26.762386 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 6 00:34:26.762452 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 6 00:34:26.762945 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 6 00:34:26.763279 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 6 00:34:26.764337 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 6 00:34:26.764746 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
Nov 6 00:34:26.764891 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
Nov 6 00:34:26.772298 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot.
Nov 6 00:34:26.772315 systemd-tmpfiles[1379]: Skipping /boot
Nov 6 00:34:26.772561 systemd[1]: Reload requested from client PID 1378 ('systemctl') (unit ensure-sysext.service)...
Nov 6 00:34:26.772586 systemd[1]: Reloading...
Nov 6 00:34:26.786841 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot.
Nov 6 00:34:26.786873 systemd-tmpfiles[1379]: Skipping /boot
Nov 6 00:34:26.840022 zram_generator::config[1418]: No configuration found.
Nov 6 00:34:27.073560 systemd[1]: Reloading finished in 300 ms.
Nov 6 00:34:27.097113 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 6 00:34:27.128959 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 00:34:27.142354 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 6 00:34:27.145884 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 6 00:34:27.154958 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 6 00:34:27.160289 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 6 00:34:27.168111 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 00:34:27.174034 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 6 00:34:27.180822 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:34:27.181049 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 00:34:27.184174 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 6 00:34:27.193115 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 6 00:34:27.199735 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 6 00:34:27.205026 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 00:34:27.205436 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 00:34:27.205573 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:34:27.208303 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 6 00:34:27.208635 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 6 00:34:27.211989 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 6 00:34:27.212235 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 6 00:34:27.216703 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 6 00:34:27.217093 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 6 00:34:27.235342 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 6 00:34:27.239813 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:34:27.241469 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 00:34:27.242448 systemd-udevd[1453]: Using default interface naming scheme 'v257'.
Nov 6 00:34:27.245111 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 6 00:34:27.251117 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 6 00:34:27.255272 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 6 00:34:27.257435 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 00:34:27.257589 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 00:34:27.257744 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:34:27.262426 augenrules[1483]: No rules
Nov 6 00:34:27.263155 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 6 00:34:27.269937 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 6 00:34:27.270336 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 6 00:34:27.273231 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 6 00:34:27.273643 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 6 00:34:27.280071 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 6 00:34:27.280420 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 6 00:34:27.283837 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 6 00:34:27.284122 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 6 00:34:27.287915 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 6 00:34:27.300529 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 00:34:27.305658 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:34:27.308772 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 6 00:34:27.310891 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 00:34:27.313608 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 6 00:34:27.320008 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 6 00:34:27.331380 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 6 00:34:27.338565 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 6 00:34:27.344028 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 00:34:27.344146 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 00:34:27.345678 augenrules[1503]: /sbin/augenrules: No change
Nov 6 00:34:27.346999 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 6 00:34:27.348892 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 6 00:34:27.349013 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:34:27.351104 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 6 00:34:27.351363 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 6 00:34:27.354057 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 6 00:34:27.354321 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 6 00:34:27.361998 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 6 00:34:27.362339 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 6 00:34:27.365484 augenrules[1535]: No rules
Nov 6 00:34:27.366536 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 6 00:34:27.366842 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 6 00:34:27.370377 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 6 00:34:27.370692 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 6 00:34:27.378977 systemd[1]: Finished ensure-sysext.service.
Nov 6 00:34:27.403626 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 6 00:34:27.403887 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 6 00:34:27.403950 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 6 00:34:27.407091 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 6 00:34:27.490391 systemd-networkd[1529]: lo: Link UP
Nov 6 00:34:27.490410 systemd-networkd[1529]: lo: Gained carrier
Nov 6 00:34:27.494122 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 6 00:34:27.495698 systemd-networkd[1529]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 6 00:34:27.495713 systemd-networkd[1529]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 6 00:34:27.496627 systemd[1]: Reached target network.target - Network.
Nov 6 00:34:27.501436 systemd-networkd[1529]: eth0: Link UP
Nov 6 00:34:27.502189 systemd-networkd[1529]: eth0: Gained carrier
Nov 6 00:34:27.502222 systemd-networkd[1529]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 6 00:34:27.506698 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 6 00:34:27.510708 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 6 00:34:27.519105 systemd-networkd[1529]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 6 00:34:27.553876 kernel: mousedev: PS/2 mouse device common for all mice
Nov 6 00:34:27.567655 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 6 00:34:27.572396 systemd[1]: Reached target time-set.target - System Time Set.
Nov 6 00:34:28.773679 systemd-resolved[1303]: Clock change detected. Flushing caches.
Nov 6 00:34:28.773987 systemd-timesyncd[1551]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 6 00:34:28.774566 systemd-timesyncd[1551]: Initial clock synchronization to Thu 2025-11-06 00:34:28.773597 UTC.
Nov 6 00:34:28.783942 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Nov 6 00:34:28.785422 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 6 00:34:28.795411 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 6 00:34:28.801279 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 6 00:34:28.807869 kernel: ACPI: button: Power Button [PWRF]
Nov 6 00:34:28.815498 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 6 00:34:28.815841 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 6 00:34:28.851325 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 6 00:34:29.035416 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:34:29.144412 kernel: kvm_amd: TSC scaling supported
Nov 6 00:34:29.144504 kernel: kvm_amd: Nested Virtualization enabled
Nov 6 00:34:29.144529 kernel: kvm_amd: Nested Paging enabled
Nov 6 00:34:29.144543 kernel: kvm_amd: LBR virtualization supported
Nov 6 00:34:29.144563 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 6 00:34:29.144576 kernel: kvm_amd: Virtual GIF supported
Nov 6 00:34:29.182664 ldconfig[1450]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 6 00:34:29.185677 kernel: EDAC MC: Ver: 3.0.0
Nov 6 00:34:29.191319 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 6 00:34:29.222554 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 6 00:34:29.224838 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:34:29.245581 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 6 00:34:29.248022 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 6 00:34:29.250188 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 6 00:34:29.252561 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 6 00:34:29.255014 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 6 00:34:29.257332 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 6 00:34:29.259273 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 6 00:34:29.261476 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 6 00:34:29.263822 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 6 00:34:29.263872 systemd[1]: Reached target paths.target - Path Units.
Nov 6 00:34:29.265501 systemd[1]: Reached target timers.target - Timer Units.
Nov 6 00:34:29.268476 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 6 00:34:29.272738 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 6 00:34:29.276988 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 6 00:34:29.279297 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 6 00:34:29.281383 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 6 00:34:29.285492 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 6 00:34:29.287569 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 6 00:34:29.290171 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 6 00:34:29.292783 systemd[1]: Reached target sockets.target - Socket Units.
Nov 6 00:34:29.294392 systemd[1]: Reached target basic.target - Basic System.
Nov 6 00:34:29.295998 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 6 00:34:29.296030 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 6 00:34:29.297081 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 6 00:34:29.300301 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 6 00:34:29.303317 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 6 00:34:29.306589 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 6 00:34:29.309953 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 6 00:34:29.312737 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 6 00:34:29.314662 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 6 00:34:29.317596 jq[1599]: false
Nov 6 00:34:29.318720 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 6 00:34:29.322179 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 6 00:34:29.324621 extend-filesystems[1600]: Found /dev/vda6
Nov 6 00:34:29.328762 extend-filesystems[1600]: Found /dev/vda9
Nov 6 00:34:29.329865 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 6 00:34:29.330694 extend-filesystems[1600]: Checking size of /dev/vda9
Nov 6 00:34:29.333531 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 6 00:34:29.334927 oslogin_cache_refresh[1601]: Refreshing passwd entry cache
Nov 6 00:34:29.336688 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Refreshing passwd entry cache
Nov 6 00:34:29.342825 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 6 00:34:29.344952 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 6 00:34:29.345736 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 6 00:34:29.346680 extend-filesystems[1600]: Resized partition /dev/vda9
Nov 6 00:34:29.348004 systemd[1]: Starting update-engine.service - Update Engine...
Nov 6 00:34:29.349985 extend-filesystems[1619]: resize2fs 1.47.3 (8-Jul-2025)
Nov 6 00:34:29.351604 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 6 00:34:29.354528 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Failure getting users, quitting
Nov 6 00:34:29.354522 oslogin_cache_refresh[1601]: Failure getting users, quitting
Nov 6 00:34:29.354836 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 6 00:34:29.354836 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Refreshing group entry cache
Nov 6 00:34:29.354547 oslogin_cache_refresh[1601]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 6 00:34:29.354604 oslogin_cache_refresh[1601]: Refreshing group entry cache
Nov 6 00:34:29.357669 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Nov 6 00:34:29.359137 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 6 00:34:29.363149 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 6 00:34:29.363504 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 6 00:34:29.363809 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Failure getting groups, quitting
Nov 6 00:34:29.364094 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 6 00:34:29.363809 oslogin_cache_refresh[1601]: Failure getting groups, quitting
Nov 6 00:34:29.363826 oslogin_cache_refresh[1601]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 6 00:34:29.364932 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 6 00:34:29.365317 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 6 00:34:29.369278 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 6 00:34:29.371881 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 6 00:34:29.380944 systemd[1]: motdgen.service: Deactivated successfully.
Nov 6 00:34:29.381395 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 6 00:34:29.394265 jq[1622]: true
Nov 6 00:34:29.402132 (ntainerd)[1634]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 6 00:34:29.630945 jq[1640]: true
Nov 6 00:34:29.737946 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Nov 6 00:34:29.752582 update_engine[1617]: I20251106 00:34:29.752485 1617 main.cc:92] Flatcar Update Engine starting
Nov 6 00:34:29.766953 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 6 00:34:29.766589 dbus-daemon[1597]: [system] SELinux support is enabled
Nov 6 00:34:30.480753 update_engine[1617]: I20251106 00:34:29.775450 1617 update_check_scheduler.cc:74] Next update check in 9m14s
Nov 6 00:34:29.771076 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 6 00:34:29.771105 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 6 00:34:29.773216 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 6 00:34:29.773231 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 6 00:34:29.775443 systemd[1]: Started update-engine.service - Update Engine.
Nov 6 00:34:29.778896 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 6 00:34:29.828395 locksmithd[1665]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 6 00:34:30.097931 systemd-networkd[1529]: eth0: Gained IPv6LL
Nov 6 00:34:30.101593 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 6 00:34:30.104008 systemd[1]: Reached target network-online.target - Network is Online.
Nov 6 00:34:30.107658 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Nov 6 00:34:30.482325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 00:34:30.482522 systemd-logind[1613]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 6 00:34:30.482544 systemd-logind[1613]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 6 00:34:30.484192 extend-filesystems[1619]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 6 00:34:30.484192 extend-filesystems[1619]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 6 00:34:30.484192 extend-filesystems[1619]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Nov 6 00:34:30.498964 extend-filesystems[1600]: Resized filesystem in /dev/vda9
Nov 6 00:34:30.484936 systemd-logind[1613]: New seat seat0.
Nov 6 00:34:30.488920 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 6 00:34:30.493987 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 6 00:34:30.499183 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 6 00:34:30.503980 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 6 00:34:30.515669 bash[1662]: Updated "/home/core/.ssh/authorized_keys"
Nov 6 00:34:30.521226 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 6 00:34:30.530394 tar[1627]: linux-amd64/LICENSE
Nov 6 00:34:30.530394 tar[1627]: linux-amd64/helm
Nov 6 00:34:30.531399 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 6 00:34:30.534204 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 6 00:34:30.540269 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 6 00:34:30.553315 systemd[1]: coreos-metadata.service: Deactivated successfully.
Nov 6 00:34:30.553702 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Nov 6 00:34:30.556210 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 6 00:34:30.593559 containerd[1634]: time="2025-11-06T00:34:30Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 6 00:34:30.596094 containerd[1634]: time="2025-11-06T00:34:30.596046178Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Nov 6 00:34:30.608067 containerd[1634]: time="2025-11-06T00:34:30.607805701Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.263µs"
Nov 6 00:34:30.608067 containerd[1634]: time="2025-11-06T00:34:30.607854422Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 6 00:34:30.608067 containerd[1634]: time="2025-11-06T00:34:30.607875893Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 6 00:34:30.608829 containerd[1634]: time="2025-11-06T00:34:30.608786450Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 6 00:34:30.608829 containerd[1634]: time="2025-11-06T00:34:30.608812860Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 6 00:34:30.608895 containerd[1634]: time="2025-11-06T00:34:30.608857463Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 6 00:34:30.608962 containerd[1634]: time="2025-11-06T00:34:30.608937634Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 6 00:34:30.608962 containerd[1634]: time="2025-11-06T00:34:30.608954946Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 6 00:34:30.609291 containerd[1634]: time="2025-11-06T00:34:30.609257103Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 6 00:34:30.609291 containerd[1634]: time="2025-11-06T00:34:30.609278493Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 6 00:34:30.609349 containerd[1634]: time="2025-11-06T00:34:30.609298631Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 6 00:34:30.609349 containerd[1634]: time="2025-11-06T00:34:30.609310012Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 6 00:34:30.609438 containerd[1634]: time="2025-11-06T00:34:30.609416101Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 6 00:34:30.609748 containerd[1634]: time="2025-11-06T00:34:30.609723167Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 6 00:34:30.609798 containerd[1634]: time="2025-11-06T00:34:30.609760918Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 6 00:34:30.609798 containerd[1634]: time="2025-11-06T00:34:30.609772700Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 6 00:34:30.609934 containerd[1634]: time="2025-11-06T00:34:30.609824767Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 6 00:34:30.610178 containerd[1634]: time="2025-11-06T00:34:30.610140449Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 6 00:34:30.610246 containerd[1634]: time="2025-11-06T00:34:30.610222323Z" level=info msg="metadata content store policy set" policy=shared
Nov 6 00:34:30.619397 containerd[1634]: time="2025-11-06T00:34:30.619319564Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 6 00:34:30.619445 containerd[1634]: time="2025-11-06T00:34:30.619416105Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 6 00:34:30.619445 containerd[1634]: time="2025-11-06T00:34:30.619434760Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 6 00:34:30.619573 containerd[1634]: time="2025-11-06T00:34:30.619539606Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 6 00:34:30.619573 containerd[1634]: time="2025-11-06T00:34:30.619563551Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 6 00:34:30.619632 containerd[1634]: time="2025-11-06T00:34:30.619575864Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 6 00:34:30.621353 containerd[1634]: time="2025-11-06T00:34:30.621304306Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 6 00:34:30.621403 containerd[1634]: time="2025-11-06T00:34:30.621360742Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 6 00:34:30.621403 containerd[1634]: time="2025-11-06T00:34:30.621380198Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 6 00:34:30.621403 containerd[1634]: time="2025-11-06T00:34:30.621398252Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 6 00:34:30.621506 containerd[1634]: time="2025-11-06T00:34:30.621414713Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 6 00:34:30.621506 containerd[1634]: time="2025-11-06T00:34:30.621436574Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 6 00:34:30.621683 containerd[1634]: time="2025-11-06T00:34:30.621658670Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 6 00:34:30.621728 containerd[1634]: time="2025-11-06T00:34:30.621696621Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 6 00:34:30.621728 containerd[1634]: time="2025-11-06T00:34:30.621719625Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 6 00:34:30.621792 containerd[1634]: time="2025-11-06T00:34:30.621736877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 6 00:34:30.621792 containerd[1634]: time="2025-11-06T00:34:30.621753428Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 6 00:34:30.621792 containerd[1634]: time="2025-11-06T00:34:30.621766933Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 6 00:34:30.621792 containerd[1634]: time="2025-11-06T00:34:30.621783064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 6 00:34:30.621970 containerd[1634]: time="2025-11-06T00:34:30.621798653Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 6 00:34:30.621970 containerd[1634]: time="2025-11-06T00:34:30.621815715Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 6 00:34:30.621970 containerd[1634]: time="2025-11-06T00:34:30.621828138Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 6 00:34:30.621970 containerd[1634]: time="2025-11-06T00:34:30.621843076Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 6 00:34:30.621970 containerd[1634]: time="2025-11-06T00:34:30.621925461Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 6 00:34:30.621970 containerd[1634]: time="2025-11-06T00:34:30.621943054Z" level=info msg="Start snapshots syncer"
Nov 6 00:34:30.622122 containerd[1634]: time="2025-11-06T00:34:30.621974312Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 6 00:34:30.623333 containerd[1634]: time="2025-11-06T00:34:30.623223715Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 6 00:34:30.623559 containerd[1634]: time="2025-11-06T00:34:30.623448397Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 6 00:34:30.627097 containerd[1634]: time="2025-11-06T00:34:30.626249550Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 6 00:34:30.627097 containerd[1634]: time="2025-11-06T00:34:30.626403408Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 6 00:34:30.627097 containerd[1634]: time="2025-11-06T00:34:30.626427614Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 6 00:34:30.627097 containerd[1634]: time="2025-11-06T00:34:30.626441029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 6 00:34:30.627097 containerd[1634]: time="2025-11-06T00:34:30.626453051Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 6 00:34:30.627097 containerd[1634]: time="2025-11-06T00:34:30.626465956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 6 00:34:30.627097 containerd[1634]: time="2025-11-06T00:34:30.626477808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 6 00:34:30.627097 containerd[1634]: time="2025-11-06T00:34:30.626489329Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 6 00:34:30.627097 containerd[1634]: time="2025-11-06T00:34:30.626513665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 6 00:34:30.627097 containerd[1634]: time="2025-11-06T00:34:30.626526319Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 6 00:34:30.627097 containerd[1634]: time="2025-11-06T00:34:30.626538051Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 6 00:34:30.627097 containerd[1634]: time="2025-11-06T00:34:30.626574880Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 6 00:34:30.627097 containerd[1634]: time="2025-11-06T00:34:30.626590690Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 6 00:34:30.627097 containerd[1634]: time="2025-11-06T00:34:30.626600979Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 6 00:34:30.627484 containerd[1634]: time="2025-11-06T00:34:30.626611989Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 6 00:34:30.627484 containerd[1634]: time="2025-11-06T00:34:30.626621097Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 6 00:34:30.627484 containerd[1634]: time="2025-11-06T00:34:30.626631706Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Nov 6 00:34:30.627484 containerd[1634]: time="2025-11-06T00:34:30.626670609Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Nov 6 00:34:30.627484 containerd[1634]: time="2025-11-06T00:34:30.626698031Z" level=info msg="runtime interface created"
Nov 6 00:34:30.627484 containerd[1634]: time="2025-11-06T00:34:30.626707759Z" level=info msg="created NRI interface"
Nov 6 00:34:30.627484 containerd[1634]: time="2025-11-06T00:34:30.626723288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Nov 6 00:34:30.627484 containerd[1634]: time="2025-11-06T00:34:30.626737615Z" level=info msg="Connect containerd service"
Nov 6 00:34:30.627484 containerd[1634]: time="2025-11-06T00:34:30.626763213Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 6 00:34:30.629322 containerd[1634]:
time="2025-11-06T00:34:30.628677022Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 00:34:30.710949 sshd_keygen[1628]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 6 00:34:30.743600 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 6 00:34:30.750011 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 6 00:34:30.758750 systemd[1]: Started sshd@0-10.0.0.137:22-10.0.0.1:36582.service - OpenSSH per-connection server daemon (10.0.0.1:36582). Nov 6 00:34:30.773134 systemd[1]: issuegen.service: Deactivated successfully. Nov 6 00:34:30.773423 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 6 00:34:30.779837 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 6 00:34:30.806729 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 6 00:34:30.814981 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Nov 6 00:34:30.824865 containerd[1634]: time="2025-11-06T00:34:30.815937743Z" level=info msg="Start subscribing containerd event" Nov 6 00:34:30.824865 containerd[1634]: time="2025-11-06T00:34:30.816005550Z" level=info msg="Start recovering state" Nov 6 00:34:30.824865 containerd[1634]: time="2025-11-06T00:34:30.816136776Z" level=info msg="Start event monitor" Nov 6 00:34:30.824865 containerd[1634]: time="2025-11-06T00:34:30.817046041Z" level=info msg="Start cni network conf syncer for default" Nov 6 00:34:30.824865 containerd[1634]: time="2025-11-06T00:34:30.817055609Z" level=info msg="Start streaming server" Nov 6 00:34:30.824865 containerd[1634]: time="2025-11-06T00:34:30.817386450Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 6 00:34:30.824865 containerd[1634]: time="2025-11-06T00:34:30.817398693Z" level=info msg="runtime interface starting up..." Nov 6 00:34:30.824865 containerd[1634]: time="2025-11-06T00:34:30.817405696Z" level=info msg="starting plugins..." Nov 6 00:34:30.824865 containerd[1634]: time="2025-11-06T00:34:30.817422507Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 6 00:34:30.824865 containerd[1634]: time="2025-11-06T00:34:30.817516624Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 00:34:30.824865 containerd[1634]: time="2025-11-06T00:34:30.818461736Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 6 00:34:30.824865 containerd[1634]: time="2025-11-06T00:34:30.818720071Z" level=info msg="containerd successfully booted in 0.225804s" Nov 6 00:34:30.819165 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 6 00:34:30.821613 systemd[1]: Reached target getty.target - Login Prompts. Nov 6 00:34:30.824147 systemd[1]: Started containerd.service - containerd container runtime. 
Nov 6 00:34:30.862061 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 36582 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:34:30.864441 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:34:30.872184 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 00:34:30.875346 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 00:34:30.884981 systemd-logind[1613]: New session 1 of user core. Nov 6 00:34:30.901415 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 00:34:30.901818 tar[1627]: linux-amd64/README.md Nov 6 00:34:30.909001 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 6 00:34:30.919258 (systemd)[1739]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 00:34:30.921670 systemd-logind[1613]: New session c1 of user core. Nov 6 00:34:30.923115 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 6 00:34:31.095583 systemd[1739]: Queued start job for default target default.target. Nov 6 00:34:31.114138 systemd[1739]: Created slice app.slice - User Application Slice. Nov 6 00:34:31.114168 systemd[1739]: Reached target paths.target - Paths. Nov 6 00:34:31.114214 systemd[1739]: Reached target timers.target - Timers. Nov 6 00:34:31.115909 systemd[1739]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 00:34:31.128575 systemd[1739]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 00:34:31.128767 systemd[1739]: Reached target sockets.target - Sockets. Nov 6 00:34:31.128816 systemd[1739]: Reached target basic.target - Basic System. Nov 6 00:34:31.128863 systemd[1739]: Reached target default.target - Main User Target. Nov 6 00:34:31.128904 systemd[1739]: Startup finished in 199ms. Nov 6 00:34:31.129311 systemd[1]: Started user@500.service - User Manager for UID 500. 
Nov 6 00:34:31.139946 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 6 00:34:31.212428 systemd[1]: Started sshd@1-10.0.0.137:22-10.0.0.1:36596.service - OpenSSH per-connection server daemon (10.0.0.1:36596). Nov 6 00:34:31.274056 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 36596 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:34:31.275776 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:34:31.280482 systemd-logind[1613]: New session 2 of user core. Nov 6 00:34:31.288795 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 6 00:34:31.345041 sshd[1754]: Connection closed by 10.0.0.1 port 36596 Nov 6 00:34:31.345383 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Nov 6 00:34:31.354796 systemd[1]: sshd@1-10.0.0.137:22-10.0.0.1:36596.service: Deactivated successfully. Nov 6 00:34:31.356871 systemd[1]: session-2.scope: Deactivated successfully. Nov 6 00:34:31.357739 systemd-logind[1613]: Session 2 logged out. Waiting for processes to exit. Nov 6 00:34:31.361190 systemd[1]: Started sshd@2-10.0.0.137:22-10.0.0.1:36606.service - OpenSSH per-connection server daemon (10.0.0.1:36606). Nov 6 00:34:31.364482 systemd-logind[1613]: Removed session 2. Nov 6 00:34:31.418200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:34:31.420885 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 00:34:31.422104 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 36606 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:34:31.423369 systemd[1]: Startup finished in 2.610s (kernel) + 7.440s (initrd) + 5.942s (userspace) = 15.993s. Nov 6 00:34:31.424273 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:34:31.428953 systemd-logind[1613]: New session 3 of user core. 
Nov 6 00:34:31.435998 (kubelet)[1768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:34:31.436670 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 00:34:31.492331 sshd[1769]: Connection closed by 10.0.0.1 port 36606 Nov 6 00:34:31.492749 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Nov 6 00:34:31.498701 systemd[1]: sshd@2-10.0.0.137:22-10.0.0.1:36606.service: Deactivated successfully. Nov 6 00:34:31.501014 systemd[1]: session-3.scope: Deactivated successfully. Nov 6 00:34:31.502002 systemd-logind[1613]: Session 3 logged out. Waiting for processes to exit. Nov 6 00:34:31.503774 systemd-logind[1613]: Removed session 3. Nov 6 00:34:31.878430 kubelet[1768]: E1106 00:34:31.878354 1768 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:34:31.882823 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:34:31.883052 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:34:31.883514 systemd[1]: kubelet.service: Consumed 1.041s CPU time, 266.6M memory peak. Nov 6 00:34:41.511133 systemd[1]: Started sshd@3-10.0.0.137:22-10.0.0.1:51944.service - OpenSSH per-connection server daemon (10.0.0.1:51944). Nov 6 00:34:41.564298 sshd[1786]: Accepted publickey for core from 10.0.0.1 port 51944 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:34:41.566108 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:34:41.571015 systemd-logind[1613]: New session 4 of user core. Nov 6 00:34:41.580771 systemd[1]: Started session-4.scope - Session 4 of User core. 
Nov 6 00:34:41.635404 sshd[1789]: Connection closed by 10.0.0.1 port 51944 Nov 6 00:34:41.635748 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Nov 6 00:34:41.648197 systemd[1]: sshd@3-10.0.0.137:22-10.0.0.1:51944.service: Deactivated successfully. Nov 6 00:34:41.650448 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 00:34:41.651343 systemd-logind[1613]: Session 4 logged out. Waiting for processes to exit. Nov 6 00:34:41.654633 systemd[1]: Started sshd@4-10.0.0.137:22-10.0.0.1:51956.service - OpenSSH per-connection server daemon (10.0.0.1:51956). Nov 6 00:34:41.655332 systemd-logind[1613]: Removed session 4. Nov 6 00:34:41.707460 sshd[1795]: Accepted publickey for core from 10.0.0.1 port 51956 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:34:41.709510 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:34:41.715365 systemd-logind[1613]: New session 5 of user core. Nov 6 00:34:41.724827 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 6 00:34:41.777714 sshd[1798]: Connection closed by 10.0.0.1 port 51956 Nov 6 00:34:41.778239 sshd-session[1795]: pam_unix(sshd:session): session closed for user core Nov 6 00:34:41.801765 systemd[1]: sshd@4-10.0.0.137:22-10.0.0.1:51956.service: Deactivated successfully. Nov 6 00:34:41.804391 systemd[1]: session-5.scope: Deactivated successfully. Nov 6 00:34:41.805312 systemd-logind[1613]: Session 5 logged out. Waiting for processes to exit. Nov 6 00:34:41.809146 systemd[1]: Started sshd@5-10.0.0.137:22-10.0.0.1:51962.service - OpenSSH per-connection server daemon (10.0.0.1:51962). Nov 6 00:34:41.810934 systemd-logind[1613]: Removed session 5. 
Nov 6 00:34:41.863349 sshd[1804]: Accepted publickey for core from 10.0.0.1 port 51962 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:34:41.865233 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:34:41.870542 systemd-logind[1613]: New session 6 of user core. Nov 6 00:34:41.879758 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 6 00:34:41.884199 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 6 00:34:41.886055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:34:41.936069 sshd[1807]: Connection closed by 10.0.0.1 port 51962 Nov 6 00:34:41.936412 sshd-session[1804]: pam_unix(sshd:session): session closed for user core Nov 6 00:34:41.942851 systemd[1]: sshd@5-10.0.0.137:22-10.0.0.1:51962.service: Deactivated successfully. Nov 6 00:34:41.945178 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 00:34:41.946269 systemd-logind[1613]: Session 6 logged out. Waiting for processes to exit. Nov 6 00:34:41.949811 systemd[1]: Started sshd@6-10.0.0.137:22-10.0.0.1:51972.service - OpenSSH per-connection server daemon (10.0.0.1:51972). Nov 6 00:34:41.950753 systemd-logind[1613]: Removed session 6. Nov 6 00:34:42.019924 sshd[1816]: Accepted publickey for core from 10.0.0.1 port 51972 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:34:42.021923 sshd-session[1816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:34:42.027820 systemd-logind[1613]: New session 7 of user core. Nov 6 00:34:42.033855 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 6 00:34:42.102443 sudo[1821]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 00:34:42.102884 sudo[1821]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:34:42.118142 sudo[1821]: pam_unix(sudo:session): session closed for user root Nov 6 00:34:42.120069 sshd[1820]: Connection closed by 10.0.0.1 port 51972 Nov 6 00:34:42.120618 sshd-session[1816]: pam_unix(sshd:session): session closed for user core Nov 6 00:34:42.127067 systemd[1]: sshd@6-10.0.0.137:22-10.0.0.1:51972.service: Deactivated successfully. Nov 6 00:34:42.130067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:34:42.130864 systemd[1]: session-7.scope: Deactivated successfully. Nov 6 00:34:42.133095 systemd-logind[1613]: Session 7 logged out. Waiting for processes to exit. Nov 6 00:34:42.149985 (kubelet)[1829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:34:42.151238 systemd[1]: Started sshd@7-10.0.0.137:22-10.0.0.1:51984.service - OpenSSH per-connection server daemon (10.0.0.1:51984). Nov 6 00:34:42.152347 systemd-logind[1613]: Removed session 7. Nov 6 00:34:42.201781 kubelet[1829]: E1106 00:34:42.201699 1829 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:34:42.207996 sshd[1833]: Accepted publickey for core from 10.0.0.1 port 51984 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:34:42.208806 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:34:42.209052 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 6 00:34:42.209540 systemd[1]: kubelet.service: Consumed 250ms CPU time, 111.4M memory peak. Nov 6 00:34:42.209781 sshd-session[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:34:42.215269 systemd-logind[1613]: New session 8 of user core. Nov 6 00:34:42.235861 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 6 00:34:42.292782 sudo[1845]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 00:34:42.293170 sudo[1845]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:34:43.124633 sudo[1845]: pam_unix(sudo:session): session closed for user root Nov 6 00:34:43.134438 sudo[1844]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 00:34:43.134872 sudo[1844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:34:43.147150 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:34:43.202553 augenrules[1867]: No rules Nov 6 00:34:43.204399 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:34:43.204700 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:34:43.206037 sudo[1844]: pam_unix(sudo:session): session closed for user root Nov 6 00:34:43.208336 sshd[1843]: Connection closed by 10.0.0.1 port 51984 Nov 6 00:34:43.208723 sshd-session[1833]: pam_unix(sshd:session): session closed for user core Nov 6 00:34:43.218503 systemd[1]: sshd@7-10.0.0.137:22-10.0.0.1:51984.service: Deactivated successfully. Nov 6 00:34:43.220620 systemd[1]: session-8.scope: Deactivated successfully. Nov 6 00:34:43.221602 systemd-logind[1613]: Session 8 logged out. Waiting for processes to exit. Nov 6 00:34:43.225346 systemd[1]: Started sshd@8-10.0.0.137:22-10.0.0.1:51994.service - OpenSSH per-connection server daemon (10.0.0.1:51994). 
Nov 6 00:34:43.226112 systemd-logind[1613]: Removed session 8. Nov 6 00:34:43.285228 sshd[1876]: Accepted publickey for core from 10.0.0.1 port 51994 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:34:43.287216 sshd-session[1876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:34:43.292578 systemd-logind[1613]: New session 9 of user core. Nov 6 00:34:43.301820 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 6 00:34:43.358742 sudo[1880]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 00:34:43.359060 sudo[1880]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:34:43.763766 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 6 00:34:43.783986 (dockerd)[1901]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 00:34:44.068577 dockerd[1901]: time="2025-11-06T00:34:44.068401994Z" level=info msg="Starting up" Nov 6 00:34:44.069686 dockerd[1901]: time="2025-11-06T00:34:44.069623795Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 6 00:34:44.083788 dockerd[1901]: time="2025-11-06T00:34:44.083709641Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 6 00:34:44.138312 dockerd[1901]: time="2025-11-06T00:34:44.138233433Z" level=info msg="Loading containers: start." Nov 6 00:34:44.149829 kernel: Initializing XFRM netlink socket Nov 6 00:34:45.330290 systemd-networkd[1529]: docker0: Link UP Nov 6 00:34:45.816781 dockerd[1901]: time="2025-11-06T00:34:45.816706318Z" level=info msg="Loading containers: done." Nov 6 00:34:45.830886 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2592489935-merged.mount: Deactivated successfully. 
Nov 6 00:34:46.073155 dockerd[1901]: time="2025-11-06T00:34:46.073010140Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 00:34:46.073155 dockerd[1901]: time="2025-11-06T00:34:46.073119555Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 6 00:34:46.073338 dockerd[1901]: time="2025-11-06T00:34:46.073240201Z" level=info msg="Initializing buildkit" Nov 6 00:34:46.345364 dockerd[1901]: time="2025-11-06T00:34:46.345203940Z" level=info msg="Completed buildkit initialization" Nov 6 00:34:46.351159 dockerd[1901]: time="2025-11-06T00:34:46.351103294Z" level=info msg="Daemon has completed initialization" Nov 6 00:34:46.351305 dockerd[1901]: time="2025-11-06T00:34:46.351207379Z" level=info msg="API listen on /run/docker.sock" Nov 6 00:34:46.351462 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 00:34:47.135223 containerd[1634]: time="2025-11-06T00:34:47.135155141Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 6 00:34:48.118626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount988735751.mount: Deactivated successfully. 
Nov 6 00:34:49.752753 containerd[1634]: time="2025-11-06T00:34:49.752665898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:34:49.753726 containerd[1634]: time="2025-11-06T00:34:49.753631248Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Nov 6 00:34:49.755498 containerd[1634]: time="2025-11-06T00:34:49.755440932Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:34:49.760588 containerd[1634]: time="2025-11-06T00:34:49.760526750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:34:49.761400 containerd[1634]: time="2025-11-06T00:34:49.761342119Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.626136814s" Nov 6 00:34:49.761400 containerd[1634]: time="2025-11-06T00:34:49.761386502Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 6 00:34:49.762010 containerd[1634]: time="2025-11-06T00:34:49.761976228Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 6 00:34:50.924800 containerd[1634]: time="2025-11-06T00:34:50.924705300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:34:50.925558 containerd[1634]: time="2025-11-06T00:34:50.925514878Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Nov 6 00:34:50.926965 containerd[1634]: time="2025-11-06T00:34:50.926920474Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:34:50.929958 containerd[1634]: time="2025-11-06T00:34:50.929882118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:34:50.931340 containerd[1634]: time="2025-11-06T00:34:50.931301791Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.169295918s" Nov 6 00:34:50.931340 containerd[1634]: time="2025-11-06T00:34:50.931333059Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 6 00:34:50.931985 containerd[1634]: time="2025-11-06T00:34:50.931941681Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 6 00:34:52.423427 containerd[1634]: time="2025-11-06T00:34:52.423297976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:34:52.424215 containerd[1634]: time="2025-11-06T00:34:52.424142410Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Nov 6 00:34:52.425510 containerd[1634]: time="2025-11-06T00:34:52.425446165Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:34:52.428592 containerd[1634]: time="2025-11-06T00:34:52.428561687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:34:52.429584 containerd[1634]: time="2025-11-06T00:34:52.429531216Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.497562154s" Nov 6 00:34:52.429584 containerd[1634]: time="2025-11-06T00:34:52.429560871Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 6 00:34:52.430403 containerd[1634]: time="2025-11-06T00:34:52.430369247Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 6 00:34:52.454678 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 6 00:34:52.457303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:34:52.696041 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 6 00:34:52.700598 (kubelet)[2194]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:34:52.739527 kubelet[2194]: E1106 00:34:52.739431 2194 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:34:52.744157 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:34:52.744413 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:34:52.744919 systemd[1]: kubelet.service: Consumed 250ms CPU time, 114.7M memory peak. Nov 6 00:34:53.742714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1140049056.mount: Deactivated successfully. Nov 6 00:34:54.505212 containerd[1634]: time="2025-11-06T00:34:54.505117511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:34:54.505930 containerd[1634]: time="2025-11-06T00:34:54.505894768Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Nov 6 00:34:54.507020 containerd[1634]: time="2025-11-06T00:34:54.506974263Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:34:54.509141 containerd[1634]: time="2025-11-06T00:34:54.509093798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:34:54.509876 containerd[1634]: time="2025-11-06T00:34:54.509826432Z" level=info msg="Pulled 
image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.079421578s" Nov 6 00:34:54.509916 containerd[1634]: time="2025-11-06T00:34:54.509877538Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 6 00:34:54.510454 containerd[1634]: time="2025-11-06T00:34:54.510396120Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 6 00:34:55.127856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2857268139.mount: Deactivated successfully. Nov 6 00:34:56.305908 containerd[1634]: time="2025-11-06T00:34:56.305795194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:34:56.306811 containerd[1634]: time="2025-11-06T00:34:56.306659034Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Nov 6 00:34:56.308141 containerd[1634]: time="2025-11-06T00:34:56.308067105Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:34:56.310899 containerd[1634]: time="2025-11-06T00:34:56.310836719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:34:56.312042 containerd[1634]: time="2025-11-06T00:34:56.311987747Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id 
\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.801554918s" Nov 6 00:34:56.312042 containerd[1634]: time="2025-11-06T00:34:56.312023274Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 6 00:34:56.312772 containerd[1634]: time="2025-11-06T00:34:56.312543059Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 6 00:34:56.859371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3775256979.mount: Deactivated successfully. Nov 6 00:34:56.864583 containerd[1634]: time="2025-11-06T00:34:56.864534891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:34:56.865358 containerd[1634]: time="2025-11-06T00:34:56.865314923Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 6 00:34:56.866479 containerd[1634]: time="2025-11-06T00:34:56.866429263Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:34:56.868550 containerd[1634]: time="2025-11-06T00:34:56.868496951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:34:56.869153 containerd[1634]: time="2025-11-06T00:34:56.869113868Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 556.520455ms" Nov 6 00:34:56.869153 containerd[1634]: time="2025-11-06T00:34:56.869141870Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 6 00:34:56.869595 containerd[1634]: time="2025-11-06T00:34:56.869574131Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 6 00:34:57.493966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3556611467.mount: Deactivated successfully. Nov 6 00:34:59.091829 containerd[1634]: time="2025-11-06T00:34:59.091757238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:34:59.092980 containerd[1634]: time="2025-11-06T00:34:59.092942470Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Nov 6 00:34:59.094215 containerd[1634]: time="2025-11-06T00:34:59.094179320Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:34:59.097317 containerd[1634]: time="2025-11-06T00:34:59.097272551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:34:59.098325 containerd[1634]: time="2025-11-06T00:34:59.098287644Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag 
\"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.228688937s" Nov 6 00:34:59.098369 containerd[1634]: time="2025-11-06T00:34:59.098326127Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 6 00:35:02.840431 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 6 00:35:02.842461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:35:02.857598 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 6 00:35:02.857753 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 6 00:35:02.858117 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:35:02.861100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:35:02.888875 systemd[1]: Reload requested from client PID 2357 ('systemctl') (unit session-9.scope)... Nov 6 00:35:02.888902 systemd[1]: Reloading... Nov 6 00:35:02.985682 zram_generator::config[2403]: No configuration found. Nov 6 00:35:03.697553 systemd[1]: Reloading finished in 808 ms. Nov 6 00:35:03.766323 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 6 00:35:03.766423 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 6 00:35:03.766752 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:35:03.766803 systemd[1]: kubelet.service: Consumed 169ms CPU time, 98.2M memory peak. Nov 6 00:35:03.768392 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:35:03.956157 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 6 00:35:03.970079 (kubelet)[2448]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:35:04.008164 kubelet[2448]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:35:04.008164 kubelet[2448]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 00:35:04.008164 kubelet[2448]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:35:04.008597 kubelet[2448]: I1106 00:35:04.008209 2448 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:35:04.686496 kubelet[2448]: I1106 00:35:04.686435 2448 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 6 00:35:04.686496 kubelet[2448]: I1106 00:35:04.686469 2448 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:35:04.686772 kubelet[2448]: I1106 00:35:04.686739 2448 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 00:35:04.720834 kubelet[2448]: E1106 00:35:04.720763 2448 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 00:35:04.721402 kubelet[2448]: I1106 00:35:04.721339 2448 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:35:04.729225 kubelet[2448]: I1106 00:35:04.729188 2448 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:35:04.735022 kubelet[2448]: I1106 00:35:04.734977 2448 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 6 00:35:04.735304 kubelet[2448]: I1106 00:35:04.735269 2448 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:35:04.735448 kubelet[2448]: I1106 00:35:04.735297 2448 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerR
econcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:35:04.735611 kubelet[2448]: I1106 00:35:04.735454 2448 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 00:35:04.735611 kubelet[2448]: I1106 00:35:04.735462 2448 container_manager_linux.go:303] "Creating device plugin manager" Nov 6 00:35:04.736296 kubelet[2448]: I1106 00:35:04.736267 2448 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:35:04.738614 kubelet[2448]: I1106 00:35:04.738561 2448 kubelet.go:480] "Attempting to sync node with API server" Nov 6 00:35:04.738614 kubelet[2448]: I1106 00:35:04.738591 2448 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:35:04.738614 kubelet[2448]: I1106 00:35:04.738620 2448 kubelet.go:386] "Adding apiserver pod source" Nov 6 00:35:04.738819 kubelet[2448]: I1106 00:35:04.738675 2448 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:35:04.743722 kubelet[2448]: E1106 00:35:04.743677 2448 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 00:35:04.744004 kubelet[2448]: E1106 00:35:04.743968 2448 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 00:35:04.746458 kubelet[2448]: I1106 00:35:04.746423 2448 
kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:35:04.747018 kubelet[2448]: I1106 00:35:04.746996 2448 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 00:35:04.748132 kubelet[2448]: W1106 00:35:04.748087 2448 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 6 00:35:04.751921 kubelet[2448]: I1106 00:35:04.751897 2448 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 00:35:04.751996 kubelet[2448]: I1106 00:35:04.751978 2448 server.go:1289] "Started kubelet" Nov 6 00:35:04.753436 kubelet[2448]: I1106 00:35:04.753343 2448 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 00:35:04.754358 kubelet[2448]: I1106 00:35:04.754327 2448 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:35:04.755658 kubelet[2448]: I1106 00:35:04.754434 2448 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:35:04.755658 kubelet[2448]: I1106 00:35:04.755090 2448 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:35:04.755658 kubelet[2448]: I1106 00:35:04.755216 2448 server.go:317] "Adding debug handlers to kubelet server" Nov 6 00:35:04.756032 kubelet[2448]: I1106 00:35:04.756008 2448 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:35:04.759468 kubelet[2448]: E1106 00:35:04.758614 2448 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:35:04.759468 kubelet[2448]: I1106 00:35:04.758655 2448 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 
00:35:04.759468 kubelet[2448]: I1106 00:35:04.758853 2448 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 00:35:04.759468 kubelet[2448]: I1106 00:35:04.758901 2448 reconciler.go:26] "Reconciler: start to sync state" Nov 6 00:35:04.759468 kubelet[2448]: E1106 00:35:04.759240 2448 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 00:35:04.759667 kubelet[2448]: I1106 00:35:04.759542 2448 factory.go:223] Registration of the systemd container factory successfully Nov 6 00:35:04.759667 kubelet[2448]: I1106 00:35:04.759611 2448 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:35:04.761272 kubelet[2448]: E1106 00:35:04.761244 2448 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 00:35:04.762436 kubelet[2448]: E1106 00:35:04.762401 2448 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="200ms" Nov 6 00:35:04.762788 kubelet[2448]: I1106 00:35:04.762769 2448 factory.go:223] Registration of the containerd container factory successfully Nov 6 00:35:04.766158 kubelet[2448]: E1106 00:35:04.763006 2448 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.137:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.137:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187543bc31af9bd8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-06 00:35:04.751922136 +0000 UTC m=+0.777482561,LastTimestamp:2025-11-06 00:35:04.751922136 +0000 UTC m=+0.777482561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 6 00:35:04.776155 kubelet[2448]: I1106 00:35:04.776109 2448 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:35:04.776155 kubelet[2448]: I1106 00:35:04.776144 2448 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:35:04.776155 kubelet[2448]: I1106 00:35:04.776166 2448 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:35:04.782556 kubelet[2448]: I1106 00:35:04.782522 2448 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Nov 6 00:35:04.784784 kubelet[2448]: I1106 00:35:04.784765 2448 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 6 00:35:04.784784 kubelet[2448]: I1106 00:35:04.784783 2448 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 6 00:35:04.784850 kubelet[2448]: I1106 00:35:04.784804 2448 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 6 00:35:04.784850 kubelet[2448]: I1106 00:35:04.784811 2448 kubelet.go:2436] "Starting kubelet main sync loop" Nov 6 00:35:04.785487 kubelet[2448]: E1106 00:35:04.784848 2448 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 00:35:04.785487 kubelet[2448]: E1106 00:35:04.785459 2448 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 00:35:04.859544 kubelet[2448]: E1106 00:35:04.859487 2448 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:35:04.885843 kubelet[2448]: E1106 00:35:04.885802 2448 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 6 00:35:04.960124 kubelet[2448]: E1106 00:35:04.960078 2448 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:35:04.963735 kubelet[2448]: E1106 00:35:04.963706 2448 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.137:6443: connect: connection refused" interval="400ms" Nov 6 00:35:05.060535 kubelet[2448]: E1106 00:35:05.060494 2448 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:35:05.086849 kubelet[2448]: E1106 00:35:05.086806 2448 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 6 00:35:05.161217 kubelet[2448]: E1106 00:35:05.161175 2448 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:35:05.201996 kubelet[2448]: I1106 00:35:05.201942 2448 policy_none.go:49] "None policy: Start" Nov 6 00:35:05.201996 kubelet[2448]: I1106 00:35:05.201988 2448 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 00:35:05.202129 kubelet[2448]: I1106 00:35:05.202007 2448 state_mem.go:35] "Initializing new in-memory state store" Nov 6 00:35:05.209469 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 6 00:35:05.223435 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 6 00:35:05.227283 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
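The slice names systemd creates here encode the kubelet's cgroup layout: per-QoS parents (kubepods-burstable.slice, kubepods-besteffort.slice) under kubepods.slice, and later per-pod children such as kubepods-burstable-pod8a997cbfbfe51acb4a9e341bf78fc274.slice. A small Python sketch that splits a per-pod slice name back into QoS class and pod UID, assuming only the naming pattern seen in this log (guaranteed pods, which have no QoS segment, sit directly under kubepods.slice); `parse_pod_slice` is an illustrative name:

```python
import re

# Per-pod slice names as they appear in this log, e.g.:
#   kubepods-burstable-pod8a997cbfbfe51acb4a9e341bf78fc274.slice
# The QoS segment is absent for guaranteed pods.
SLICE_RE = re.compile(
    r'^kubepods-(?:(burstable|besteffort)-)?pod([0-9a-f_]+)\.slice$'
)

def parse_pod_slice(name: str):
    """Return (qos_class, pod_uid) for a per-pod kubepods slice, else None."""
    m = SLICE_RE.match(name)
    if not m:
        return None
    qos = m.group(1) or "guaranteed"
    return qos, m.group(2)
```

The QoS parent slices themselves (kubepods-burstable.slice) carry no pod UID and are deliberately not matched.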
Nov 6 00:35:05.236843 kubelet[2448]: E1106 00:35:05.236795 2448 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 00:35:05.237058 kubelet[2448]: I1106 00:35:05.237036 2448 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:35:05.237101 kubelet[2448]: I1106 00:35:05.237049 2448 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:35:05.237306 kubelet[2448]: I1106 00:35:05.237287 2448 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 00:35:05.237999 kubelet[2448]: E1106 00:35:05.237971 2448 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 6 00:35:05.238050 kubelet[2448]: E1106 00:35:05.238013 2448 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 6 00:35:05.339253 kubelet[2448]: I1106 00:35:05.339201 2448 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:35:05.339603 kubelet[2448]: E1106 00:35:05.339578 2448 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Nov 6 00:35:05.364566 kubelet[2448]: E1106 00:35:05.364503 2448 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="800ms" Nov 6 00:35:05.501214 systemd[1]: Created slice kubepods-burstable-pod8a997cbfbfe51acb4a9e341bf78fc274.slice - libcontainer container kubepods-burstable-pod8a997cbfbfe51acb4a9e341bf78fc274.slice. 
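Nearly every kubelet error above shares one root cause: nothing is listening on 10.0.0.137:6443 yet, because the kubelet itself has not started the static kube-apiserver pod. A minimal Python sketch that tallies the failing endpoints out of such Go net error strings, assuming only the "dial tcp … connection refused" form repeated throughout this log; `refused_endpoints` is an illustrative helper:

```python
import re

# Matches Go net errors of the form seen throughout this log:
#   dial tcp 10.0.0.137:6443: connect: connection refused
DIAL_RE = re.compile(r'dial tcp ([\d.]+):(\d+): connect: connection refused')

def refused_endpoints(lines):
    """Count connection-refused errors per host:port across log lines."""
    counts = {}
    for line in lines:
        for host, port in DIAL_RE.findall(line):
            key = f"{host}:{port}"
            counts[key] = counts.get(key, 0) + 1
    return counts
```

On a bootstrapping control-plane node a single dominant endpoint in this tally (here the API server address) points at startup ordering rather than a network fault.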
Nov 6 00:35:05.512948 kubelet[2448]: E1106 00:35:05.512869 2448 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:35:05.516142 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Nov 6 00:35:05.528281 kubelet[2448]: E1106 00:35:05.528237 2448 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:35:05.531432 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. Nov 6 00:35:05.533377 kubelet[2448]: E1106 00:35:05.533343 2448 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:35:05.541418 kubelet[2448]: I1106 00:35:05.541374 2448 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:35:05.541752 kubelet[2448]: E1106 00:35:05.541717 2448 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Nov 6 00:35:05.562304 kubelet[2448]: I1106 00:35:05.562257 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:35:05.562359 kubelet[2448]: I1106 00:35:05.562312 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/8a997cbfbfe51acb4a9e341bf78fc274-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8a997cbfbfe51acb4a9e341bf78fc274\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:35:05.562422 kubelet[2448]: I1106 00:35:05.562343 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a997cbfbfe51acb4a9e341bf78fc274-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8a997cbfbfe51acb4a9e341bf78fc274\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:35:05.562519 kubelet[2448]: I1106 00:35:05.562429 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:35:05.562519 kubelet[2448]: I1106 00:35:05.562454 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:35:05.562519 kubelet[2448]: I1106 00:35:05.562473 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:35:05.562585 kubelet[2448]: I1106 00:35:05.562513 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:35:05.562585 kubelet[2448]: I1106 00:35:05.562564 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 6 00:35:05.562660 kubelet[2448]: I1106 00:35:05.562584 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a997cbfbfe51acb4a9e341bf78fc274-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8a997cbfbfe51acb4a9e341bf78fc274\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:35:05.706666 kubelet[2448]: E1106 00:35:05.706583 2448 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 00:35:05.799300 kubelet[2448]: E1106 00:35:05.799170 2448 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 00:35:05.813692 kubelet[2448]: E1106 00:35:05.813656 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:05.814383 containerd[1634]: time="2025-11-06T00:35:05.814344473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8a997cbfbfe51acb4a9e341bf78fc274,Namespace:kube-system,Attempt:0,}" Nov 6 00:35:05.829754 kubelet[2448]: E1106 00:35:05.829700 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:05.830281 containerd[1634]: time="2025-11-06T00:35:05.830225141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Nov 6 00:35:05.834746 kubelet[2448]: E1106 00:35:05.834498 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:05.835014 containerd[1634]: time="2025-11-06T00:35:05.834975360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Nov 6 00:35:05.837675 containerd[1634]: time="2025-11-06T00:35:05.837497151Z" level=info msg="connecting to shim b1702b63845321a4c75d84dc4be536c94ebabe19bddf6b308cb9a5475b7d571d" address="unix:///run/containerd/s/52c122ead8b2e4d63edcdf2516448722783f6f3be8f37ea92ca06e7fcc300b76" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:35:05.868811 containerd[1634]: time="2025-11-06T00:35:05.868761431Z" level=info msg="connecting to shim d2b2813fee0d445b30e89adf5d84eb5d57a7eb5f02f5b4ccb826534b8029d726" address="unix:///run/containerd/s/49122fa9efed735931689b0913e4a50adad1550b56d4bb8284b63482cb6fde29" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:35:05.869534 containerd[1634]: time="2025-11-06T00:35:05.869498260Z" level=info msg="connecting to shim 
56abebebfde907ffa8596e6d50882fb2d0bdef9069626c5928270557de3803f3" address="unix:///run/containerd/s/98a65d3285bb8edf569a4b126aa9230eeffbb238ad13016e861dbab83c0b6f01" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:35:05.879945 systemd[1]: Started cri-containerd-b1702b63845321a4c75d84dc4be536c94ebabe19bddf6b308cb9a5475b7d571d.scope - libcontainer container b1702b63845321a4c75d84dc4be536c94ebabe19bddf6b308cb9a5475b7d571d. Nov 6 00:35:05.904833 systemd[1]: Started cri-containerd-d2b2813fee0d445b30e89adf5d84eb5d57a7eb5f02f5b4ccb826534b8029d726.scope - libcontainer container d2b2813fee0d445b30e89adf5d84eb5d57a7eb5f02f5b4ccb826534b8029d726. Nov 6 00:35:05.910077 systemd[1]: Started cri-containerd-56abebebfde907ffa8596e6d50882fb2d0bdef9069626c5928270557de3803f3.scope - libcontainer container 56abebebfde907ffa8596e6d50882fb2d0bdef9069626c5928270557de3803f3. Nov 6 00:35:05.943565 kubelet[2448]: I1106 00:35:05.943466 2448 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:35:05.943921 kubelet[2448]: E1106 00:35:05.943887 2448 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Nov 6 00:35:05.954526 containerd[1634]: time="2025-11-06T00:35:05.954463390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8a997cbfbfe51acb4a9e341bf78fc274,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1702b63845321a4c75d84dc4be536c94ebabe19bddf6b308cb9a5475b7d571d\"" Nov 6 00:35:05.955782 kubelet[2448]: E1106 00:35:05.955734 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:05.962074 containerd[1634]: time="2025-11-06T00:35:05.962011300Z" level=info msg="CreateContainer within sandbox 
\"b1702b63845321a4c75d84dc4be536c94ebabe19bddf6b308cb9a5475b7d571d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 6 00:35:05.970951 containerd[1634]: time="2025-11-06T00:35:05.970890317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"56abebebfde907ffa8596e6d50882fb2d0bdef9069626c5928270557de3803f3\"" Nov 6 00:35:05.972229 kubelet[2448]: E1106 00:35:05.972113 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:05.973023 containerd[1634]: time="2025-11-06T00:35:05.972867411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2b2813fee0d445b30e89adf5d84eb5d57a7eb5f02f5b4ccb826534b8029d726\"" Nov 6 00:35:05.973079 kubelet[2448]: E1106 00:35:05.972968 2448 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 00:35:05.973625 kubelet[2448]: E1106 00:35:05.973578 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:05.977871 containerd[1634]: time="2025-11-06T00:35:05.977840481Z" level=info msg="CreateContainer within sandbox \"56abebebfde907ffa8596e6d50882fb2d0bdef9069626c5928270557de3803f3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 6 00:35:05.982137 containerd[1634]: time="2025-11-06T00:35:05.981189160Z" level=info 
msg="CreateContainer within sandbox \"d2b2813fee0d445b30e89adf5d84eb5d57a7eb5f02f5b4ccb826534b8029d726\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 00:35:05.982209 containerd[1634]: time="2025-11-06T00:35:05.982155750Z" level=info msg="Container e1cdbad9743a012c6c9eed9d6648414102d4b570c305a718af9763f292c61b87: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:35:05.993588 containerd[1634]: time="2025-11-06T00:35:05.993545339Z" level=info msg="Container 811a543d1ff891a09a4246ca2fe6aee0c61b71ff9107e45ec8589cb20a98c65a: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:35:05.995590 containerd[1634]: time="2025-11-06T00:35:05.995550536Z" level=info msg="Container 91cf617cb227e2a3d9b4dabaa5e697842adfc4f1f0abe814b264ffddb3db12b0: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:35:05.998431 containerd[1634]: time="2025-11-06T00:35:05.998383192Z" level=info msg="CreateContainer within sandbox \"b1702b63845321a4c75d84dc4be536c94ebabe19bddf6b308cb9a5475b7d571d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e1cdbad9743a012c6c9eed9d6648414102d4b570c305a718af9763f292c61b87\"" Nov 6 00:35:05.999180 containerd[1634]: time="2025-11-06T00:35:05.999136482Z" level=info msg="StartContainer for \"e1cdbad9743a012c6c9eed9d6648414102d4b570c305a718af9763f292c61b87\"" Nov 6 00:35:06.000326 containerd[1634]: time="2025-11-06T00:35:06.000300845Z" level=info msg="connecting to shim e1cdbad9743a012c6c9eed9d6648414102d4b570c305a718af9763f292c61b87" address="unix:///run/containerd/s/52c122ead8b2e4d63edcdf2516448722783f6f3be8f37ea92ca06e7fcc300b76" protocol=ttrpc version=3 Nov 6 00:35:06.002235 containerd[1634]: time="2025-11-06T00:35:06.002186818Z" level=info msg="CreateContainer within sandbox \"56abebebfde907ffa8596e6d50882fb2d0bdef9069626c5928270557de3803f3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"811a543d1ff891a09a4246ca2fe6aee0c61b71ff9107e45ec8589cb20a98c65a\"" Nov 6 
00:35:06.002904 containerd[1634]: time="2025-11-06T00:35:06.002874483Z" level=info msg="StartContainer for \"811a543d1ff891a09a4246ca2fe6aee0c61b71ff9107e45ec8589cb20a98c65a\"" Nov 6 00:35:06.004197 containerd[1634]: time="2025-11-06T00:35:06.004154984Z" level=info msg="connecting to shim 811a543d1ff891a09a4246ca2fe6aee0c61b71ff9107e45ec8589cb20a98c65a" address="unix:///run/containerd/s/98a65d3285bb8edf569a4b126aa9230eeffbb238ad13016e861dbab83c0b6f01" protocol=ttrpc version=3 Nov 6 00:35:06.005147 containerd[1634]: time="2025-11-06T00:35:06.005080378Z" level=info msg="CreateContainer within sandbox \"d2b2813fee0d445b30e89adf5d84eb5d57a7eb5f02f5b4ccb826534b8029d726\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"91cf617cb227e2a3d9b4dabaa5e697842adfc4f1f0abe814b264ffddb3db12b0\"" Nov 6 00:35:06.005906 containerd[1634]: time="2025-11-06T00:35:06.005668916Z" level=info msg="StartContainer for \"91cf617cb227e2a3d9b4dabaa5e697842adfc4f1f0abe814b264ffddb3db12b0\"" Nov 6 00:35:06.007077 containerd[1634]: time="2025-11-06T00:35:06.007045088Z" level=info msg="connecting to shim 91cf617cb227e2a3d9b4dabaa5e697842adfc4f1f0abe814b264ffddb3db12b0" address="unix:///run/containerd/s/49122fa9efed735931689b0913e4a50adad1550b56d4bb8284b63482cb6fde29" protocol=ttrpc version=3 Nov 6 00:35:06.028854 systemd[1]: Started cri-containerd-e1cdbad9743a012c6c9eed9d6648414102d4b570c305a718af9763f292c61b87.scope - libcontainer container e1cdbad9743a012c6c9eed9d6648414102d4b570c305a718af9763f292c61b87. Nov 6 00:35:06.033365 systemd[1]: Started cri-containerd-811a543d1ff891a09a4246ca2fe6aee0c61b71ff9107e45ec8589cb20a98c65a.scope - libcontainer container 811a543d1ff891a09a4246ca2fe6aee0c61b71ff9107e45ec8589cb20a98c65a. Nov 6 00:35:06.035982 systemd[1]: Started cri-containerd-91cf617cb227e2a3d9b4dabaa5e697842adfc4f1f0abe814b264ffddb3db12b0.scope - libcontainer container 91cf617cb227e2a3d9b4dabaa5e697842adfc4f1f0abe814b264ffddb3db12b0. 
Nov 6 00:35:06.089749 containerd[1634]: time="2025-11-06T00:35:06.088927867Z" level=info msg="StartContainer for \"e1cdbad9743a012c6c9eed9d6648414102d4b570c305a718af9763f292c61b87\" returns successfully" Nov 6 00:35:06.096373 containerd[1634]: time="2025-11-06T00:35:06.096308287Z" level=info msg="StartContainer for \"811a543d1ff891a09a4246ca2fe6aee0c61b71ff9107e45ec8589cb20a98c65a\" returns successfully" Nov 6 00:35:06.104820 containerd[1634]: time="2025-11-06T00:35:06.104768763Z" level=info msg="StartContainer for \"91cf617cb227e2a3d9b4dabaa5e697842adfc4f1f0abe814b264ffddb3db12b0\" returns successfully" Nov 6 00:35:06.749677 kubelet[2448]: I1106 00:35:06.748824 2448 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:35:06.794778 kubelet[2448]: E1106 00:35:06.794736 2448 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:35:06.795592 kubelet[2448]: E1106 00:35:06.794910 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:06.798104 kubelet[2448]: E1106 00:35:06.798081 2448 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:35:06.798200 kubelet[2448]: E1106 00:35:06.798181 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:06.798413 kubelet[2448]: E1106 00:35:06.798394 2448 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:35:06.798503 kubelet[2448]: E1106 00:35:06.798487 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:07.242443 kubelet[2448]: E1106 00:35:07.242386 2448 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 6 00:35:07.318213 kubelet[2448]: I1106 00:35:07.318164 2448 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 6 00:35:07.362380 kubelet[2448]: I1106 00:35:07.362319 2448 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 00:35:07.419417 kubelet[2448]: E1106 00:35:07.419194 2448 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 6 00:35:07.419417 kubelet[2448]: I1106 00:35:07.419224 2448 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 00:35:07.420734 kubelet[2448]: E1106 00:35:07.420717 2448 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 6 00:35:07.421345 kubelet[2448]: I1106 00:35:07.420802 2448 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 6 00:35:07.422229 kubelet[2448]: E1106 00:35:07.422196 2448 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 6 00:35:07.743271 kubelet[2448]: I1106 00:35:07.743217 2448 apiserver.go:52] "Watching apiserver" Nov 6 00:35:07.759953 kubelet[2448]: I1106 00:35:07.759927 2448 desired_state_of_world_populator.go:158] "Finished populating initial 
desired state of world" Nov 6 00:35:07.798774 kubelet[2448]: I1106 00:35:07.798743 2448 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 00:35:07.798929 kubelet[2448]: I1106 00:35:07.798821 2448 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 00:35:07.800250 kubelet[2448]: E1106 00:35:07.800223 2448 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 6 00:35:07.800386 kubelet[2448]: E1106 00:35:07.800368 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:07.800491 kubelet[2448]: E1106 00:35:07.800473 2448 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 6 00:35:07.800567 kubelet[2448]: E1106 00:35:07.800548 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:08.799934 kubelet[2448]: I1106 00:35:08.799889 2448 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 00:35:08.805306 kubelet[2448]: E1106 00:35:08.805269 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:09.409420 systemd[1]: Reload requested from client PID 2735 ('systemctl') (unit session-9.scope)... Nov 6 00:35:09.409438 systemd[1]: Reloading... Nov 6 00:35:09.489744 zram_generator::config[2780]: No configuration found. 
Nov 6 00:35:09.737211 systemd[1]: Reloading finished in 327 ms. Nov 6 00:35:09.772907 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:35:09.799383 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 00:35:09.800140 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:35:09.800279 systemd[1]: kubelet.service: Consumed 1.104s CPU time, 130.1M memory peak. Nov 6 00:35:09.803725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:35:10.071892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:35:10.088078 (kubelet)[2824]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:35:10.136930 kubelet[2824]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:35:10.136930 kubelet[2824]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 00:35:10.136930 kubelet[2824]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 6 00:35:10.137420 kubelet[2824]: I1106 00:35:10.136972 2824 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:35:10.145615 kubelet[2824]: I1106 00:35:10.145561 2824 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 6 00:35:10.145615 kubelet[2824]: I1106 00:35:10.145594 2824 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:35:10.145862 kubelet[2824]: I1106 00:35:10.145838 2824 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 00:35:10.147261 kubelet[2824]: I1106 00:35:10.147228 2824 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 6 00:35:10.151866 kubelet[2824]: I1106 00:35:10.151806 2824 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:35:10.156830 kubelet[2824]: I1106 00:35:10.156789 2824 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:35:10.162545 kubelet[2824]: I1106 00:35:10.162510 2824 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 6 00:35:10.162817 kubelet[2824]: I1106 00:35:10.162774 2824 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:35:10.162971 kubelet[2824]: I1106 00:35:10.162801 2824 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:35:10.163089 kubelet[2824]: I1106 00:35:10.162973 2824 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 00:35:10.163089 
kubelet[2824]: I1106 00:35:10.162983 2824 container_manager_linux.go:303] "Creating device plugin manager" Nov 6 00:35:10.163089 kubelet[2824]: I1106 00:35:10.163036 2824 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:35:10.163254 kubelet[2824]: I1106 00:35:10.163232 2824 kubelet.go:480] "Attempting to sync node with API server" Nov 6 00:35:10.163254 kubelet[2824]: I1106 00:35:10.163249 2824 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:35:10.163297 kubelet[2824]: I1106 00:35:10.163269 2824 kubelet.go:386] "Adding apiserver pod source" Nov 6 00:35:10.163297 kubelet[2824]: I1106 00:35:10.163286 2824 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:35:10.164185 kubelet[2824]: I1106 00:35:10.164153 2824 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:35:10.164668 kubelet[2824]: I1106 00:35:10.164623 2824 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 00:35:10.169276 kubelet[2824]: I1106 00:35:10.169251 2824 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 00:35:10.169354 kubelet[2824]: I1106 00:35:10.169293 2824 server.go:1289] "Started kubelet" Nov 6 00:35:10.169783 kubelet[2824]: I1106 00:35:10.169715 2824 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:35:10.169918 kubelet[2824]: I1106 00:35:10.169845 2824 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 00:35:10.170668 kubelet[2824]: I1106 00:35:10.170372 2824 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:35:10.170732 kubelet[2824]: I1106 00:35:10.170693 2824 server.go:317] "Adding debug handlers to kubelet server" Nov 6 00:35:10.174441 kubelet[2824]: I1106 
00:35:10.174420 2824 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:35:10.176667 kubelet[2824]: I1106 00:35:10.176407 2824 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:35:10.178688 kubelet[2824]: I1106 00:35:10.178599 2824 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 00:35:10.178801 kubelet[2824]: I1106 00:35:10.178781 2824 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 00:35:10.178906 kubelet[2824]: I1106 00:35:10.178890 2824 reconciler.go:26] "Reconciler: start to sync state" Nov 6 00:35:10.180180 kubelet[2824]: I1106 00:35:10.180116 2824 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:35:10.182373 kubelet[2824]: I1106 00:35:10.182256 2824 factory.go:223] Registration of the containerd container factory successfully Nov 6 00:35:10.182373 kubelet[2824]: I1106 00:35:10.182277 2824 factory.go:223] Registration of the systemd container factory successfully Nov 6 00:35:10.186259 kubelet[2824]: E1106 00:35:10.186234 2824 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 00:35:10.398672 kubelet[2824]: I1106 00:35:10.398015 2824 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 6 00:35:10.399589 kubelet[2824]: I1106 00:35:10.399563 2824 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 6 00:35:10.399792 kubelet[2824]: I1106 00:35:10.399743 2824 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 6 00:35:10.399792 kubelet[2824]: I1106 00:35:10.399779 2824 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 6 00:35:10.399792 kubelet[2824]: I1106 00:35:10.399796 2824 kubelet.go:2436] "Starting kubelet main sync loop" Nov 6 00:35:10.400062 kubelet[2824]: E1106 00:35:10.399849 2824 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 00:35:10.424443 kubelet[2824]: I1106 00:35:10.424411 2824 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:35:10.424443 kubelet[2824]: I1106 00:35:10.424429 2824 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:35:10.424443 kubelet[2824]: I1106 00:35:10.424449 2824 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:35:10.424715 kubelet[2824]: I1106 00:35:10.424574 2824 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 6 00:35:10.424715 kubelet[2824]: I1106 00:35:10.424587 2824 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 6 00:35:10.424715 kubelet[2824]: I1106 00:35:10.424603 2824 policy_none.go:49] "None policy: Start" Nov 6 00:35:10.424715 kubelet[2824]: I1106 00:35:10.424612 2824 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 00:35:10.424715 kubelet[2824]: I1106 00:35:10.424623 2824 state_mem.go:35] "Initializing new in-memory state store" Nov 6 00:35:10.424846 kubelet[2824]: I1106 00:35:10.424749 2824 state_mem.go:75] "Updated machine memory state" Nov 6 00:35:10.430006 kubelet[2824]: E1106 00:35:10.429979 2824 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 00:35:10.430471 kubelet[2824]: I1106 00:35:10.430150 
2824 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:35:10.430471 kubelet[2824]: I1106 00:35:10.430164 2824 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:35:10.430471 kubelet[2824]: I1106 00:35:10.430324 2824 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 00:35:10.431894 kubelet[2824]: E1106 00:35:10.431835 2824 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 6 00:35:10.501358 kubelet[2824]: I1106 00:35:10.501289 2824 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 00:35:10.501486 kubelet[2824]: I1106 00:35:10.501380 2824 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 6 00:35:10.501510 kubelet[2824]: I1106 00:35:10.501321 2824 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 00:35:10.536367 kubelet[2824]: I1106 00:35:10.536305 2824 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:35:10.591107 kubelet[2824]: I1106 00:35:10.591018 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a997cbfbfe51acb4a9e341bf78fc274-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8a997cbfbfe51acb4a9e341bf78fc274\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:35:10.591107 kubelet[2824]: I1106 00:35:10.591079 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " 
pod="kube-system/kube-controller-manager-localhost" Nov 6 00:35:10.591107 kubelet[2824]: I1106 00:35:10.591101 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:35:10.591417 kubelet[2824]: I1106 00:35:10.591138 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:35:10.591417 kubelet[2824]: I1106 00:35:10.591161 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 6 00:35:10.591417 kubelet[2824]: I1106 00:35:10.591199 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:35:10.591417 kubelet[2824]: I1106 00:35:10.591234 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:35:10.591417 kubelet[2824]: I1106 00:35:10.591268 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a997cbfbfe51acb4a9e341bf78fc274-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8a997cbfbfe51acb4a9e341bf78fc274\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:35:10.591560 kubelet[2824]: I1106 00:35:10.591288 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a997cbfbfe51acb4a9e341bf78fc274-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8a997cbfbfe51acb4a9e341bf78fc274\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:35:10.678025 kubelet[2824]: E1106 00:35:10.676671 2824 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 6 00:35:10.679380 kubelet[2824]: I1106 00:35:10.679341 2824 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 6 00:35:10.679446 kubelet[2824]: I1106 00:35:10.679439 2824 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 6 00:35:10.958010 kubelet[2824]: E1106 00:35:10.957923 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:10.958010 kubelet[2824]: E1106 00:35:10.957951 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:10.977385 kubelet[2824]: E1106 00:35:10.977321 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:11.164527 kubelet[2824]: I1106 00:35:11.164112 2824 apiserver.go:52] "Watching apiserver" Nov 6 00:35:11.179584 kubelet[2824]: I1106 00:35:11.179534 2824 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 00:35:11.387888 kubelet[2824]: I1106 00:35:11.387673 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.387655586 podStartE2EDuration="3.387655586s" podCreationTimestamp="2025-11-06 00:35:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:35:11.387449749 +0000 UTC m=+1.294434815" watchObservedRunningTime="2025-11-06 00:35:11.387655586 +0000 UTC m=+1.294640652" Nov 6 00:35:11.410934 kubelet[2824]: I1106 00:35:11.410782 2824 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 00:35:11.410934 kubelet[2824]: I1106 00:35:11.410844 2824 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 00:35:11.410934 kubelet[2824]: E1106 00:35:11.410874 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:11.496664 kubelet[2824]: E1106 00:35:11.496559 2824 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 6 00:35:11.496862 kubelet[2824]: E1106 00:35:11.496838 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:11.498660 kubelet[2824]: E1106 00:35:11.498440 2824 kubelet.go:3311] "Failed creating a mirror pod" 
err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 6 00:35:11.498844 kubelet[2824]: E1106 00:35:11.498823 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:11.561571 kubelet[2824]: I1106 00:35:11.561417 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.561356311 podStartE2EDuration="1.561356311s" podCreationTimestamp="2025-11-06 00:35:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:35:11.560112321 +0000 UTC m=+1.467097387" watchObservedRunningTime="2025-11-06 00:35:11.561356311 +0000 UTC m=+1.468341387" Nov 6 00:35:11.562440 kubelet[2824]: I1106 00:35:11.561676 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.561666023 podStartE2EDuration="1.561666023s" podCreationTimestamp="2025-11-06 00:35:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:35:11.496996848 +0000 UTC m=+1.403981924" watchObservedRunningTime="2025-11-06 00:35:11.561666023 +0000 UTC m=+1.468651109" Nov 6 00:35:12.413156 kubelet[2824]: E1106 00:35:12.413099 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:12.413857 kubelet[2824]: E1106 00:35:12.413350 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:13.414302 kubelet[2824]: E1106 00:35:13.414267 2824 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:14.416100 kubelet[2824]: E1106 00:35:14.416038 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:15.432211 update_engine[1617]: I20251106 00:35:15.432093 1617 update_attempter.cc:509] Updating boot flags... Nov 6 00:35:16.539414 kubelet[2824]: I1106 00:35:16.539369 2824 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 6 00:35:16.539958 containerd[1634]: time="2025-11-06T00:35:16.539766641Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 6 00:35:16.540256 kubelet[2824]: I1106 00:35:16.539955 2824 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 6 00:35:17.289606 systemd[1]: Created slice kubepods-besteffort-pod742cb697_ed21_44bf_9196_cb54f5de2da4.slice - libcontainer container kubepods-besteffort-pod742cb697_ed21_44bf_9196_cb54f5de2da4.slice. 
Nov 6 00:35:17.327911 kubelet[2824]: I1106 00:35:17.327828 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/742cb697-ed21-44bf-9196-cb54f5de2da4-kube-proxy\") pod \"kube-proxy-9gqpf\" (UID: \"742cb697-ed21-44bf-9196-cb54f5de2da4\") " pod="kube-system/kube-proxy-9gqpf" Nov 6 00:35:17.327911 kubelet[2824]: I1106 00:35:17.327896 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/742cb697-ed21-44bf-9196-cb54f5de2da4-xtables-lock\") pod \"kube-proxy-9gqpf\" (UID: \"742cb697-ed21-44bf-9196-cb54f5de2da4\") " pod="kube-system/kube-proxy-9gqpf" Nov 6 00:35:17.327911 kubelet[2824]: I1106 00:35:17.327919 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/742cb697-ed21-44bf-9196-cb54f5de2da4-lib-modules\") pod \"kube-proxy-9gqpf\" (UID: \"742cb697-ed21-44bf-9196-cb54f5de2da4\") " pod="kube-system/kube-proxy-9gqpf" Nov 6 00:35:17.328149 kubelet[2824]: I1106 00:35:17.327945 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzntz\" (UniqueName: \"kubernetes.io/projected/742cb697-ed21-44bf-9196-cb54f5de2da4-kube-api-access-kzntz\") pod \"kube-proxy-9gqpf\" (UID: \"742cb697-ed21-44bf-9196-cb54f5de2da4\") " pod="kube-system/kube-proxy-9gqpf" Nov 6 00:35:17.499017 kubelet[2824]: E1106 00:35:17.498295 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:17.508655 systemd[1]: Created slice kubepods-besteffort-pode16f7a34_23d1_4c2c_a2a1_76bd5446d471.slice - libcontainer container kubepods-besteffort-pode16f7a34_23d1_4c2c_a2a1_76bd5446d471.slice. 
Nov 6 00:35:17.528915 kubelet[2824]: I1106 00:35:17.528847 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dqrs\" (UniqueName: \"kubernetes.io/projected/e16f7a34-23d1-4c2c-a2a1-76bd5446d471-kube-api-access-2dqrs\") pod \"tigera-operator-7dcd859c48-kwrrs\" (UID: \"e16f7a34-23d1-4c2c-a2a1-76bd5446d471\") " pod="tigera-operator/tigera-operator-7dcd859c48-kwrrs" Nov 6 00:35:17.528915 kubelet[2824]: I1106 00:35:17.528898 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e16f7a34-23d1-4c2c-a2a1-76bd5446d471-var-lib-calico\") pod \"tigera-operator-7dcd859c48-kwrrs\" (UID: \"e16f7a34-23d1-4c2c-a2a1-76bd5446d471\") " pod="tigera-operator/tigera-operator-7dcd859c48-kwrrs" Nov 6 00:35:17.598410 kubelet[2824]: E1106 00:35:17.598256 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:17.599285 containerd[1634]: time="2025-11-06T00:35:17.598870516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9gqpf,Uid:742cb697-ed21-44bf-9196-cb54f5de2da4,Namespace:kube-system,Attempt:0,}" Nov 6 00:35:17.644113 containerd[1634]: time="2025-11-06T00:35:17.644059881Z" level=info msg="connecting to shim e06d0ed58eaa1130b7ee5134a2be7de9265e83cc908b0f02678cad7c6d09c240" address="unix:///run/containerd/s/c8e2ac5eefafbb27274f8fa7e485acdbb8afad2120f40026897da51ed2d8d134" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:35:17.696192 kubelet[2824]: E1106 00:35:17.696046 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:17.707865 systemd[1]: Started 
cri-containerd-e06d0ed58eaa1130b7ee5134a2be7de9265e83cc908b0f02678cad7c6d09c240.scope - libcontainer container e06d0ed58eaa1130b7ee5134a2be7de9265e83cc908b0f02678cad7c6d09c240. Nov 6 00:35:17.737924 containerd[1634]: time="2025-11-06T00:35:17.737869961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9gqpf,Uid:742cb697-ed21-44bf-9196-cb54f5de2da4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e06d0ed58eaa1130b7ee5134a2be7de9265e83cc908b0f02678cad7c6d09c240\"" Nov 6 00:35:17.738594 kubelet[2824]: E1106 00:35:17.738573 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:17.744011 containerd[1634]: time="2025-11-06T00:35:17.743979182Z" level=info msg="CreateContainer within sandbox \"e06d0ed58eaa1130b7ee5134a2be7de9265e83cc908b0f02678cad7c6d09c240\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 6 00:35:17.754356 containerd[1634]: time="2025-11-06T00:35:17.754318983Z" level=info msg="Container 69b8172c2568afdb62452652c21085ac1d35b7c45afff122531bb05acbe9ab1a: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:35:17.763154 containerd[1634]: time="2025-11-06T00:35:17.763109293Z" level=info msg="CreateContainer within sandbox \"e06d0ed58eaa1130b7ee5134a2be7de9265e83cc908b0f02678cad7c6d09c240\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"69b8172c2568afdb62452652c21085ac1d35b7c45afff122531bb05acbe9ab1a\"" Nov 6 00:35:17.763655 containerd[1634]: time="2025-11-06T00:35:17.763592031Z" level=info msg="StartContainer for \"69b8172c2568afdb62452652c21085ac1d35b7c45afff122531bb05acbe9ab1a\"" Nov 6 00:35:17.764935 containerd[1634]: time="2025-11-06T00:35:17.764908444Z" level=info msg="connecting to shim 69b8172c2568afdb62452652c21085ac1d35b7c45afff122531bb05acbe9ab1a" address="unix:///run/containerd/s/c8e2ac5eefafbb27274f8fa7e485acdbb8afad2120f40026897da51ed2d8d134" 
protocol=ttrpc version=3 Nov 6 00:35:17.792872 systemd[1]: Started cri-containerd-69b8172c2568afdb62452652c21085ac1d35b7c45afff122531bb05acbe9ab1a.scope - libcontainer container 69b8172c2568afdb62452652c21085ac1d35b7c45afff122531bb05acbe9ab1a. Nov 6 00:35:17.813225 containerd[1634]: time="2025-11-06T00:35:17.813163971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-kwrrs,Uid:e16f7a34-23d1-4c2c-a2a1-76bd5446d471,Namespace:tigera-operator,Attempt:0,}" Nov 6 00:35:17.835879 containerd[1634]: time="2025-11-06T00:35:17.835786797Z" level=info msg="connecting to shim 62c6732329c926ca99b0a2cef16493f66cdc0589b767c40a797d33baac6cc836" address="unix:///run/containerd/s/5e92ce4fa1aa834310c15f3341114b891d7e243296be5950ed0c0006b4d258b0" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:35:17.838055 containerd[1634]: time="2025-11-06T00:35:17.838009204Z" level=info msg="StartContainer for \"69b8172c2568afdb62452652c21085ac1d35b7c45afff122531bb05acbe9ab1a\" returns successfully" Nov 6 00:35:17.864874 systemd[1]: Started cri-containerd-62c6732329c926ca99b0a2cef16493f66cdc0589b767c40a797d33baac6cc836.scope - libcontainer container 62c6732329c926ca99b0a2cef16493f66cdc0589b767c40a797d33baac6cc836. 
Nov 6 00:35:17.913581 containerd[1634]: time="2025-11-06T00:35:17.913520213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-kwrrs,Uid:e16f7a34-23d1-4c2c-a2a1-76bd5446d471,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"62c6732329c926ca99b0a2cef16493f66cdc0589b767c40a797d33baac6cc836\"" Nov 6 00:35:17.916941 containerd[1634]: time="2025-11-06T00:35:17.916290670Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 6 00:35:18.424973 kubelet[2824]: E1106 00:35:18.424720 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:18.424973 kubelet[2824]: E1106 00:35:18.424770 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:18.424973 kubelet[2824]: E1106 00:35:18.424893 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:18.443141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount120163284.mount: Deactivated successfully. Nov 6 00:35:18.460975 kubelet[2824]: I1106 00:35:18.460820 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9gqpf" podStartSLOduration=1.460800614 podStartE2EDuration="1.460800614s" podCreationTimestamp="2025-11-06 00:35:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:35:18.460723589 +0000 UTC m=+8.367708665" watchObservedRunningTime="2025-11-06 00:35:18.460800614 +0000 UTC m=+8.367785680" Nov 6 00:35:19.189577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3422842385.mount: Deactivated successfully. 
Nov 6 00:35:19.546424 containerd[1634]: time="2025-11-06T00:35:19.546299416Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:35:19.547262 containerd[1634]: time="2025-11-06T00:35:19.547181393Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 6 00:35:19.548696 containerd[1634]: time="2025-11-06T00:35:19.548613243Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:35:19.552126 containerd[1634]: time="2025-11-06T00:35:19.552077161Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:35:19.552945 containerd[1634]: time="2025-11-06T00:35:19.552884378Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.636521864s" Nov 6 00:35:19.552945 containerd[1634]: time="2025-11-06T00:35:19.552940904Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 6 00:35:19.557874 containerd[1634]: time="2025-11-06T00:35:19.557820672Z" level=info msg="CreateContainer within sandbox \"62c6732329c926ca99b0a2cef16493f66cdc0589b767c40a797d33baac6cc836\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 6 00:35:19.565045 containerd[1634]: time="2025-11-06T00:35:19.564990934Z" level=info msg="Container 
66361299326c9982dd78e7f040d4704eaac4950bf19eec630b08267ac1afc350: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:35:19.568612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2496089434.mount: Deactivated successfully. Nov 6 00:35:19.570872 containerd[1634]: time="2025-11-06T00:35:19.570826669Z" level=info msg="CreateContainer within sandbox \"62c6732329c926ca99b0a2cef16493f66cdc0589b767c40a797d33baac6cc836\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"66361299326c9982dd78e7f040d4704eaac4950bf19eec630b08267ac1afc350\"" Nov 6 00:35:19.571498 containerd[1634]: time="2025-11-06T00:35:19.571457925Z" level=info msg="StartContainer for \"66361299326c9982dd78e7f040d4704eaac4950bf19eec630b08267ac1afc350\"" Nov 6 00:35:19.572323 containerd[1634]: time="2025-11-06T00:35:19.572297132Z" level=info msg="connecting to shim 66361299326c9982dd78e7f040d4704eaac4950bf19eec630b08267ac1afc350" address="unix:///run/containerd/s/5e92ce4fa1aa834310c15f3341114b891d7e243296be5950ed0c0006b4d258b0" protocol=ttrpc version=3 Nov 6 00:35:19.602956 systemd[1]: Started cri-containerd-66361299326c9982dd78e7f040d4704eaac4950bf19eec630b08267ac1afc350.scope - libcontainer container 66361299326c9982dd78e7f040d4704eaac4950bf19eec630b08267ac1afc350. 
Nov 6 00:35:19.634920 containerd[1634]: time="2025-11-06T00:35:19.634858574Z" level=info msg="StartContainer for \"66361299326c9982dd78e7f040d4704eaac4950bf19eec630b08267ac1afc350\" returns successfully" Nov 6 00:35:23.174798 kubelet[2824]: E1106 00:35:23.174741 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:23.189809 kubelet[2824]: I1106 00:35:23.189725 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-kwrrs" podStartSLOduration=4.551214384 podStartE2EDuration="6.189697923s" podCreationTimestamp="2025-11-06 00:35:17 +0000 UTC" firstStartedPulling="2025-11-06 00:35:17.915282345 +0000 UTC m=+7.822267401" lastFinishedPulling="2025-11-06 00:35:19.553765864 +0000 UTC m=+9.460750940" observedRunningTime="2025-11-06 00:35:20.438081516 +0000 UTC m=+10.345066582" watchObservedRunningTime="2025-11-06 00:35:23.189697923 +0000 UTC m=+13.096682990" Nov 6 00:35:23.435479 kubelet[2824]: E1106 00:35:23.435334 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:25.233487 sudo[1880]: pam_unix(sudo:session): session closed for user root Nov 6 00:35:25.236976 sshd[1879]: Connection closed by 10.0.0.1 port 51994 Nov 6 00:35:25.237328 sshd-session[1876]: pam_unix(sshd:session): session closed for user core Nov 6 00:35:25.243236 systemd[1]: sshd@8-10.0.0.137:22-10.0.0.1:51994.service: Deactivated successfully. Nov 6 00:35:25.246363 systemd[1]: session-9.scope: Deactivated successfully. Nov 6 00:35:25.246703 systemd[1]: session-9.scope: Consumed 5.987s CPU time, 211.7M memory peak. Nov 6 00:35:25.250211 systemd-logind[1613]: Session 9 logged out. Waiting for processes to exit. Nov 6 00:35:25.252457 systemd-logind[1613]: Removed session 9. 
Nov 6 00:35:29.161845 systemd[1]: Created slice kubepods-besteffort-pod8f07e8b9_6c66_4b15_8cdf_f8f677ba6bd0.slice - libcontainer container kubepods-besteffort-pod8f07e8b9_6c66_4b15_8cdf_f8f677ba6bd0.slice. Nov 6 00:35:29.202086 kubelet[2824]: I1106 00:35:29.202030 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8f07e8b9-6c66-4b15-8cdf-f8f677ba6bd0-typha-certs\") pod \"calico-typha-6f45d8cd7d-99txx\" (UID: \"8f07e8b9-6c66-4b15-8cdf-f8f677ba6bd0\") " pod="calico-system/calico-typha-6f45d8cd7d-99txx" Nov 6 00:35:29.202086 kubelet[2824]: I1106 00:35:29.202075 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f07e8b9-6c66-4b15-8cdf-f8f677ba6bd0-tigera-ca-bundle\") pod \"calico-typha-6f45d8cd7d-99txx\" (UID: \"8f07e8b9-6c66-4b15-8cdf-f8f677ba6bd0\") " pod="calico-system/calico-typha-6f45d8cd7d-99txx" Nov 6 00:35:29.202086 kubelet[2824]: I1106 00:35:29.202093 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxknh\" (UniqueName: \"kubernetes.io/projected/8f07e8b9-6c66-4b15-8cdf-f8f677ba6bd0-kube-api-access-kxknh\") pod \"calico-typha-6f45d8cd7d-99txx\" (UID: \"8f07e8b9-6c66-4b15-8cdf-f8f677ba6bd0\") " pod="calico-system/calico-typha-6f45d8cd7d-99txx" Nov 6 00:35:29.452185 systemd[1]: Created slice kubepods-besteffort-pod86c26c7a_42a3_451f_bcd0_f9f1cc4bcde3.slice - libcontainer container kubepods-besteffort-pod86c26c7a_42a3_451f_bcd0_f9f1cc4bcde3.slice. 
Nov 6 00:35:29.467151 kubelet[2824]: E1106 00:35:29.467105 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:29.468018 containerd[1634]: time="2025-11-06T00:35:29.467960558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f45d8cd7d-99txx,Uid:8f07e8b9-6c66-4b15-8cdf-f8f677ba6bd0,Namespace:calico-system,Attempt:0,}" Nov 6 00:35:29.489240 containerd[1634]: time="2025-11-06T00:35:29.489181835Z" level=info msg="connecting to shim 0eb36c8cf4b6796ba0b5f22ec508ab9e33b65f7873bb12cf08ca28f7719bb7b0" address="unix:///run/containerd/s/460072502a4cf030e798d2f409e0d69565e185df07df98d0a5519e15aaa2f174" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:35:29.504647 kubelet[2824]: I1106 00:35:29.504549 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3-cni-net-dir\") pod \"calico-node-74zgq\" (UID: \"86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3\") " pod="calico-system/calico-node-74zgq" Nov 6 00:35:29.504647 kubelet[2824]: I1106 00:35:29.504594 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3-node-certs\") pod \"calico-node-74zgq\" (UID: \"86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3\") " pod="calico-system/calico-node-74zgq" Nov 6 00:35:29.504647 kubelet[2824]: I1106 00:35:29.504612 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk2m5\" (UniqueName: \"kubernetes.io/projected/86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3-kube-api-access-zk2m5\") pod \"calico-node-74zgq\" (UID: \"86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3\") " pod="calico-system/calico-node-74zgq" Nov 6 00:35:29.505111 kubelet[2824]: I1106 
00:35:29.504934 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3-cni-bin-dir\") pod \"calico-node-74zgq\" (UID: \"86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3\") " pod="calico-system/calico-node-74zgq" Nov 6 00:35:29.505393 kubelet[2824]: I1106 00:35:29.505202 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3-cni-log-dir\") pod \"calico-node-74zgq\" (UID: \"86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3\") " pod="calico-system/calico-node-74zgq" Nov 6 00:35:29.505393 kubelet[2824]: I1106 00:35:29.505229 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3-xtables-lock\") pod \"calico-node-74zgq\" (UID: \"86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3\") " pod="calico-system/calico-node-74zgq" Nov 6 00:35:29.505393 kubelet[2824]: I1106 00:35:29.505253 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3-flexvol-driver-host\") pod \"calico-node-74zgq\" (UID: \"86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3\") " pod="calico-system/calico-node-74zgq" Nov 6 00:35:29.505393 kubelet[2824]: I1106 00:35:29.505271 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3-policysync\") pod \"calico-node-74zgq\" (UID: \"86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3\") " pod="calico-system/calico-node-74zgq" Nov 6 00:35:29.505393 kubelet[2824]: I1106 00:35:29.505286 2824 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3-lib-modules\") pod \"calico-node-74zgq\" (UID: \"86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3\") " pod="calico-system/calico-node-74zgq" Nov 6 00:35:29.505535 kubelet[2824]: I1106 00:35:29.505299 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3-tigera-ca-bundle\") pod \"calico-node-74zgq\" (UID: \"86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3\") " pod="calico-system/calico-node-74zgq" Nov 6 00:35:29.505535 kubelet[2824]: I1106 00:35:29.505313 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3-var-lib-calico\") pod \"calico-node-74zgq\" (UID: \"86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3\") " pod="calico-system/calico-node-74zgq" Nov 6 00:35:29.505535 kubelet[2824]: I1106 00:35:29.505327 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3-var-run-calico\") pod \"calico-node-74zgq\" (UID: \"86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3\") " pod="calico-system/calico-node-74zgq" Nov 6 00:35:29.518814 systemd[1]: Started cri-containerd-0eb36c8cf4b6796ba0b5f22ec508ab9e33b65f7873bb12cf08ca28f7719bb7b0.scope - libcontainer container 0eb36c8cf4b6796ba0b5f22ec508ab9e33b65f7873bb12cf08ca28f7719bb7b0. 
Nov 6 00:35:29.565579 kubelet[2824]: E1106 00:35:29.565512 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xp8pl" podUID="299ba27c-7f4c-4b4c-bf27-d7e11dc57242" Nov 6 00:35:29.582978 containerd[1634]: time="2025-11-06T00:35:29.582921983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f45d8cd7d-99txx,Uid:8f07e8b9-6c66-4b15-8cdf-f8f677ba6bd0,Namespace:calico-system,Attempt:0,} returns sandbox id \"0eb36c8cf4b6796ba0b5f22ec508ab9e33b65f7873bb12cf08ca28f7719bb7b0\"" Nov 6 00:35:29.583545 kubelet[2824]: E1106 00:35:29.583514 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:29.584546 containerd[1634]: time="2025-11-06T00:35:29.584515294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 6 00:35:29.605788 kubelet[2824]: I1106 00:35:29.605758 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/299ba27c-7f4c-4b4c-bf27-d7e11dc57242-kubelet-dir\") pod \"csi-node-driver-xp8pl\" (UID: \"299ba27c-7f4c-4b4c-bf27-d7e11dc57242\") " pod="calico-system/csi-node-driver-xp8pl" Nov 6 00:35:29.605946 kubelet[2824]: I1106 00:35:29.605916 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/299ba27c-7f4c-4b4c-bf27-d7e11dc57242-socket-dir\") pod \"csi-node-driver-xp8pl\" (UID: \"299ba27c-7f4c-4b4c-bf27-d7e11dc57242\") " pod="calico-system/csi-node-driver-xp8pl" Nov 6 00:35:29.606186 kubelet[2824]: I1106 00:35:29.606164 2824 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/299ba27c-7f4c-4b4c-bf27-d7e11dc57242-registration-dir\") pod \"csi-node-driver-xp8pl\" (UID: \"299ba27c-7f4c-4b4c-bf27-d7e11dc57242\") " pod="calico-system/csi-node-driver-xp8pl" Nov 6 00:35:29.606231 kubelet[2824]: I1106 00:35:29.606218 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt7hl\" (UniqueName: \"kubernetes.io/projected/299ba27c-7f4c-4b4c-bf27-d7e11dc57242-kube-api-access-zt7hl\") pod \"csi-node-driver-xp8pl\" (UID: \"299ba27c-7f4c-4b4c-bf27-d7e11dc57242\") " pod="calico-system/csi-node-driver-xp8pl" Nov 6 00:35:29.606712 kubelet[2824]: E1106 00:35:29.606683 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.606712 kubelet[2824]: W1106 00:35:29.606710 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.608482 kubelet[2824]: E1106 00:35:29.608458 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:29.608741 kubelet[2824]: E1106 00:35:29.608714 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.608741 kubelet[2824]: W1106 00:35:29.608731 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.608741 kubelet[2824]: E1106 00:35:29.608742 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:29.612047 kubelet[2824]: E1106 00:35:29.611885 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.612047 kubelet[2824]: W1106 00:35:29.611904 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.612047 kubelet[2824]: E1106 00:35:29.611924 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:29.613929 kubelet[2824]: E1106 00:35:29.613896 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.613929 kubelet[2824]: W1106 00:35:29.613923 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.614003 kubelet[2824]: E1106 00:35:29.613940 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:29.614182 kubelet[2824]: E1106 00:35:29.614161 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.614182 kubelet[2824]: W1106 00:35:29.614176 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.614252 kubelet[2824]: E1106 00:35:29.614186 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:29.614398 kubelet[2824]: E1106 00:35:29.614375 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.614398 kubelet[2824]: W1106 00:35:29.614389 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.614398 kubelet[2824]: E1106 00:35:29.614398 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:29.614601 kubelet[2824]: E1106 00:35:29.614585 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.614601 kubelet[2824]: W1106 00:35:29.614596 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.614671 kubelet[2824]: E1106 00:35:29.614605 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:29.614922 kubelet[2824]: E1106 00:35:29.614903 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.614922 kubelet[2824]: W1106 00:35:29.614917 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.614996 kubelet[2824]: E1106 00:35:29.614928 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:29.615480 kubelet[2824]: E1106 00:35:29.615358 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.615480 kubelet[2824]: W1106 00:35:29.615371 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.615480 kubelet[2824]: E1106 00:35:29.615382 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:29.617382 kubelet[2824]: E1106 00:35:29.617340 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.617382 kubelet[2824]: W1106 00:35:29.617356 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.617382 kubelet[2824]: E1106 00:35:29.617367 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:29.618190 kubelet[2824]: E1106 00:35:29.618167 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.618190 kubelet[2824]: W1106 00:35:29.618183 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.618190 kubelet[2824]: E1106 00:35:29.618195 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:29.618446 kubelet[2824]: E1106 00:35:29.618425 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.618446 kubelet[2824]: W1106 00:35:29.618440 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.618524 kubelet[2824]: E1106 00:35:29.618451 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:29.618524 kubelet[2824]: I1106 00:35:29.618479 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/299ba27c-7f4c-4b4c-bf27-d7e11dc57242-varrun\") pod \"csi-node-driver-xp8pl\" (UID: \"299ba27c-7f4c-4b4c-bf27-d7e11dc57242\") " pod="calico-system/csi-node-driver-xp8pl" Nov 6 00:35:29.618809 kubelet[2824]: E1106 00:35:29.618759 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.618809 kubelet[2824]: W1106 00:35:29.618813 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.618991 kubelet[2824]: E1106 00:35:29.618840 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:29.619097 kubelet[2824]: E1106 00:35:29.619084 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.619097 kubelet[2824]: W1106 00:35:29.619094 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.619173 kubelet[2824]: E1106 00:35:29.619113 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:29.619384 kubelet[2824]: E1106 00:35:29.619358 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.619384 kubelet[2824]: W1106 00:35:29.619378 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.619474 kubelet[2824]: E1106 00:35:29.619394 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:29.619603 kubelet[2824]: E1106 00:35:29.619586 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.619603 kubelet[2824]: W1106 00:35:29.619597 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.619691 kubelet[2824]: E1106 00:35:29.619613 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:29.620009 kubelet[2824]: E1106 00:35:29.619991 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.620009 kubelet[2824]: W1106 00:35:29.620007 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.620598 kubelet[2824]: E1106 00:35:29.620018 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:29.620828 kubelet[2824]: E1106 00:35:29.620810 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.620828 kubelet[2824]: W1106 00:35:29.620824 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.620891 kubelet[2824]: E1106 00:35:29.620834 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:29.628410 kubelet[2824]: E1106 00:35:29.628378 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.628550 kubelet[2824]: W1106 00:35:29.628420 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.628550 kubelet[2824]: E1106 00:35:29.628437 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:29.721239 kubelet[2824]: E1106 00:35:29.721189 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.721239 kubelet[2824]: W1106 00:35:29.721212 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.721239 kubelet[2824]: E1106 00:35:29.721232 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:29.721494 kubelet[2824]: E1106 00:35:29.721474 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.721494 kubelet[2824]: W1106 00:35:29.721486 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.721494 kubelet[2824]: E1106 00:35:29.721495 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:29.721736 kubelet[2824]: E1106 00:35:29.721711 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.721736 kubelet[2824]: W1106 00:35:29.721725 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.721736 kubelet[2824]: E1106 00:35:29.721734 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:29.721922 kubelet[2824]: E1106 00:35:29.721907 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.721922 kubelet[2824]: W1106 00:35:29.721918 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.721964 kubelet[2824]: E1106 00:35:29.721926 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:29.722157 kubelet[2824]: E1106 00:35:29.722126 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.722157 kubelet[2824]: W1106 00:35:29.722137 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.722157 kubelet[2824]: E1106 00:35:29.722145 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:29.722450 kubelet[2824]: E1106 00:35:29.722417 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.722450 kubelet[2824]: W1106 00:35:29.722441 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.722497 kubelet[2824]: E1106 00:35:29.722460 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:29.722668 kubelet[2824]: E1106 00:35:29.722651 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.722668 kubelet[2824]: W1106 00:35:29.722665 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.722727 kubelet[2824]: E1106 00:35:29.722674 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:29.722861 kubelet[2824]: E1106 00:35:29.722845 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.722861 kubelet[2824]: W1106 00:35:29.722855 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.722907 kubelet[2824]: E1106 00:35:29.722864 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:29.723044 kubelet[2824]: E1106 00:35:29.723030 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.723044 kubelet[2824]: W1106 00:35:29.723040 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.723091 kubelet[2824]: E1106 00:35:29.723049 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:29.723248 kubelet[2824]: E1106 00:35:29.723232 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.723248 kubelet[2824]: W1106 00:35:29.723243 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.723296 kubelet[2824]: E1106 00:35:29.723251 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:29.723425 kubelet[2824]: E1106 00:35:29.723410 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.723425 kubelet[2824]: W1106 00:35:29.723420 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.723478 kubelet[2824]: E1106 00:35:29.723428 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:29.723617 kubelet[2824]: E1106 00:35:29.723602 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.723617 kubelet[2824]: W1106 00:35:29.723613 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.723692 kubelet[2824]: E1106 00:35:29.723621 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:29.723845 kubelet[2824]: E1106 00:35:29.723830 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.723869 kubelet[2824]: W1106 00:35:29.723850 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.723869 kubelet[2824]: E1106 00:35:29.723859 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:29.724069 kubelet[2824]: E1106 00:35:29.724052 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.724069 kubelet[2824]: W1106 00:35:29.724062 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.724069 kubelet[2824]: E1106 00:35:29.724070 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:29.724262 kubelet[2824]: E1106 00:35:29.724241 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.724262 kubelet[2824]: W1106 00:35:29.724252 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.724262 kubelet[2824]: E1106 00:35:29.724260 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:29.724498 kubelet[2824]: E1106 00:35:29.724482 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.724498 kubelet[2824]: W1106 00:35:29.724495 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.724548 kubelet[2824]: E1106 00:35:29.724505 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:29.724691 kubelet[2824]: E1106 00:35:29.724675 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.724691 kubelet[2824]: W1106 00:35:29.724686 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.724742 kubelet[2824]: E1106 00:35:29.724695 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:29.724883 kubelet[2824]: E1106 00:35:29.724868 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.724883 kubelet[2824]: W1106 00:35:29.724878 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.724946 kubelet[2824]: E1106 00:35:29.724887 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:29.725051 kubelet[2824]: E1106 00:35:29.725036 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.725051 kubelet[2824]: W1106 00:35:29.725046 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.725105 kubelet[2824]: E1106 00:35:29.725053 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:29.725255 kubelet[2824]: E1106 00:35:29.725240 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.725255 kubelet[2824]: W1106 00:35:29.725250 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.725306 kubelet[2824]: E1106 00:35:29.725259 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:29.725424 kubelet[2824]: E1106 00:35:29.725410 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.725424 kubelet[2824]: W1106 00:35:29.725421 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.725467 kubelet[2824]: E1106 00:35:29.725429 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:29.725658 kubelet[2824]: E1106 00:35:29.725628 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.725658 kubelet[2824]: W1106 00:35:29.725653 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.725717 kubelet[2824]: E1106 00:35:29.725663 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:29.725834 kubelet[2824]: E1106 00:35:29.725819 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.725834 kubelet[2824]: W1106 00:35:29.725830 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.725877 kubelet[2824]: E1106 00:35:29.725837 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:29.726025 kubelet[2824]: E1106 00:35:29.726010 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.726025 kubelet[2824]: W1106 00:35:29.726021 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.726078 kubelet[2824]: E1106 00:35:29.726030 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:29.726493 kubelet[2824]: E1106 00:35:29.726467 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.726493 kubelet[2824]: W1106 00:35:29.726490 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.726566 kubelet[2824]: E1106 00:35:29.726511 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:29.733593 kubelet[2824]: E1106 00:35:29.733542 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:29.733593 kubelet[2824]: W1106 00:35:29.733557 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:29.733593 kubelet[2824]: E1106 00:35:29.733568 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:29.755695 kubelet[2824]: E1106 00:35:29.755650 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:29.756780 containerd[1634]: time="2025-11-06T00:35:29.756713310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-74zgq,Uid:86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3,Namespace:calico-system,Attempt:0,}" Nov 6 00:35:29.777941 containerd[1634]: time="2025-11-06T00:35:29.777902366Z" level=info msg="connecting to shim 5e25943d57060e9afb9cfcf5e5494ad12650796a45acaca04a58143a0907af90" address="unix:///run/containerd/s/af27c412662fafe7c8011099e85052605dff1e97b4c03bed69cd3c035903803a" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:35:29.801790 systemd[1]: Started cri-containerd-5e25943d57060e9afb9cfcf5e5494ad12650796a45acaca04a58143a0907af90.scope - libcontainer container 5e25943d57060e9afb9cfcf5e5494ad12650796a45acaca04a58143a0907af90. Nov 6 00:35:29.832987 containerd[1634]: time="2025-11-06T00:35:29.832940463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-74zgq,Uid:86c26c7a-42a3-451f-bcd0-f9f1cc4bcde3,Namespace:calico-system,Attempt:0,} returns sandbox id \"5e25943d57060e9afb9cfcf5e5494ad12650796a45acaca04a58143a0907af90\"" Nov 6 00:35:29.833554 kubelet[2824]: E1106 00:35:29.833521 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:30.952460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3593746443.mount: Deactivated successfully. 
Nov 6 00:35:31.400172 kubelet[2824]: E1106 00:35:31.400080 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xp8pl" podUID="299ba27c-7f4c-4b4c-bf27-d7e11dc57242" Nov 6 00:35:32.387434 containerd[1634]: time="2025-11-06T00:35:32.387340560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:35:32.388144 containerd[1634]: time="2025-11-06T00:35:32.388096549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 6 00:35:32.389197 containerd[1634]: time="2025-11-06T00:35:32.389155977Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:35:32.391083 containerd[1634]: time="2025-11-06T00:35:32.391044521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:35:32.391581 containerd[1634]: time="2025-11-06T00:35:32.391534150Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.806991425s" Nov 6 00:35:32.391581 containerd[1634]: time="2025-11-06T00:35:32.391579585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" 
Nov 6 00:35:32.392836 containerd[1634]: time="2025-11-06T00:35:32.392804465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 6 00:35:32.404111 containerd[1634]: time="2025-11-06T00:35:32.404064578Z" level=info msg="CreateContainer within sandbox \"0eb36c8cf4b6796ba0b5f22ec508ab9e33b65f7873bb12cf08ca28f7719bb7b0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 6 00:35:32.411474 containerd[1634]: time="2025-11-06T00:35:32.411439390Z" level=info msg="Container b310c481c1f7c19b06d067c3b94a234b2b7ac79afe7e872615d5517c303c83e5: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:35:32.418165 containerd[1634]: time="2025-11-06T00:35:32.418059384Z" level=info msg="CreateContainer within sandbox \"0eb36c8cf4b6796ba0b5f22ec508ab9e33b65f7873bb12cf08ca28f7719bb7b0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b310c481c1f7c19b06d067c3b94a234b2b7ac79afe7e872615d5517c303c83e5\"" Nov 6 00:35:32.418538 containerd[1634]: time="2025-11-06T00:35:32.418520039Z" level=info msg="StartContainer for \"b310c481c1f7c19b06d067c3b94a234b2b7ac79afe7e872615d5517c303c83e5\"" Nov 6 00:35:32.419522 containerd[1634]: time="2025-11-06T00:35:32.419494768Z" level=info msg="connecting to shim b310c481c1f7c19b06d067c3b94a234b2b7ac79afe7e872615d5517c303c83e5" address="unix:///run/containerd/s/460072502a4cf030e798d2f409e0d69565e185df07df98d0a5519e15aaa2f174" protocol=ttrpc version=3 Nov 6 00:35:32.447897 systemd[1]: Started cri-containerd-b310c481c1f7c19b06d067c3b94a234b2b7ac79afe7e872615d5517c303c83e5.scope - libcontainer container b310c481c1f7c19b06d067c3b94a234b2b7ac79afe7e872615d5517c303c83e5. 
Nov 6 00:35:32.501849 containerd[1634]: time="2025-11-06T00:35:32.501694686Z" level=info msg="StartContainer for \"b310c481c1f7c19b06d067c3b94a234b2b7ac79afe7e872615d5517c303c83e5\" returns successfully" Nov 6 00:35:33.403128 kubelet[2824]: E1106 00:35:33.400826 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xp8pl" podUID="299ba27c-7f4c-4b4c-bf27-d7e11dc57242" Nov 6 00:35:33.498238 kubelet[2824]: E1106 00:35:33.494706 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:33.522155 kubelet[2824]: E1106 00:35:33.514957 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.522155 kubelet[2824]: W1106 00:35:33.514993 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.522155 kubelet[2824]: E1106 00:35:33.515017 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:33.522155 kubelet[2824]: E1106 00:35:33.519751 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.522155 kubelet[2824]: W1106 00:35:33.519779 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.522155 kubelet[2824]: E1106 00:35:33.519805 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:33.522155 kubelet[2824]: E1106 00:35:33.520745 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.522155 kubelet[2824]: W1106 00:35:33.520759 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.522155 kubelet[2824]: E1106 00:35:33.520771 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:33.522155 kubelet[2824]: E1106 00:35:33.521743 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.522587 kubelet[2824]: W1106 00:35:33.521756 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.522587 kubelet[2824]: E1106 00:35:33.521770 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:33.522714 kubelet[2824]: E1106 00:35:33.522688 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.522754 kubelet[2824]: W1106 00:35:33.522729 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.522754 kubelet[2824]: E1106 00:35:33.522745 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:33.525093 kubelet[2824]: E1106 00:35:33.524202 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.525093 kubelet[2824]: W1106 00:35:33.524217 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.525093 kubelet[2824]: E1106 00:35:33.524230 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:33.525745 kubelet[2824]: E1106 00:35:33.525718 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.525745 kubelet[2824]: W1106 00:35:33.525737 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.525745 kubelet[2824]: E1106 00:35:33.525750 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:33.531122 kubelet[2824]: E1106 00:35:33.528384 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.531122 kubelet[2824]: W1106 00:35:33.528400 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.531122 kubelet[2824]: E1106 00:35:33.528414 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:33.531122 kubelet[2824]: E1106 00:35:33.529687 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.531122 kubelet[2824]: W1106 00:35:33.529699 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.531122 kubelet[2824]: E1106 00:35:33.529711 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:33.531122 kubelet[2824]: E1106 00:35:33.529923 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.531122 kubelet[2824]: W1106 00:35:33.529931 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.531122 kubelet[2824]: E1106 00:35:33.529941 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:33.531122 kubelet[2824]: E1106 00:35:33.531041 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.531617 kubelet[2824]: W1106 00:35:33.531067 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.531617 kubelet[2824]: E1106 00:35:33.531078 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:33.542845 kubelet[2824]: E1106 00:35:33.542801 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.542845 kubelet[2824]: W1106 00:35:33.542832 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.542845 kubelet[2824]: E1106 00:35:33.542854 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:33.546105 kubelet[2824]: E1106 00:35:33.545368 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.546105 kubelet[2824]: W1106 00:35:33.545398 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.546105 kubelet[2824]: E1106 00:35:33.545423 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:33.546105 kubelet[2824]: E1106 00:35:33.545707 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.546105 kubelet[2824]: W1106 00:35:33.545718 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.546105 kubelet[2824]: E1106 00:35:33.545729 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:33.547392 kubelet[2824]: E1106 00:35:33.547351 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.547469 kubelet[2824]: W1106 00:35:33.547426 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.547469 kubelet[2824]: E1106 00:35:33.547446 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:33.569252 kubelet[2824]: E1106 00:35:33.563086 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.569252 kubelet[2824]: W1106 00:35:33.569136 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.569252 kubelet[2824]: E1106 00:35:33.569189 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:33.572091 kubelet[2824]: E1106 00:35:33.571379 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.572091 kubelet[2824]: W1106 00:35:33.571404 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.572091 kubelet[2824]: E1106 00:35:33.571423 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:33.572091 kubelet[2824]: E1106 00:35:33.571853 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.572091 kubelet[2824]: W1106 00:35:33.571865 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.572091 kubelet[2824]: E1106 00:35:33.571878 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:33.583120 kubelet[2824]: E1106 00:35:33.580282 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.583120 kubelet[2824]: W1106 00:35:33.580322 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.583120 kubelet[2824]: E1106 00:35:33.580354 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:33.583120 kubelet[2824]: E1106 00:35:33.580795 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.583120 kubelet[2824]: W1106 00:35:33.580811 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.583120 kubelet[2824]: E1106 00:35:33.580822 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:33.583120 kubelet[2824]: E1106 00:35:33.581126 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.583120 kubelet[2824]: W1106 00:35:33.581137 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.583120 kubelet[2824]: E1106 00:35:33.581147 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:33.587014 kubelet[2824]: I1106 00:35:33.586622 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6f45d8cd7d-99txx" podStartSLOduration=1.77858978 podStartE2EDuration="4.586577796s" podCreationTimestamp="2025-11-06 00:35:29 +0000 UTC" firstStartedPulling="2025-11-06 00:35:29.584259523 +0000 UTC m=+19.491244589" lastFinishedPulling="2025-11-06 00:35:32.392247539 +0000 UTC m=+22.299232605" observedRunningTime="2025-11-06 00:35:33.542360801 +0000 UTC m=+23.449345877" watchObservedRunningTime="2025-11-06 00:35:33.586577796 +0000 UTC m=+23.493562862" Nov 6 00:35:33.596147 kubelet[2824]: E1106 00:35:33.595736 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.596147 kubelet[2824]: W1106 00:35:33.595779 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.596147 kubelet[2824]: E1106 00:35:33.595809 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:33.596147 kubelet[2824]: E1106 00:35:33.596077 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.596147 kubelet[2824]: W1106 00:35:33.596089 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.596147 kubelet[2824]: E1106 00:35:33.596101 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:33.601083 kubelet[2824]: E1106 00:35:33.601014 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.601083 kubelet[2824]: W1106 00:35:33.601068 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.601319 kubelet[2824]: E1106 00:35:33.601157 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:33.602372 kubelet[2824]: E1106 00:35:33.602147 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.602372 kubelet[2824]: W1106 00:35:33.602170 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.602372 kubelet[2824]: E1106 00:35:33.602184 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:33.605869 kubelet[2824]: E1106 00:35:33.602765 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.605869 kubelet[2824]: W1106 00:35:33.602779 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.605869 kubelet[2824]: E1106 00:35:33.602793 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:33.608181 kubelet[2824]: E1106 00:35:33.606481 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.608181 kubelet[2824]: W1106 00:35:33.606510 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.608181 kubelet[2824]: E1106 00:35:33.606537 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:33.609681 kubelet[2824]: E1106 00:35:33.608891 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.609681 kubelet[2824]: W1106 00:35:33.608916 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.609681 kubelet[2824]: E1106 00:35:33.608938 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:33.611849 kubelet[2824]: E1106 00:35:33.610191 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.611849 kubelet[2824]: W1106 00:35:33.610207 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.611849 kubelet[2824]: E1106 00:35:33.610223 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:33.613034 kubelet[2824]: E1106 00:35:33.612273 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.613034 kubelet[2824]: W1106 00:35:33.612299 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.613034 kubelet[2824]: E1106 00:35:33.612317 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:33.617068 kubelet[2824]: E1106 00:35:33.616118 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.617068 kubelet[2824]: W1106 00:35:33.616145 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.617068 kubelet[2824]: E1106 00:35:33.616169 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:33.619087 kubelet[2824]: E1106 00:35:33.617625 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.619087 kubelet[2824]: W1106 00:35:33.617684 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.619087 kubelet[2824]: E1106 00:35:33.617699 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:35:33.619579 kubelet[2824]: E1106 00:35:33.619546 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:35:33.619579 kubelet[2824]: W1106 00:35:33.619571 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:35:33.619788 kubelet[2824]: E1106 00:35:33.619583 2824 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:35:33.971829 containerd[1634]: time="2025-11-06T00:35:33.971687406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:35:33.973752 containerd[1634]: time="2025-11-06T00:35:33.973676349Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 6 00:35:33.982100 containerd[1634]: time="2025-11-06T00:35:33.980031196Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:35:33.985797 containerd[1634]: time="2025-11-06T00:35:33.985622439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:35:33.994103 containerd[1634]: time="2025-11-06T00:35:33.988335842Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.595489549s" Nov 6 00:35:33.994103 containerd[1634]: time="2025-11-06T00:35:33.989851076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 6 00:35:34.001213 containerd[1634]: time="2025-11-06T00:35:33.997101352Z" level=info msg="CreateContainer within sandbox \"5e25943d57060e9afb9cfcf5e5494ad12650796a45acaca04a58143a0907af90\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 6 00:35:34.033085 containerd[1634]: time="2025-11-06T00:35:34.031547679Z" level=info msg="Container fe676306f2cf4dbd479bc7b52dc49667bce0475e82efb220c1123f08e7e09722: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:35:34.050616 containerd[1634]: time="2025-11-06T00:35:34.050357659Z" level=info msg="CreateContainer within sandbox \"5e25943d57060e9afb9cfcf5e5494ad12650796a45acaca04a58143a0907af90\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fe676306f2cf4dbd479bc7b52dc49667bce0475e82efb220c1123f08e7e09722\"" Nov 6 00:35:34.052323 containerd[1634]: time="2025-11-06T00:35:34.052281148Z" level=info msg="StartContainer for \"fe676306f2cf4dbd479bc7b52dc49667bce0475e82efb220c1123f08e7e09722\"" Nov 6 00:35:34.056290 containerd[1634]: time="2025-11-06T00:35:34.055993786Z" level=info msg="connecting to shim fe676306f2cf4dbd479bc7b52dc49667bce0475e82efb220c1123f08e7e09722" address="unix:///run/containerd/s/af27c412662fafe7c8011099e85052605dff1e97b4c03bed69cd3c035903803a" protocol=ttrpc version=3 Nov 6 00:35:34.098165 systemd[1]: Started cri-containerd-fe676306f2cf4dbd479bc7b52dc49667bce0475e82efb220c1123f08e7e09722.scope - libcontainer container fe676306f2cf4dbd479bc7b52dc49667bce0475e82efb220c1123f08e7e09722. Nov 6 00:35:34.233315 containerd[1634]: time="2025-11-06T00:35:34.233159519Z" level=info msg="StartContainer for \"fe676306f2cf4dbd479bc7b52dc49667bce0475e82efb220c1123f08e7e09722\" returns successfully" Nov 6 00:35:34.242493 systemd[1]: cri-containerd-fe676306f2cf4dbd479bc7b52dc49667bce0475e82efb220c1123f08e7e09722.scope: Deactivated successfully. 
Nov 6 00:35:34.248357 containerd[1634]: time="2025-11-06T00:35:34.247628493Z" level=info msg="received exit event container_id:\"fe676306f2cf4dbd479bc7b52dc49667bce0475e82efb220c1123f08e7e09722\" id:\"fe676306f2cf4dbd479bc7b52dc49667bce0475e82efb220c1123f08e7e09722\" pid:3510 exited_at:{seconds:1762389334 nanos:246977892}" Nov 6 00:35:34.248357 containerd[1634]: time="2025-11-06T00:35:34.247997997Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe676306f2cf4dbd479bc7b52dc49667bce0475e82efb220c1123f08e7e09722\" id:\"fe676306f2cf4dbd479bc7b52dc49667bce0475e82efb220c1123f08e7e09722\" pid:3510 exited_at:{seconds:1762389334 nanos:246977892}" Nov 6 00:35:34.321802 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe676306f2cf4dbd479bc7b52dc49667bce0475e82efb220c1123f08e7e09722-rootfs.mount: Deactivated successfully. Nov 6 00:35:34.509520 kubelet[2824]: E1106 00:35:34.506374 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:34.510966 kubelet[2824]: E1106 00:35:34.510928 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:35.401306 kubelet[2824]: E1106 00:35:35.401140 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xp8pl" podUID="299ba27c-7f4c-4b4c-bf27-d7e11dc57242" Nov 6 00:35:35.517702 kubelet[2824]: E1106 00:35:35.515301 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:35.517702 kubelet[2824]: E1106 
00:35:35.515605 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:35.518510 containerd[1634]: time="2025-11-06T00:35:35.516693168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 6 00:35:37.403808 kubelet[2824]: E1106 00:35:37.401443 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xp8pl" podUID="299ba27c-7f4c-4b4c-bf27-d7e11dc57242" Nov 6 00:35:39.400774 kubelet[2824]: E1106 00:35:39.400716 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xp8pl" podUID="299ba27c-7f4c-4b4c-bf27-d7e11dc57242" Nov 6 00:35:41.137495 containerd[1634]: time="2025-11-06T00:35:41.137416975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:35:41.138200 containerd[1634]: time="2025-11-06T00:35:41.138170980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 6 00:35:41.197085 containerd[1634]: time="2025-11-06T00:35:41.196997852Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:35:41.233043 containerd[1634]: time="2025-11-06T00:35:41.232963280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:35:41.233811 containerd[1634]: time="2025-11-06T00:35:41.233737072Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.716975995s" Nov 6 00:35:41.233811 containerd[1634]: time="2025-11-06T00:35:41.233793117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 6 00:35:41.238043 containerd[1634]: time="2025-11-06T00:35:41.237513136Z" level=info msg="CreateContainer within sandbox \"5e25943d57060e9afb9cfcf5e5494ad12650796a45acaca04a58143a0907af90\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 6 00:35:41.247184 containerd[1634]: time="2025-11-06T00:35:41.247125561Z" level=info msg="Container d76a3ceb26d2bad81281590ef57a366afddf4e96d3109cb2a0aeb48471250ee2: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:35:41.256003 containerd[1634]: time="2025-11-06T00:35:41.255948724Z" level=info msg="CreateContainer within sandbox \"5e25943d57060e9afb9cfcf5e5494ad12650796a45acaca04a58143a0907af90\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d76a3ceb26d2bad81281590ef57a366afddf4e96d3109cb2a0aeb48471250ee2\"" Nov 6 00:35:41.256433 containerd[1634]: time="2025-11-06T00:35:41.256398830Z" level=info msg="StartContainer for \"d76a3ceb26d2bad81281590ef57a366afddf4e96d3109cb2a0aeb48471250ee2\"" Nov 6 00:35:41.257872 containerd[1634]: time="2025-11-06T00:35:41.257847016Z" level=info msg="connecting to shim d76a3ceb26d2bad81281590ef57a366afddf4e96d3109cb2a0aeb48471250ee2" 
address="unix:///run/containerd/s/af27c412662fafe7c8011099e85052605dff1e97b4c03bed69cd3c035903803a" protocol=ttrpc version=3 Nov 6 00:35:41.289011 systemd[1]: Started cri-containerd-d76a3ceb26d2bad81281590ef57a366afddf4e96d3109cb2a0aeb48471250ee2.scope - libcontainer container d76a3ceb26d2bad81281590ef57a366afddf4e96d3109cb2a0aeb48471250ee2. Nov 6 00:35:41.337659 containerd[1634]: time="2025-11-06T00:35:41.337602123Z" level=info msg="StartContainer for \"d76a3ceb26d2bad81281590ef57a366afddf4e96d3109cb2a0aeb48471250ee2\" returns successfully" Nov 6 00:35:41.400420 kubelet[2824]: E1106 00:35:41.400204 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xp8pl" podUID="299ba27c-7f4c-4b4c-bf27-d7e11dc57242" Nov 6 00:35:41.533607 kubelet[2824]: E1106 00:35:41.533533 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:42.536936 kubelet[2824]: E1106 00:35:42.536873 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:42.771767 systemd[1]: cri-containerd-d76a3ceb26d2bad81281590ef57a366afddf4e96d3109cb2a0aeb48471250ee2.scope: Deactivated successfully. 
Nov 6 00:35:42.772866 containerd[1634]: time="2025-11-06T00:35:42.772815337Z" level=info msg="received exit event container_id:\"d76a3ceb26d2bad81281590ef57a366afddf4e96d3109cb2a0aeb48471250ee2\" id:\"d76a3ceb26d2bad81281590ef57a366afddf4e96d3109cb2a0aeb48471250ee2\" pid:3567 exited_at:{seconds:1762389342 nanos:772524271}" Nov 6 00:35:42.773158 containerd[1634]: time="2025-11-06T00:35:42.772928749Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d76a3ceb26d2bad81281590ef57a366afddf4e96d3109cb2a0aeb48471250ee2\" id:\"d76a3ceb26d2bad81281590ef57a366afddf4e96d3109cb2a0aeb48471250ee2\" pid:3567 exited_at:{seconds:1762389342 nanos:772524271}" Nov 6 00:35:42.774564 systemd[1]: cri-containerd-d76a3ceb26d2bad81281590ef57a366afddf4e96d3109cb2a0aeb48471250ee2.scope: Consumed 671ms CPU time, 177.2M memory peak, 3.7M read from disk, 171.3M written to disk. Nov 6 00:35:42.799555 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d76a3ceb26d2bad81281590ef57a366afddf4e96d3109cb2a0aeb48471250ee2-rootfs.mount: Deactivated successfully. Nov 6 00:35:43.247027 kubelet[2824]: I1106 00:35:43.246930 2824 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 6 00:35:43.407852 systemd[1]: Created slice kubepods-besteffort-pod299ba27c_7f4c_4b4c_bf27_d7e11dc57242.slice - libcontainer container kubepods-besteffort-pod299ba27c_7f4c_4b4c_bf27_d7e11dc57242.slice. 
Nov 6 00:35:43.410481 containerd[1634]: time="2025-11-06T00:35:43.410442269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xp8pl,Uid:299ba27c-7f4c-4b4c-bf27-d7e11dc57242,Namespace:calico-system,Attempt:0,}" Nov 6 00:35:43.582117 kubelet[2824]: E1106 00:35:43.581670 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:43.583841 containerd[1634]: time="2025-11-06T00:35:43.583780227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 6 00:35:43.871564 containerd[1634]: time="2025-11-06T00:35:43.871412252Z" level=error msg="Failed to destroy network for sandbox \"54078cae87c5afc1eea7196cbe1f9ac28ab75e9793c0b6eb831fa24959b4e34d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:43.873772 systemd[1]: run-netns-cni\x2dfa19026a\x2d3519\x2d6bda\x2d5561\x2de364b13a90ce.mount: Deactivated successfully. 
Nov 6 00:35:43.983470 containerd[1634]: time="2025-11-06T00:35:43.983397157Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xp8pl,Uid:299ba27c-7f4c-4b4c-bf27-d7e11dc57242,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"54078cae87c5afc1eea7196cbe1f9ac28ab75e9793c0b6eb831fa24959b4e34d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:43.983734 kubelet[2824]: E1106 00:35:43.983675 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54078cae87c5afc1eea7196cbe1f9ac28ab75e9793c0b6eb831fa24959b4e34d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:43.983845 kubelet[2824]: E1106 00:35:43.983753 2824 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54078cae87c5afc1eea7196cbe1f9ac28ab75e9793c0b6eb831fa24959b4e34d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xp8pl" Nov 6 00:35:43.983845 kubelet[2824]: E1106 00:35:43.983783 2824 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54078cae87c5afc1eea7196cbe1f9ac28ab75e9793c0b6eb831fa24959b4e34d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xp8pl" Nov 6 
00:35:43.983911 kubelet[2824]: E1106 00:35:43.983834 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xp8pl_calico-system(299ba27c-7f4c-4b4c-bf27-d7e11dc57242)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xp8pl_calico-system(299ba27c-7f4c-4b4c-bf27-d7e11dc57242)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54078cae87c5afc1eea7196cbe1f9ac28ab75e9793c0b6eb831fa24959b4e34d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xp8pl" podUID="299ba27c-7f4c-4b4c-bf27-d7e11dc57242" Nov 6 00:35:44.638035 systemd[1]: Created slice kubepods-besteffort-pod92f7c50a_0661_49a3_b7e2_4ee539768f1e.slice - libcontainer container kubepods-besteffort-pod92f7c50a_0661_49a3_b7e2_4ee539768f1e.slice. Nov 6 00:35:44.697418 kubelet[2824]: I1106 00:35:44.697373 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7845\" (UniqueName: \"kubernetes.io/projected/92f7c50a-0661-49a3-b7e2-4ee539768f1e-kube-api-access-h7845\") pod \"whisker-d877dd957-pl5hq\" (UID: \"92f7c50a-0661-49a3-b7e2-4ee539768f1e\") " pod="calico-system/whisker-d877dd957-pl5hq" Nov 6 00:35:44.697418 kubelet[2824]: I1106 00:35:44.697427 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/92f7c50a-0661-49a3-b7e2-4ee539768f1e-whisker-backend-key-pair\") pod \"whisker-d877dd957-pl5hq\" (UID: \"92f7c50a-0661-49a3-b7e2-4ee539768f1e\") " pod="calico-system/whisker-d877dd957-pl5hq" Nov 6 00:35:44.697904 kubelet[2824]: I1106 00:35:44.697445 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92f7c50a-0661-49a3-b7e2-4ee539768f1e-whisker-ca-bundle\") pod \"whisker-d877dd957-pl5hq\" (UID: \"92f7c50a-0661-49a3-b7e2-4ee539768f1e\") " pod="calico-system/whisker-d877dd957-pl5hq" Nov 6 00:35:44.801379 systemd[1]: Created slice kubepods-besteffort-pod37bfba89_7ef7_48f7_8ad4_1de208225932.slice - libcontainer container kubepods-besteffort-pod37bfba89_7ef7_48f7_8ad4_1de208225932.slice. Nov 6 00:35:44.899324 kubelet[2824]: I1106 00:35:44.899060 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spzrv\" (UniqueName: \"kubernetes.io/projected/37bfba89-7ef7-48f7-8ad4-1de208225932-kube-api-access-spzrv\") pod \"calico-apiserver-647c87d985-2h5ss\" (UID: \"37bfba89-7ef7-48f7-8ad4-1de208225932\") " pod="calico-apiserver/calico-apiserver-647c87d985-2h5ss" Nov 6 00:35:44.899324 kubelet[2824]: I1106 00:35:44.899118 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/37bfba89-7ef7-48f7-8ad4-1de208225932-calico-apiserver-certs\") pod \"calico-apiserver-647c87d985-2h5ss\" (UID: \"37bfba89-7ef7-48f7-8ad4-1de208225932\") " pod="calico-apiserver/calico-apiserver-647c87d985-2h5ss" Nov 6 00:35:44.900169 systemd[1]: Created slice kubepods-besteffort-pod58f56521_b0ee_46b1_8476_68ff3e34496b.slice - libcontainer container kubepods-besteffort-pod58f56521_b0ee_46b1_8476_68ff3e34496b.slice. Nov 6 00:35:44.986169 systemd[1]: Created slice kubepods-besteffort-pod30a2a173_30b9_41b2_8ef6_9137cb1fe89a.slice - libcontainer container kubepods-besteffort-pod30a2a173_30b9_41b2_8ef6_9137cb1fe89a.slice. 
Nov 6 00:35:45.000427 kubelet[2824]: I1106 00:35:45.000346 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsb6b\" (UniqueName: \"kubernetes.io/projected/58f56521-b0ee-46b1-8476-68ff3e34496b-kube-api-access-lsb6b\") pod \"calico-apiserver-647c87d985-dskgb\" (UID: \"58f56521-b0ee-46b1-8476-68ff3e34496b\") " pod="calico-apiserver/calico-apiserver-647c87d985-dskgb" Nov 6 00:35:45.000427 kubelet[2824]: I1106 00:35:45.000435 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/58f56521-b0ee-46b1-8476-68ff3e34496b-calico-apiserver-certs\") pod \"calico-apiserver-647c87d985-dskgb\" (UID: \"58f56521-b0ee-46b1-8476-68ff3e34496b\") " pod="calico-apiserver/calico-apiserver-647c87d985-dskgb" Nov 6 00:35:45.101276 kubelet[2824]: I1106 00:35:45.101137 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96qhq\" (UniqueName: \"kubernetes.io/projected/30a2a173-30b9-41b2-8ef6-9137cb1fe89a-kube-api-access-96qhq\") pod \"calico-kube-controllers-75ddbfb7b-znt4c\" (UID: \"30a2a173-30b9-41b2-8ef6-9137cb1fe89a\") " pod="calico-system/calico-kube-controllers-75ddbfb7b-znt4c" Nov 6 00:35:45.101276 kubelet[2824]: I1106 00:35:45.101200 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30a2a173-30b9-41b2-8ef6-9137cb1fe89a-tigera-ca-bundle\") pod \"calico-kube-controllers-75ddbfb7b-znt4c\" (UID: \"30a2a173-30b9-41b2-8ef6-9137cb1fe89a\") " pod="calico-system/calico-kube-controllers-75ddbfb7b-znt4c" Nov 6 00:35:45.177856 systemd[1]: Created slice kubepods-besteffort-podcb6ef055_21f2_4f63_9dca_424807e07ebf.slice - libcontainer container kubepods-besteffort-podcb6ef055_21f2_4f63_9dca_424807e07ebf.slice. 
Nov 6 00:35:45.212960 containerd[1634]: time="2025-11-06T00:35:45.212903394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647c87d985-dskgb,Uid:58f56521-b0ee-46b1-8476-68ff3e34496b,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:35:45.242391 containerd[1634]: time="2025-11-06T00:35:45.242294642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d877dd957-pl5hq,Uid:92f7c50a-0661-49a3-b7e2-4ee539768f1e,Namespace:calico-system,Attempt:0,}" Nov 6 00:35:45.284068 systemd[1]: Created slice kubepods-burstable-pod006a087d_1905_41d2_83ba_5643fdd121c4.slice - libcontainer container kubepods-burstable-pod006a087d_1905_41d2_83ba_5643fdd121c4.slice. Nov 6 00:35:45.290057 containerd[1634]: time="2025-11-06T00:35:45.290004988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75ddbfb7b-znt4c,Uid:30a2a173-30b9-41b2-8ef6-9137cb1fe89a,Namespace:calico-system,Attempt:0,}" Nov 6 00:35:45.303496 kubelet[2824]: I1106 00:35:45.303381 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb6ef055-21f2-4f63-9dca-424807e07ebf-goldmane-ca-bundle\") pod \"goldmane-666569f655-c5bfl\" (UID: \"cb6ef055-21f2-4f63-9dca-424807e07ebf\") " pod="calico-system/goldmane-666569f655-c5bfl" Nov 6 00:35:45.303496 kubelet[2824]: I1106 00:35:45.303465 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb6ef055-21f2-4f63-9dca-424807e07ebf-config\") pod \"goldmane-666569f655-c5bfl\" (UID: \"cb6ef055-21f2-4f63-9dca-424807e07ebf\") " pod="calico-system/goldmane-666569f655-c5bfl" Nov 6 00:35:45.303496 kubelet[2824]: I1106 00:35:45.303488 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/cb6ef055-21f2-4f63-9dca-424807e07ebf-goldmane-key-pair\") pod \"goldmane-666569f655-c5bfl\" (UID: \"cb6ef055-21f2-4f63-9dca-424807e07ebf\") " pod="calico-system/goldmane-666569f655-c5bfl" Nov 6 00:35:45.303496 kubelet[2824]: I1106 00:35:45.303507 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2s9z\" (UniqueName: \"kubernetes.io/projected/cb6ef055-21f2-4f63-9dca-424807e07ebf-kube-api-access-f2s9z\") pod \"goldmane-666569f655-c5bfl\" (UID: \"cb6ef055-21f2-4f63-9dca-424807e07ebf\") " pod="calico-system/goldmane-666569f655-c5bfl" Nov 6 00:35:45.404542 kubelet[2824]: I1106 00:35:45.404466 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/006a087d-1905-41d2-83ba-5643fdd121c4-config-volume\") pod \"coredns-674b8bbfcf-msnpp\" (UID: \"006a087d-1905-41d2-83ba-5643fdd121c4\") " pod="kube-system/coredns-674b8bbfcf-msnpp" Nov 6 00:35:45.404542 kubelet[2824]: I1106 00:35:45.404527 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jhwr\" (UniqueName: \"kubernetes.io/projected/006a087d-1905-41d2-83ba-5643fdd121c4-kube-api-access-8jhwr\") pod \"coredns-674b8bbfcf-msnpp\" (UID: \"006a087d-1905-41d2-83ba-5643fdd121c4\") " pod="kube-system/coredns-674b8bbfcf-msnpp" Nov 6 00:35:45.409471 containerd[1634]: time="2025-11-06T00:35:45.409420934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647c87d985-2h5ss,Uid:37bfba89-7ef7-48f7-8ad4-1de208225932,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:35:45.521470 systemd[1]: Created slice kubepods-burstable-pod8aef6d9f_1e85_431a_9981_150f9bb87c5d.slice - libcontainer container kubepods-burstable-pod8aef6d9f_1e85_431a_9981_150f9bb87c5d.slice. 
Nov 6 00:35:45.605812 kubelet[2824]: I1106 00:35:45.605764 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8aef6d9f-1e85-431a-9981-150f9bb87c5d-config-volume\") pod \"coredns-674b8bbfcf-mj9bx\" (UID: \"8aef6d9f-1e85-431a-9981-150f9bb87c5d\") " pod="kube-system/coredns-674b8bbfcf-mj9bx" Nov 6 00:35:45.605812 kubelet[2824]: I1106 00:35:45.605802 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2hsb\" (UniqueName: \"kubernetes.io/projected/8aef6d9f-1e85-431a-9981-150f9bb87c5d-kube-api-access-w2hsb\") pod \"coredns-674b8bbfcf-mj9bx\" (UID: \"8aef6d9f-1e85-431a-9981-150f9bb87c5d\") " pod="kube-system/coredns-674b8bbfcf-mj9bx" Nov 6 00:35:45.721256 containerd[1634]: time="2025-11-06T00:35:45.721192479Z" level=error msg="Failed to destroy network for sandbox \"7bdd0e33b6b29e1a27d8c8154dc445616f482037704a8cc61999de1dc5446769\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:45.782339 containerd[1634]: time="2025-11-06T00:35:45.782179101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-c5bfl,Uid:cb6ef055-21f2-4f63-9dca-424807e07ebf,Namespace:calico-system,Attempt:0,}" Nov 6 00:35:45.825538 kubelet[2824]: E1106 00:35:45.825470 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:45.826380 containerd[1634]: time="2025-11-06T00:35:45.826334225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mj9bx,Uid:8aef6d9f-1e85-431a-9981-150f9bb87c5d,Namespace:kube-system,Attempt:0,}" Nov 6 00:35:45.887256 kubelet[2824]: E1106 00:35:45.887204 2824 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:45.887929 containerd[1634]: time="2025-11-06T00:35:45.887878524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-msnpp,Uid:006a087d-1905-41d2-83ba-5643fdd121c4,Namespace:kube-system,Attempt:0,}" Nov 6 00:35:45.902081 containerd[1634]: time="2025-11-06T00:35:45.902001047Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647c87d985-dskgb,Uid:58f56521-b0ee-46b1-8476-68ff3e34496b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bdd0e33b6b29e1a27d8c8154dc445616f482037704a8cc61999de1dc5446769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:45.902606 kubelet[2824]: E1106 00:35:45.902540 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bdd0e33b6b29e1a27d8c8154dc445616f482037704a8cc61999de1dc5446769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:45.902752 kubelet[2824]: E1106 00:35:45.902658 2824 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bdd0e33b6b29e1a27d8c8154dc445616f482037704a8cc61999de1dc5446769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-647c87d985-dskgb" Nov 6 00:35:45.902752 kubelet[2824]: E1106 00:35:45.902691 2824 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bdd0e33b6b29e1a27d8c8154dc445616f482037704a8cc61999de1dc5446769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-647c87d985-dskgb" Nov 6 00:35:45.902868 kubelet[2824]: E1106 00:35:45.902774 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-647c87d985-dskgb_calico-apiserver(58f56521-b0ee-46b1-8476-68ff3e34496b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-647c87d985-dskgb_calico-apiserver(58f56521-b0ee-46b1-8476-68ff3e34496b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bdd0e33b6b29e1a27d8c8154dc445616f482037704a8cc61999de1dc5446769\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-647c87d985-dskgb" podUID="58f56521-b0ee-46b1-8476-68ff3e34496b" Nov 6 00:35:46.021150 containerd[1634]: time="2025-11-06T00:35:46.021069229Z" level=error msg="Failed to destroy network for sandbox \"ee600cf40b486406f9767a584b2c5196de0b090a6e98a5fe13878039343496d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:46.025233 systemd[1]: run-netns-cni\x2dd250a0f0\x2d0f59\x2dec29\x2dff17\x2df1c734a17b02.mount: Deactivated successfully. 
Nov 6 00:35:46.034412 containerd[1634]: time="2025-11-06T00:35:46.033752402Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d877dd957-pl5hq,Uid:92f7c50a-0661-49a3-b7e2-4ee539768f1e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee600cf40b486406f9767a584b2c5196de0b090a6e98a5fe13878039343496d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:46.035657 kubelet[2824]: E1106 00:35:46.034914 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee600cf40b486406f9767a584b2c5196de0b090a6e98a5fe13878039343496d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:46.035657 kubelet[2824]: E1106 00:35:46.035309 2824 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee600cf40b486406f9767a584b2c5196de0b090a6e98a5fe13878039343496d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d877dd957-pl5hq" Nov 6 00:35:46.035657 kubelet[2824]: E1106 00:35:46.035343 2824 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee600cf40b486406f9767a584b2c5196de0b090a6e98a5fe13878039343496d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d877dd957-pl5hq" 
Nov 6 00:35:46.037009 kubelet[2824]: E1106 00:35:46.036217 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-d877dd957-pl5hq_calico-system(92f7c50a-0661-49a3-b7e2-4ee539768f1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-d877dd957-pl5hq_calico-system(92f7c50a-0661-49a3-b7e2-4ee539768f1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee600cf40b486406f9767a584b2c5196de0b090a6e98a5fe13878039343496d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-d877dd957-pl5hq" podUID="92f7c50a-0661-49a3-b7e2-4ee539768f1e" Nov 6 00:35:46.042744 containerd[1634]: time="2025-11-06T00:35:46.042325004Z" level=error msg="Failed to destroy network for sandbox \"c9c92aa1e3ecf26172beec015a1013925043c69d535db5a15607ed1323bb5f93\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:46.046297 systemd[1]: run-netns-cni\x2dec5e562e\x2d8009\x2d3710\x2d8257\x2d3e32474c42f2.mount: Deactivated successfully. 
Nov 6 00:35:46.049211 containerd[1634]: time="2025-11-06T00:35:46.049154806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-c5bfl,Uid:cb6ef055-21f2-4f63-9dca-424807e07ebf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9c92aa1e3ecf26172beec015a1013925043c69d535db5a15607ed1323bb5f93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:46.050654 kubelet[2824]: E1106 00:35:46.050313 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9c92aa1e3ecf26172beec015a1013925043c69d535db5a15607ed1323bb5f93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:46.050654 kubelet[2824]: E1106 00:35:46.050436 2824 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9c92aa1e3ecf26172beec015a1013925043c69d535db5a15607ed1323bb5f93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-c5bfl" Nov 6 00:35:46.050654 kubelet[2824]: E1106 00:35:46.050464 2824 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9c92aa1e3ecf26172beec015a1013925043c69d535db5a15607ed1323bb5f93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-666569f655-c5bfl" Nov 6 00:35:46.051558 kubelet[2824]: E1106 00:35:46.050540 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-c5bfl_calico-system(cb6ef055-21f2-4f63-9dca-424807e07ebf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-c5bfl_calico-system(cb6ef055-21f2-4f63-9dca-424807e07ebf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9c92aa1e3ecf26172beec015a1013925043c69d535db5a15607ed1323bb5f93\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-c5bfl" podUID="cb6ef055-21f2-4f63-9dca-424807e07ebf" Nov 6 00:35:46.053276 containerd[1634]: time="2025-11-06T00:35:46.053246162Z" level=error msg="Failed to destroy network for sandbox \"35dc184844a9eea1d40dc279ee4b24bc0e67e41dcd1fbe5b8d1369432d77180b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:46.055109 containerd[1634]: time="2025-11-06T00:35:46.054910354Z" level=error msg="Failed to destroy network for sandbox \"4fe28325dc0e5ef222ea26cdde232d5af9816f0bc800834d0f6650e36d8389bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:46.058995 systemd[1]: run-netns-cni\x2d586f6728\x2dcfc2\x2dce58\x2d10a2\x2d58b89b56a813.mount: Deactivated successfully. Nov 6 00:35:46.059103 systemd[1]: run-netns-cni\x2dd8fdddff\x2d160b\x2dc2c6\x2d3b45\x2d370a22b13a09.mount: Deactivated successfully. 
Nov 6 00:35:46.062447 containerd[1634]: time="2025-11-06T00:35:46.062409742Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75ddbfb7b-znt4c,Uid:30a2a173-30b9-41b2-8ef6-9137cb1fe89a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"35dc184844a9eea1d40dc279ee4b24bc0e67e41dcd1fbe5b8d1369432d77180b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:46.062905 kubelet[2824]: E1106 00:35:46.062869 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35dc184844a9eea1d40dc279ee4b24bc0e67e41dcd1fbe5b8d1369432d77180b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:46.063045 kubelet[2824]: E1106 00:35:46.063027 2824 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35dc184844a9eea1d40dc279ee4b24bc0e67e41dcd1fbe5b8d1369432d77180b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75ddbfb7b-znt4c" Nov 6 00:35:46.063130 kubelet[2824]: E1106 00:35:46.063114 2824 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35dc184844a9eea1d40dc279ee4b24bc0e67e41dcd1fbe5b8d1369432d77180b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-75ddbfb7b-znt4c" Nov 6 00:35:46.063249 kubelet[2824]: E1106 00:35:46.063222 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75ddbfb7b-znt4c_calico-system(30a2a173-30b9-41b2-8ef6-9137cb1fe89a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75ddbfb7b-znt4c_calico-system(30a2a173-30b9-41b2-8ef6-9137cb1fe89a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35dc184844a9eea1d40dc279ee4b24bc0e67e41dcd1fbe5b8d1369432d77180b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75ddbfb7b-znt4c" podUID="30a2a173-30b9-41b2-8ef6-9137cb1fe89a" Nov 6 00:35:46.066411 containerd[1634]: time="2025-11-06T00:35:46.066364762Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647c87d985-2h5ss,Uid:37bfba89-7ef7-48f7-8ad4-1de208225932,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fe28325dc0e5ef222ea26cdde232d5af9816f0bc800834d0f6650e36d8389bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:46.066553 kubelet[2824]: E1106 00:35:46.066523 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fe28325dc0e5ef222ea26cdde232d5af9816f0bc800834d0f6650e36d8389bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:46.066618 kubelet[2824]: E1106 00:35:46.066563 
2824 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fe28325dc0e5ef222ea26cdde232d5af9816f0bc800834d0f6650e36d8389bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-647c87d985-2h5ss" Nov 6 00:35:46.066618 kubelet[2824]: E1106 00:35:46.066582 2824 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fe28325dc0e5ef222ea26cdde232d5af9816f0bc800834d0f6650e36d8389bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-647c87d985-2h5ss" Nov 6 00:35:46.067445 containerd[1634]: time="2025-11-06T00:35:46.067408299Z" level=error msg="Failed to destroy network for sandbox \"aa2440688c4ad5ae1eaaceb1070797c21d96575a60398fe0c9969bd101eba119\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:46.067727 kubelet[2824]: E1106 00:35:46.066626 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-647c87d985-2h5ss_calico-apiserver(37bfba89-7ef7-48f7-8ad4-1de208225932)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-647c87d985-2h5ss_calico-apiserver(37bfba89-7ef7-48f7-8ad4-1de208225932)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4fe28325dc0e5ef222ea26cdde232d5af9816f0bc800834d0f6650e36d8389bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-647c87d985-2h5ss" podUID="37bfba89-7ef7-48f7-8ad4-1de208225932" Nov 6 00:35:46.070744 systemd[1]: run-netns-cni\x2d93450d49\x2db61c\x2d40af\x2df872\x2d51adcdeef8aa.mount: Deactivated successfully. Nov 6 00:35:46.073433 containerd[1634]: time="2025-11-06T00:35:46.073393608Z" level=error msg="Failed to destroy network for sandbox \"30f5418cf311cee4a7899a3f13346877bdb419c68fd31067f94ba8b996f64784\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:46.073659 containerd[1634]: time="2025-11-06T00:35:46.073591819Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-msnpp,Uid:006a087d-1905-41d2-83ba-5643fdd121c4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa2440688c4ad5ae1eaaceb1070797c21d96575a60398fe0c9969bd101eba119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:46.073866 kubelet[2824]: E1106 00:35:46.073833 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa2440688c4ad5ae1eaaceb1070797c21d96575a60398fe0c9969bd101eba119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:46.073944 kubelet[2824]: E1106 00:35:46.073883 2824 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa2440688c4ad5ae1eaaceb1070797c21d96575a60398fe0c9969bd101eba119\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-msnpp" Nov 6 00:35:46.073944 kubelet[2824]: E1106 00:35:46.073901 2824 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa2440688c4ad5ae1eaaceb1070797c21d96575a60398fe0c9969bd101eba119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-msnpp" Nov 6 00:35:46.074017 kubelet[2824]: E1106 00:35:46.073944 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-msnpp_kube-system(006a087d-1905-41d2-83ba-5643fdd121c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-msnpp_kube-system(006a087d-1905-41d2-83ba-5643fdd121c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa2440688c4ad5ae1eaaceb1070797c21d96575a60398fe0c9969bd101eba119\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-msnpp" podUID="006a087d-1905-41d2-83ba-5643fdd121c4" Nov 6 00:35:46.076125 containerd[1634]: time="2025-11-06T00:35:46.076041956Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mj9bx,Uid:8aef6d9f-1e85-431a-9981-150f9bb87c5d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f5418cf311cee4a7899a3f13346877bdb419c68fd31067f94ba8b996f64784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:46.076251 kubelet[2824]: E1106 00:35:46.076213 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f5418cf311cee4a7899a3f13346877bdb419c68fd31067f94ba8b996f64784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:46.076316 kubelet[2824]: E1106 00:35:46.076262 2824 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f5418cf311cee4a7899a3f13346877bdb419c68fd31067f94ba8b996f64784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mj9bx" Nov 6 00:35:46.076316 kubelet[2824]: E1106 00:35:46.076282 2824 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f5418cf311cee4a7899a3f13346877bdb419c68fd31067f94ba8b996f64784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mj9bx" Nov 6 00:35:46.076385 kubelet[2824]: E1106 00:35:46.076331 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mj9bx_kube-system(8aef6d9f-1e85-431a-9981-150f9bb87c5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mj9bx_kube-system(8aef6d9f-1e85-431a-9981-150f9bb87c5d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"30f5418cf311cee4a7899a3f13346877bdb419c68fd31067f94ba8b996f64784\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mj9bx" podUID="8aef6d9f-1e85-431a-9981-150f9bb87c5d" Nov 6 00:35:47.007827 systemd[1]: run-netns-cni\x2d4805811e\x2d6875\x2dbe40\x2d8003\x2defe184333da7.mount: Deactivated successfully. Nov 6 00:35:55.207668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1105391721.mount: Deactivated successfully. Nov 6 00:35:56.997580 containerd[1634]: time="2025-11-06T00:35:56.997452102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:35:57.004701 containerd[1634]: time="2025-11-06T00:35:57.004361572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 6 00:35:57.009378 containerd[1634]: time="2025-11-06T00:35:57.009327277Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:35:57.012566 containerd[1634]: time="2025-11-06T00:35:57.012478616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:35:57.013341 containerd[1634]: time="2025-11-06T00:35:57.013295750Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 13.429230398s" Nov 6 
00:35:57.013341 containerd[1634]: time="2025-11-06T00:35:57.013329784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 6 00:35:57.048912 containerd[1634]: time="2025-11-06T00:35:57.048838315Z" level=info msg="CreateContainer within sandbox \"5e25943d57060e9afb9cfcf5e5494ad12650796a45acaca04a58143a0907af90\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 6 00:35:57.319370 containerd[1634]: time="2025-11-06T00:35:57.318022522Z" level=info msg="Container 65234d042b1998c0205835bdb78a9255105b49184baa46398249ee98616d0a99: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:35:57.331066 containerd[1634]: time="2025-11-06T00:35:57.331021784Z" level=info msg="CreateContainer within sandbox \"5e25943d57060e9afb9cfcf5e5494ad12650796a45acaca04a58143a0907af90\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"65234d042b1998c0205835bdb78a9255105b49184baa46398249ee98616d0a99\"" Nov 6 00:35:57.331681 containerd[1634]: time="2025-11-06T00:35:57.331603656Z" level=info msg="StartContainer for \"65234d042b1998c0205835bdb78a9255105b49184baa46398249ee98616d0a99\"" Nov 6 00:35:57.333020 containerd[1634]: time="2025-11-06T00:35:57.332993393Z" level=info msg="connecting to shim 65234d042b1998c0205835bdb78a9255105b49184baa46398249ee98616d0a99" address="unix:///run/containerd/s/af27c412662fafe7c8011099e85052605dff1e97b4c03bed69cd3c035903803a" protocol=ttrpc version=3 Nov 6 00:35:57.366769 systemd[1]: Started cri-containerd-65234d042b1998c0205835bdb78a9255105b49184baa46398249ee98616d0a99.scope - libcontainer container 65234d042b1998c0205835bdb78a9255105b49184baa46398249ee98616d0a99. 
Nov 6 00:35:57.401368 containerd[1634]: time="2025-11-06T00:35:57.401321833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xp8pl,Uid:299ba27c-7f4c-4b4c-bf27-d7e11dc57242,Namespace:calico-system,Attempt:0,}" Nov 6 00:35:57.401816 containerd[1634]: time="2025-11-06T00:35:57.401797024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75ddbfb7b-znt4c,Uid:30a2a173-30b9-41b2-8ef6-9137cb1fe89a,Namespace:calico-system,Attempt:0,}" Nov 6 00:35:57.436821 containerd[1634]: time="2025-11-06T00:35:57.435576944Z" level=info msg="StartContainer for \"65234d042b1998c0205835bdb78a9255105b49184baa46398249ee98616d0a99\" returns successfully" Nov 6 00:35:57.469040 containerd[1634]: time="2025-11-06T00:35:57.468882433Z" level=error msg="Failed to destroy network for sandbox \"63be960cc9c09ff1712fe39af8fdf12f02f05907a9946f1e7b8d967777489c5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:57.474256 containerd[1634]: time="2025-11-06T00:35:57.474188356Z" level=error msg="Failed to destroy network for sandbox \"ec9a09413908b72a9d9f9d176d86e7f57c9e69e7b1f3e70f621632af3c0e6352\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:57.546597 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 6 00:35:57.692250 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 6 00:35:57.693563 containerd[1634]: time="2025-11-06T00:35:57.693499378Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75ddbfb7b-znt4c,Uid:30a2a173-30b9-41b2-8ef6-9137cb1fe89a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"63be960cc9c09ff1712fe39af8fdf12f02f05907a9946f1e7b8d967777489c5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:57.701521 kubelet[2824]: E1106 00:35:57.701472 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:57.710265 kubelet[2824]: E1106 00:35:57.710059 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63be960cc9c09ff1712fe39af8fdf12f02f05907a9946f1e7b8d967777489c5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:57.710265 kubelet[2824]: E1106 00:35:57.710141 2824 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63be960cc9c09ff1712fe39af8fdf12f02f05907a9946f1e7b8d967777489c5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75ddbfb7b-znt4c" Nov 6 00:35:57.710265 kubelet[2824]: E1106 00:35:57.710164 2824 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"63be960cc9c09ff1712fe39af8fdf12f02f05907a9946f1e7b8d967777489c5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75ddbfb7b-znt4c" Nov 6 00:35:57.710519 kubelet[2824]: E1106 00:35:57.710212 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75ddbfb7b-znt4c_calico-system(30a2a173-30b9-41b2-8ef6-9137cb1fe89a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75ddbfb7b-znt4c_calico-system(30a2a173-30b9-41b2-8ef6-9137cb1fe89a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63be960cc9c09ff1712fe39af8fdf12f02f05907a9946f1e7b8d967777489c5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75ddbfb7b-znt4c" podUID="30a2a173-30b9-41b2-8ef6-9137cb1fe89a" Nov 6 00:35:57.822210 containerd[1634]: time="2025-11-06T00:35:57.822138180Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xp8pl,Uid:299ba27c-7f4c-4b4c-bf27-d7e11dc57242,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec9a09413908b72a9d9f9d176d86e7f57c9e69e7b1f3e70f621632af3c0e6352\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:57.822821 kubelet[2824]: E1106 00:35:57.822453 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec9a09413908b72a9d9f9d176d86e7f57c9e69e7b1f3e70f621632af3c0e6352\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:35:57.822821 kubelet[2824]: E1106 00:35:57.822514 2824 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec9a09413908b72a9d9f9d176d86e7f57c9e69e7b1f3e70f621632af3c0e6352\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xp8pl" Nov 6 00:35:57.822821 kubelet[2824]: E1106 00:35:57.822542 2824 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec9a09413908b72a9d9f9d176d86e7f57c9e69e7b1f3e70f621632af3c0e6352\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xp8pl" Nov 6 00:35:57.823072 kubelet[2824]: E1106 00:35:57.822601 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xp8pl_calico-system(299ba27c-7f4c-4b4c-bf27-d7e11dc57242)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xp8pl_calico-system(299ba27c-7f4c-4b4c-bf27-d7e11dc57242)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec9a09413908b72a9d9f9d176d86e7f57c9e69e7b1f3e70f621632af3c0e6352\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xp8pl" podUID="299ba27c-7f4c-4b4c-bf27-d7e11dc57242" Nov 6 00:35:57.836917 kubelet[2824]: I1106 
00:35:57.836849 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-74zgq" podStartSLOduration=1.6564541130000001 podStartE2EDuration="28.836832111s" podCreationTimestamp="2025-11-06 00:35:29 +0000 UTC" firstStartedPulling="2025-11-06 00:35:29.834105379 +0000 UTC m=+19.741090445" lastFinishedPulling="2025-11-06 00:35:57.014483377 +0000 UTC m=+46.921468443" observedRunningTime="2025-11-06 00:35:57.777299208 +0000 UTC m=+47.684284274" watchObservedRunningTime="2025-11-06 00:35:57.836832111 +0000 UTC m=+47.743817177" Nov 6 00:35:57.955652 containerd[1634]: time="2025-11-06T00:35:57.955592123Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65234d042b1998c0205835bdb78a9255105b49184baa46398249ee98616d0a99\" id:\"a257f5b46a0f0ec9766424c0b666686e5a6bfb843b6c872ae690b313640cd94f\" pid:4006 exit_status:1 exited_at:{seconds:1762389357 nanos:955061909}" Nov 6 00:35:57.987144 kubelet[2824]: I1106 00:35:57.987073 2824 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/92f7c50a-0661-49a3-b7e2-4ee539768f1e-whisker-backend-key-pair\") pod \"92f7c50a-0661-49a3-b7e2-4ee539768f1e\" (UID: \"92f7c50a-0661-49a3-b7e2-4ee539768f1e\") " Nov 6 00:35:57.987144 kubelet[2824]: I1106 00:35:57.987113 2824 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92f7c50a-0661-49a3-b7e2-4ee539768f1e-whisker-ca-bundle\") pod \"92f7c50a-0661-49a3-b7e2-4ee539768f1e\" (UID: \"92f7c50a-0661-49a3-b7e2-4ee539768f1e\") " Nov 6 00:35:57.987144 kubelet[2824]: I1106 00:35:57.987136 2824 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7845\" (UniqueName: \"kubernetes.io/projected/92f7c50a-0661-49a3-b7e2-4ee539768f1e-kube-api-access-h7845\") pod \"92f7c50a-0661-49a3-b7e2-4ee539768f1e\" (UID: 
\"92f7c50a-0661-49a3-b7e2-4ee539768f1e\") " Nov 6 00:35:57.988080 kubelet[2824]: I1106 00:35:57.988005 2824 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92f7c50a-0661-49a3-b7e2-4ee539768f1e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "92f7c50a-0661-49a3-b7e2-4ee539768f1e" (UID: "92f7c50a-0661-49a3-b7e2-4ee539768f1e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 00:35:57.991970 kubelet[2824]: I1106 00:35:57.991738 2824 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92f7c50a-0661-49a3-b7e2-4ee539768f1e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "92f7c50a-0661-49a3-b7e2-4ee539768f1e" (UID: "92f7c50a-0661-49a3-b7e2-4ee539768f1e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 00:35:57.992786 kubelet[2824]: I1106 00:35:57.992753 2824 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92f7c50a-0661-49a3-b7e2-4ee539768f1e-kube-api-access-h7845" (OuterVolumeSpecName: "kube-api-access-h7845") pod "92f7c50a-0661-49a3-b7e2-4ee539768f1e" (UID: "92f7c50a-0661-49a3-b7e2-4ee539768f1e"). InnerVolumeSpecName "kube-api-access-h7845". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 00:35:58.020628 systemd[1]: var-lib-kubelet-pods-92f7c50a\x2d0661\x2d49a3\x2db7e2\x2d4ee539768f1e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh7845.mount: Deactivated successfully. Nov 6 00:35:58.020799 systemd[1]: var-lib-kubelet-pods-92f7c50a\x2d0661\x2d49a3\x2db7e2\x2d4ee539768f1e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 6 00:35:58.088204 kubelet[2824]: I1106 00:35:58.088137 2824 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/92f7c50a-0661-49a3-b7e2-4ee539768f1e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 6 00:35:58.088204 kubelet[2824]: I1106 00:35:58.088189 2824 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92f7c50a-0661-49a3-b7e2-4ee539768f1e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 6 00:35:58.088204 kubelet[2824]: I1106 00:35:58.088200 2824 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h7845\" (UniqueName: \"kubernetes.io/projected/92f7c50a-0661-49a3-b7e2-4ee539768f1e-kube-api-access-h7845\") on node \"localhost\" DevicePath \"\"" Nov 6 00:35:58.401374 containerd[1634]: time="2025-11-06T00:35:58.401171861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-c5bfl,Uid:cb6ef055-21f2-4f63-9dca-424807e07ebf,Namespace:calico-system,Attempt:0,}" Nov 6 00:35:58.403047 containerd[1634]: time="2025-11-06T00:35:58.401224440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647c87d985-dskgb,Uid:58f56521-b0ee-46b1-8476-68ff3e34496b,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:35:58.403047 containerd[1634]: time="2025-11-06T00:35:58.401171791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647c87d985-2h5ss,Uid:37bfba89-7ef7-48f7-8ad4-1de208225932,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:35:58.413558 systemd[1]: Removed slice kubepods-besteffort-pod92f7c50a_0661_49a3_b7e2_4ee539768f1e.slice - libcontainer container kubepods-besteffort-pod92f7c50a_0661_49a3_b7e2_4ee539768f1e.slice. 
Nov 6 00:35:58.714915 kubelet[2824]: E1106 00:35:58.714230 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:58.981578 systemd[1]: Created slice kubepods-besteffort-pod48c54e02_677b_4620_9d87_c389d0553835.slice - libcontainer container kubepods-besteffort-pod48c54e02_677b_4620_9d87_c389d0553835.slice. Nov 6 00:35:59.025415 kubelet[2824]: I1106 00:35:59.025027 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/48c54e02-677b-4620-9d87-c389d0553835-whisker-backend-key-pair\") pod \"whisker-647cc965d5-59nsd\" (UID: \"48c54e02-677b-4620-9d87-c389d0553835\") " pod="calico-system/whisker-647cc965d5-59nsd" Nov 6 00:35:59.025415 kubelet[2824]: I1106 00:35:59.025139 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48c54e02-677b-4620-9d87-c389d0553835-whisker-ca-bundle\") pod \"whisker-647cc965d5-59nsd\" (UID: \"48c54e02-677b-4620-9d87-c389d0553835\") " pod="calico-system/whisker-647cc965d5-59nsd" Nov 6 00:35:59.025415 kubelet[2824]: I1106 00:35:59.025168 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg62n\" (UniqueName: \"kubernetes.io/projected/48c54e02-677b-4620-9d87-c389d0553835-kube-api-access-hg62n\") pod \"whisker-647cc965d5-59nsd\" (UID: \"48c54e02-677b-4620-9d87-c389d0553835\") " pod="calico-system/whisker-647cc965d5-59nsd" Nov 6 00:35:59.236178 containerd[1634]: time="2025-11-06T00:35:59.234964450Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65234d042b1998c0205835bdb78a9255105b49184baa46398249ee98616d0a99\" id:\"b99fe7d8a215681abf107e86fa8fb11000af8c438558859ff389b46191169aef\" pid:4091 exit_status:1 
exited_at:{seconds:1762389359 nanos:233930130}" Nov 6 00:35:59.300598 containerd[1634]: time="2025-11-06T00:35:59.300488519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-647cc965d5-59nsd,Uid:48c54e02-677b-4620-9d87-c389d0553835,Namespace:calico-system,Attempt:0,}" Nov 6 00:35:59.302886 systemd-networkd[1529]: cali14475db6a38: Link UP Nov 6 00:35:59.306238 systemd-networkd[1529]: cali14475db6a38: Gained carrier Nov 6 00:35:59.403985 kubelet[2824]: E1106 00:35:59.402277 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:35:59.410222 containerd[1634]: time="2025-11-06T00:35:59.408995736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-msnpp,Uid:006a087d-1905-41d2-83ba-5643fdd121c4,Namespace:kube-system,Attempt:0,}" Nov 6 00:35:59.432607 containerd[1634]: 2025-11-06 00:35:58.632 [INFO][4054] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 00:35:59.432607 containerd[1634]: 2025-11-06 00:35:58.745 [INFO][4054] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--647c87d985--2h5ss-eth0 calico-apiserver-647c87d985- calico-apiserver 37bfba89-7ef7-48f7-8ad4-1de208225932 881 0 2025-11-06 00:35:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:647c87d985 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-647c87d985-2h5ss eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali14475db6a38 [] [] }} ContainerID="5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54" Namespace="calico-apiserver" Pod="calico-apiserver-647c87d985-2h5ss" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--647c87d985--2h5ss-" Nov 6 00:35:59.432607 containerd[1634]: 2025-11-06 00:35:58.746 [INFO][4054] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54" Namespace="calico-apiserver" Pod="calico-apiserver-647c87d985-2h5ss" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c87d985--2h5ss-eth0" Nov 6 00:35:59.432607 containerd[1634]: 2025-11-06 00:35:59.117 [INFO][4087] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54" HandleID="k8s-pod-network.5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54" Workload="localhost-k8s-calico--apiserver--647c87d985--2h5ss-eth0" Nov 6 00:35:59.433027 containerd[1634]: 2025-11-06 00:35:59.117 [INFO][4087] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54" HandleID="k8s-pod-network.5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54" Workload="localhost-k8s-calico--apiserver--647c87d985--2h5ss-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139b00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-647c87d985-2h5ss", "timestamp":"2025-11-06 00:35:59.11756364 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:35:59.433027 containerd[1634]: 2025-11-06 00:35:59.117 [INFO][4087] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:35:59.433027 containerd[1634]: 2025-11-06 00:35:59.122 [INFO][4087] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:35:59.433027 containerd[1634]: 2025-11-06 00:35:59.124 [INFO][4087] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:35:59.433027 containerd[1634]: 2025-11-06 00:35:59.180 [INFO][4087] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54" host="localhost" Nov 6 00:35:59.433027 containerd[1634]: 2025-11-06 00:35:59.202 [INFO][4087] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:35:59.433027 containerd[1634]: 2025-11-06 00:35:59.222 [INFO][4087] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:35:59.433027 containerd[1634]: 2025-11-06 00:35:59.231 [INFO][4087] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:35:59.433027 containerd[1634]: 2025-11-06 00:35:59.238 [INFO][4087] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:35:59.433027 containerd[1634]: 2025-11-06 00:35:59.238 [INFO][4087] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54" host="localhost" Nov 6 00:35:59.434123 containerd[1634]: 2025-11-06 00:35:59.241 [INFO][4087] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54 Nov 6 00:35:59.434123 containerd[1634]: 2025-11-06 00:35:59.258 [INFO][4087] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54" host="localhost" Nov 6 00:35:59.434123 containerd[1634]: 2025-11-06 00:35:59.271 [INFO][4087] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54" host="localhost" Nov 6 00:35:59.434123 containerd[1634]: 2025-11-06 00:35:59.271 [INFO][4087] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54" host="localhost" Nov 6 00:35:59.434123 containerd[1634]: 2025-11-06 00:35:59.272 [INFO][4087] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:35:59.434123 containerd[1634]: 2025-11-06 00:35:59.272 [INFO][4087] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54" HandleID="k8s-pod-network.5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54" Workload="localhost-k8s-calico--apiserver--647c87d985--2h5ss-eth0" Nov 6 00:35:59.434321 containerd[1634]: 2025-11-06 00:35:59.281 [INFO][4054] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54" Namespace="calico-apiserver" Pod="calico-apiserver-647c87d985-2h5ss" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c87d985--2h5ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--647c87d985--2h5ss-eth0", GenerateName:"calico-apiserver-647c87d985-", Namespace:"calico-apiserver", SelfLink:"", UID:"37bfba89-7ef7-48f7-8ad4-1de208225932", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 35, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647c87d985", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-647c87d985-2h5ss", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14475db6a38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:35:59.434408 containerd[1634]: 2025-11-06 00:35:59.284 [INFO][4054] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54" Namespace="calico-apiserver" Pod="calico-apiserver-647c87d985-2h5ss" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c87d985--2h5ss-eth0" Nov 6 00:35:59.434408 containerd[1634]: 2025-11-06 00:35:59.285 [INFO][4054] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14475db6a38 ContainerID="5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54" Namespace="calico-apiserver" Pod="calico-apiserver-647c87d985-2h5ss" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c87d985--2h5ss-eth0" Nov 6 00:35:59.434408 containerd[1634]: 2025-11-06 00:35:59.300 [INFO][4054] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54" Namespace="calico-apiserver" Pod="calico-apiserver-647c87d985-2h5ss" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c87d985--2h5ss-eth0" Nov 6 00:35:59.434515 containerd[1634]: 2025-11-06 00:35:59.301 [INFO][4054] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54" Namespace="calico-apiserver" Pod="calico-apiserver-647c87d985-2h5ss" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c87d985--2h5ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--647c87d985--2h5ss-eth0", GenerateName:"calico-apiserver-647c87d985-", Namespace:"calico-apiserver", SelfLink:"", UID:"37bfba89-7ef7-48f7-8ad4-1de208225932", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 35, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647c87d985", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54", Pod:"calico-apiserver-647c87d985-2h5ss", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14475db6a38", MAC:"ba:05:fb:87:5f:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:35:59.434592 containerd[1634]: 2025-11-06 00:35:59.398 [INFO][4054] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54" Namespace="calico-apiserver" Pod="calico-apiserver-647c87d985-2h5ss" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c87d985--2h5ss-eth0" Nov 6 00:35:59.488627 systemd-networkd[1529]: cali1d33e3b027a: Link UP Nov 6 00:35:59.492572 systemd-networkd[1529]: cali1d33e3b027a: Gained carrier Nov 6 00:35:59.549011 containerd[1634]: 2025-11-06 00:35:58.640 [INFO][4044] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 00:35:59.549011 containerd[1634]: 2025-11-06 00:35:58.747 [INFO][4044] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--647c87d985--dskgb-eth0 calico-apiserver-647c87d985- calico-apiserver 58f56521-b0ee-46b1-8476-68ff3e34496b 883 0 2025-11-06 00:35:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:647c87d985 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-647c87d985-dskgb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1d33e3b027a [] [] }} ContainerID="192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d" Namespace="calico-apiserver" Pod="calico-apiserver-647c87d985-dskgb" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c87d985--dskgb-" Nov 6 00:35:59.549011 containerd[1634]: 2025-11-06 00:35:58.747 [INFO][4044] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d" Namespace="calico-apiserver" Pod="calico-apiserver-647c87d985-dskgb" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c87d985--dskgb-eth0" Nov 6 00:35:59.549011 containerd[1634]: 2025-11-06 00:35:59.112 [INFO][4089] ipam/ipam_plugin.go 227: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d" HandleID="k8s-pod-network.192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d" Workload="localhost-k8s-calico--apiserver--647c87d985--dskgb-eth0" Nov 6 00:35:59.549419 containerd[1634]: 2025-11-06 00:35:59.118 [INFO][4089] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d" HandleID="k8s-pod-network.192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d" Workload="localhost-k8s-calico--apiserver--647c87d985--dskgb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f440), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-647c87d985-dskgb", "timestamp":"2025-11-06 00:35:59.112746205 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:35:59.549419 containerd[1634]: 2025-11-06 00:35:59.119 [INFO][4089] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:35:59.549419 containerd[1634]: 2025-11-06 00:35:59.272 [INFO][4089] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:35:59.549419 containerd[1634]: 2025-11-06 00:35:59.272 [INFO][4089] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:35:59.549419 containerd[1634]: 2025-11-06 00:35:59.291 [INFO][4089] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d" host="localhost" Nov 6 00:35:59.549419 containerd[1634]: 2025-11-06 00:35:59.309 [INFO][4089] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:35:59.549419 containerd[1634]: 2025-11-06 00:35:59.328 [INFO][4089] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:35:59.549419 containerd[1634]: 2025-11-06 00:35:59.372 [INFO][4089] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:35:59.549419 containerd[1634]: 2025-11-06 00:35:59.405 [INFO][4089] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:35:59.549419 containerd[1634]: 2025-11-06 00:35:59.405 [INFO][4089] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d" host="localhost" Nov 6 00:35:59.550228 containerd[1634]: 2025-11-06 00:35:59.416 [INFO][4089] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d Nov 6 00:35:59.550228 containerd[1634]: 2025-11-06 00:35:59.433 [INFO][4089] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d" host="localhost" Nov 6 00:35:59.550228 containerd[1634]: 2025-11-06 00:35:59.454 [INFO][4089] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d" host="localhost" Nov 6 00:35:59.550228 containerd[1634]: 2025-11-06 00:35:59.454 [INFO][4089] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d" host="localhost" Nov 6 00:35:59.550228 containerd[1634]: 2025-11-06 00:35:59.454 [INFO][4089] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:35:59.550228 containerd[1634]: 2025-11-06 00:35:59.454 [INFO][4089] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d" HandleID="k8s-pod-network.192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d" Workload="localhost-k8s-calico--apiserver--647c87d985--dskgb-eth0" Nov 6 00:35:59.550441 containerd[1634]: 2025-11-06 00:35:59.470 [INFO][4044] cni-plugin/k8s.go 418: Populated endpoint ContainerID="192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d" Namespace="calico-apiserver" Pod="calico-apiserver-647c87d985-dskgb" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c87d985--dskgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--647c87d985--dskgb-eth0", GenerateName:"calico-apiserver-647c87d985-", Namespace:"calico-apiserver", SelfLink:"", UID:"58f56521-b0ee-46b1-8476-68ff3e34496b", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 35, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647c87d985", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-647c87d985-dskgb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d33e3b027a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:35:59.550531 containerd[1634]: 2025-11-06 00:35:59.471 [INFO][4044] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d" Namespace="calico-apiserver" Pod="calico-apiserver-647c87d985-dskgb" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c87d985--dskgb-eth0" Nov 6 00:35:59.550531 containerd[1634]: 2025-11-06 00:35:59.471 [INFO][4044] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d33e3b027a ContainerID="192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d" Namespace="calico-apiserver" Pod="calico-apiserver-647c87d985-dskgb" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c87d985--dskgb-eth0" Nov 6 00:35:59.550531 containerd[1634]: 2025-11-06 00:35:59.496 [INFO][4044] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d" Namespace="calico-apiserver" Pod="calico-apiserver-647c87d985-dskgb" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c87d985--dskgb-eth0" Nov 6 00:35:59.552867 containerd[1634]: 2025-11-06 00:35:59.510 [INFO][4044] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d" Namespace="calico-apiserver" Pod="calico-apiserver-647c87d985-dskgb" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c87d985--dskgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--647c87d985--dskgb-eth0", GenerateName:"calico-apiserver-647c87d985-", Namespace:"calico-apiserver", SelfLink:"", UID:"58f56521-b0ee-46b1-8476-68ff3e34496b", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 35, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647c87d985", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d", Pod:"calico-apiserver-647c87d985-dskgb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d33e3b027a", MAC:"1e:4a:e6:b0:11:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:35:59.552991 containerd[1634]: 2025-11-06 00:35:59.538 [INFO][4044] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d" Namespace="calico-apiserver" Pod="calico-apiserver-647c87d985-dskgb" WorkloadEndpoint="localhost-k8s-calico--apiserver--647c87d985--dskgb-eth0" Nov 6 00:35:59.652131 systemd-networkd[1529]: calid6e60dd30c7: Link UP Nov 6 00:35:59.653781 systemd-networkd[1529]: calid6e60dd30c7: Gained carrier Nov 6 00:35:59.685987 containerd[1634]: time="2025-11-06T00:35:59.685925051Z" level=info msg="connecting to shim 5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54" address="unix:///run/containerd/s/854a13bff55eaa7788a65e552c3b2931c1ebd4a0a6194984ac091949047a4260" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:35:59.687410 containerd[1634]: time="2025-11-06T00:35:59.685925122Z" level=info msg="connecting to shim 192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d" address="unix:///run/containerd/s/f1e08fa50c08e71f57f7839d752d6a9bcd37825961236c5891dffd0c2275601c" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:35:59.710510 containerd[1634]: 2025-11-06 00:35:58.620 [INFO][4032] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 00:35:59.710510 containerd[1634]: 2025-11-06 00:35:58.745 [INFO][4032] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--c5bfl-eth0 goldmane-666569f655- calico-system cb6ef055-21f2-4f63-9dca-424807e07ebf 885 0 2025-11-06 00:35:27 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-c5bfl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid6e60dd30c7 [] [] }} ContainerID="3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42" Namespace="calico-system" Pod="goldmane-666569f655-c5bfl" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--c5bfl-" Nov 6 00:35:59.710510 containerd[1634]: 2025-11-06 00:35:58.746 [INFO][4032] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42" Namespace="calico-system" Pod="goldmane-666569f655-c5bfl" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--c5bfl-eth0" Nov 6 00:35:59.710510 containerd[1634]: 2025-11-06 00:35:59.112 [INFO][4084] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42" HandleID="k8s-pod-network.3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42" Workload="localhost-k8s-goldmane--666569f655--c5bfl-eth0" Nov 6 00:35:59.710875 containerd[1634]: 2025-11-06 00:35:59.119 [INFO][4084] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42" HandleID="k8s-pod-network.3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42" Workload="localhost-k8s-goldmane--666569f655--c5bfl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000193e10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-c5bfl", "timestamp":"2025-11-06 00:35:59.112381651 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:35:59.710875 containerd[1634]: 2025-11-06 00:35:59.119 [INFO][4084] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:35:59.710875 containerd[1634]: 2025-11-06 00:35:59.454 [INFO][4084] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:35:59.710875 containerd[1634]: 2025-11-06 00:35:59.455 [INFO][4084] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:35:59.710875 containerd[1634]: 2025-11-06 00:35:59.484 [INFO][4084] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42" host="localhost" Nov 6 00:35:59.710875 containerd[1634]: 2025-11-06 00:35:59.507 [INFO][4084] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:35:59.710875 containerd[1634]: 2025-11-06 00:35:59.545 [INFO][4084] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:35:59.710875 containerd[1634]: 2025-11-06 00:35:59.549 [INFO][4084] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:35:59.710875 containerd[1634]: 2025-11-06 00:35:59.561 [INFO][4084] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:35:59.710875 containerd[1634]: 2025-11-06 00:35:59.561 [INFO][4084] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42" host="localhost" Nov 6 00:35:59.711250 containerd[1634]: 2025-11-06 00:35:59.569 [INFO][4084] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42 Nov 6 00:35:59.711250 containerd[1634]: 2025-11-06 00:35:59.584 [INFO][4084] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42" host="localhost" Nov 6 00:35:59.711250 containerd[1634]: 2025-11-06 00:35:59.605 [INFO][4084] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42" host="localhost" Nov 6 00:35:59.711250 containerd[1634]: 2025-11-06 00:35:59.605 [INFO][4084] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42" host="localhost" Nov 6 00:35:59.711250 containerd[1634]: 2025-11-06 00:35:59.606 [INFO][4084] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:35:59.711250 containerd[1634]: 2025-11-06 00:35:59.606 [INFO][4084] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42" HandleID="k8s-pod-network.3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42" Workload="localhost-k8s-goldmane--666569f655--c5bfl-eth0" Nov 6 00:35:59.711433 containerd[1634]: 2025-11-06 00:35:59.616 [INFO][4032] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42" Namespace="calico-system" Pod="goldmane-666569f655-c5bfl" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--c5bfl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--c5bfl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"cb6ef055-21f2-4f63-9dca-424807e07ebf", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-c5bfl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid6e60dd30c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:35:59.711433 containerd[1634]: 2025-11-06 00:35:59.616 [INFO][4032] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42" Namespace="calico-system" Pod="goldmane-666569f655-c5bfl" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--c5bfl-eth0" Nov 6 00:35:59.711513 containerd[1634]: 2025-11-06 00:35:59.616 [INFO][4032] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid6e60dd30c7 ContainerID="3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42" Namespace="calico-system" Pod="goldmane-666569f655-c5bfl" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--c5bfl-eth0" Nov 6 00:35:59.711513 containerd[1634]: 2025-11-06 00:35:59.654 [INFO][4032] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42" Namespace="calico-system" Pod="goldmane-666569f655-c5bfl" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--c5bfl-eth0" Nov 6 00:35:59.711559 containerd[1634]: 2025-11-06 00:35:59.656 [INFO][4032] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42" Namespace="calico-system" Pod="goldmane-666569f655-c5bfl" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--c5bfl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--c5bfl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"cb6ef055-21f2-4f63-9dca-424807e07ebf", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42", Pod:"goldmane-666569f655-c5bfl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid6e60dd30c7", MAC:"a6:59:58:d5:c5:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:35:59.711609 containerd[1634]: 2025-11-06 00:35:59.700 [INFO][4032] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42" Namespace="calico-system" Pod="goldmane-666569f655-c5bfl" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--c5bfl-eth0" Nov 6 00:35:59.739960 systemd[1]: Started 
cri-containerd-192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d.scope - libcontainer container 192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d. Nov 6 00:35:59.757444 systemd[1]: Started cri-containerd-5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54.scope - libcontainer container 5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54. Nov 6 00:35:59.770540 containerd[1634]: time="2025-11-06T00:35:59.770221199Z" level=info msg="connecting to shim 3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42" address="unix:///run/containerd/s/309f2f47be3651782743cbfce613d9ff2c6ded5d05ad5f5f0bfdf53b326bb137" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:35:59.774825 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:35:59.787726 systemd-networkd[1529]: caliaa9625cef83: Link UP Nov 6 00:35:59.789451 systemd-networkd[1529]: caliaa9625cef83: Gained carrier Nov 6 00:35:59.814270 containerd[1634]: 2025-11-06 00:35:59.497 [INFO][4227] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 00:35:59.814270 containerd[1634]: 2025-11-06 00:35:59.538 [INFO][4227] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--647cc965d5--59nsd-eth0 whisker-647cc965d5- calico-system 48c54e02-677b-4620-9d87-c389d0553835 966 0 2025-11-06 00:35:58 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:647cc965d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-647cc965d5-59nsd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliaa9625cef83 [] [] }} ContainerID="b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4" Namespace="calico-system" Pod="whisker-647cc965d5-59nsd" 
WorkloadEndpoint="localhost-k8s-whisker--647cc965d5--59nsd-" Nov 6 00:35:59.814270 containerd[1634]: 2025-11-06 00:35:59.538 [INFO][4227] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4" Namespace="calico-system" Pod="whisker-647cc965d5-59nsd" WorkloadEndpoint="localhost-k8s-whisker--647cc965d5--59nsd-eth0" Nov 6 00:35:59.814270 containerd[1634]: 2025-11-06 00:35:59.648 [INFO][4255] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4" HandleID="k8s-pod-network.b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4" Workload="localhost-k8s-whisker--647cc965d5--59nsd-eth0" Nov 6 00:35:59.814625 containerd[1634]: 2025-11-06 00:35:59.648 [INFO][4255] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4" HandleID="k8s-pod-network.b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4" Workload="localhost-k8s-whisker--647cc965d5--59nsd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000502900), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-647cc965d5-59nsd", "timestamp":"2025-11-06 00:35:59.648045212 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:35:59.814625 containerd[1634]: 2025-11-06 00:35:59.648 [INFO][4255] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:35:59.814625 containerd[1634]: 2025-11-06 00:35:59.648 [INFO][4255] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:35:59.814625 containerd[1634]: 2025-11-06 00:35:59.648 [INFO][4255] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:35:59.814625 containerd[1634]: 2025-11-06 00:35:59.673 [INFO][4255] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4" host="localhost" Nov 6 00:35:59.814625 containerd[1634]: 2025-11-06 00:35:59.704 [INFO][4255] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:35:59.814625 containerd[1634]: 2025-11-06 00:35:59.723 [INFO][4255] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:35:59.814625 containerd[1634]: 2025-11-06 00:35:59.731 [INFO][4255] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:35:59.814625 containerd[1634]: 2025-11-06 00:35:59.737 [INFO][4255] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:35:59.814625 containerd[1634]: 2025-11-06 00:35:59.738 [INFO][4255] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4" host="localhost" Nov 6 00:35:59.815022 containerd[1634]: 2025-11-06 00:35:59.744 [INFO][4255] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4 Nov 6 00:35:59.815022 containerd[1634]: 2025-11-06 00:35:59.760 [INFO][4255] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4" host="localhost" Nov 6 00:35:59.815022 containerd[1634]: 2025-11-06 00:35:59.771 [INFO][4255] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4" host="localhost" Nov 6 00:35:59.815022 containerd[1634]: 2025-11-06 00:35:59.772 [INFO][4255] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4" host="localhost" Nov 6 00:35:59.815022 containerd[1634]: 2025-11-06 00:35:59.772 [INFO][4255] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:35:59.815022 containerd[1634]: 2025-11-06 00:35:59.772 [INFO][4255] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4" HandleID="k8s-pod-network.b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4" Workload="localhost-k8s-whisker--647cc965d5--59nsd-eth0" Nov 6 00:35:59.815211 containerd[1634]: 2025-11-06 00:35:59.780 [INFO][4227] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4" Namespace="calico-system" Pod="whisker-647cc965d5-59nsd" WorkloadEndpoint="localhost-k8s-whisker--647cc965d5--59nsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--647cc965d5--59nsd-eth0", GenerateName:"whisker-647cc965d5-", Namespace:"calico-system", SelfLink:"", UID:"48c54e02-677b-4620-9d87-c389d0553835", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 35, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"647cc965d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-647cc965d5-59nsd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliaa9625cef83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:35:59.815211 containerd[1634]: 2025-11-06 00:35:59.780 [INFO][4227] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4" Namespace="calico-system" Pod="whisker-647cc965d5-59nsd" WorkloadEndpoint="localhost-k8s-whisker--647cc965d5--59nsd-eth0" Nov 6 00:35:59.815338 containerd[1634]: 2025-11-06 00:35:59.780 [INFO][4227] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa9625cef83 ContainerID="b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4" Namespace="calico-system" Pod="whisker-647cc965d5-59nsd" WorkloadEndpoint="localhost-k8s-whisker--647cc965d5--59nsd-eth0" Nov 6 00:35:59.815338 containerd[1634]: 2025-11-06 00:35:59.788 [INFO][4227] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4" Namespace="calico-system" Pod="whisker-647cc965d5-59nsd" WorkloadEndpoint="localhost-k8s-whisker--647cc965d5--59nsd-eth0" Nov 6 00:35:59.816756 containerd[1634]: 2025-11-06 00:35:59.790 [INFO][4227] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4" Namespace="calico-system" Pod="whisker-647cc965d5-59nsd" 
WorkloadEndpoint="localhost-k8s-whisker--647cc965d5--59nsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--647cc965d5--59nsd-eth0", GenerateName:"whisker-647cc965d5-", Namespace:"calico-system", SelfLink:"", UID:"48c54e02-677b-4620-9d87-c389d0553835", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 35, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"647cc965d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4", Pod:"whisker-647cc965d5-59nsd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliaa9625cef83", MAC:"2a:b2:59:0b:62:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:35:59.816835 containerd[1634]: 2025-11-06 00:35:59.807 [INFO][4227] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4" Namespace="calico-system" Pod="whisker-647cc965d5-59nsd" WorkloadEndpoint="localhost-k8s-whisker--647cc965d5--59nsd-eth0" Nov 6 00:35:59.834104 systemd[1]: Started cri-containerd-3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42.scope 
- libcontainer container 3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42. Nov 6 00:35:59.847672 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:35:59.869840 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:35:59.922451 containerd[1634]: time="2025-11-06T00:35:59.922359372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647c87d985-dskgb,Uid:58f56521-b0ee-46b1-8476-68ff3e34496b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"192436b48dc84b5f6042f305ca4f77ebf90919f40864460538b46e0233823d6d\"" Nov 6 00:35:59.931718 containerd[1634]: time="2025-11-06T00:35:59.930796406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:35:59.958472 containerd[1634]: time="2025-11-06T00:35:59.958342403Z" level=info msg="connecting to shim b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4" address="unix:///run/containerd/s/2386690f0dfc00793de811924457a25e3f4177dfd0b3c88d42a3e55f331ee9eb" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:35:59.962786 containerd[1634]: time="2025-11-06T00:35:59.962723200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647c87d985-2h5ss,Uid:37bfba89-7ef7-48f7-8ad4-1de208225932,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5df06a088ce57ee8155d954642a2cb2a337e72bcca5a36b133c2dc9422dcee54\"" Nov 6 00:35:59.988273 containerd[1634]: time="2025-11-06T00:35:59.988199022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-c5bfl,Uid:cb6ef055-21f2-4f63-9dca-424807e07ebf,Namespace:calico-system,Attempt:0,} returns sandbox id \"3b430176be65643233990a4d37c4b96543d736ac4a66df01260d5cd1c83b8c42\"" Nov 6 00:36:00.043112 systemd[1]: Started cri-containerd-b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4.scope - libcontainer 
container b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4. Nov 6 00:36:00.081491 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:36:00.217350 containerd[1634]: time="2025-11-06T00:36:00.213584898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-647cc965d5-59nsd,Uid:48c54e02-677b-4620-9d87-c389d0553835,Namespace:calico-system,Attempt:0,} returns sandbox id \"b5ccf9a206c35101f3bff83c41f50bc3c7160b494bf023dee555d9c638cb7cc4\"" Nov 6 00:36:00.245110 systemd-networkd[1529]: cali1a29abece1d: Link UP Nov 6 00:36:00.248857 systemd-networkd[1529]: cali1a29abece1d: Gained carrier Nov 6 00:36:00.308034 containerd[1634]: time="2025-11-06T00:36:00.307140401Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:36:00.308526 containerd[1634]: 2025-11-06 00:35:59.907 [INFO][4416] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--msnpp-eth0 coredns-674b8bbfcf- kube-system 006a087d-1905-41d2-83ba-5643fdd121c4 890 0 2025-11-06 00:35:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-msnpp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1a29abece1d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968" Namespace="kube-system" Pod="coredns-674b8bbfcf-msnpp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--msnpp-" Nov 6 00:36:00.308526 containerd[1634]: 2025-11-06 00:35:59.910 [INFO][4416] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-msnpp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--msnpp-eth0" Nov 6 00:36:00.308526 containerd[1634]: 2025-11-06 00:36:00.011 [INFO][4456] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968" HandleID="k8s-pod-network.48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968" Workload="localhost-k8s-coredns--674b8bbfcf--msnpp-eth0" Nov 6 00:36:00.308729 containerd[1634]: 2025-11-06 00:36:00.011 [INFO][4456] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968" HandleID="k8s-pod-network.48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968" Workload="localhost-k8s-coredns--674b8bbfcf--msnpp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318270), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-msnpp", "timestamp":"2025-11-06 00:36:00.01127276 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:36:00.308729 containerd[1634]: 2025-11-06 00:36:00.011 [INFO][4456] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:36:00.308729 containerd[1634]: 2025-11-06 00:36:00.012 [INFO][4456] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:36:00.308729 containerd[1634]: 2025-11-06 00:36:00.012 [INFO][4456] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:36:00.308729 containerd[1634]: 2025-11-06 00:36:00.029 [INFO][4456] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968" host="localhost" Nov 6 00:36:00.308729 containerd[1634]: 2025-11-06 00:36:00.046 [INFO][4456] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:36:00.308729 containerd[1634]: 2025-11-06 00:36:00.059 [INFO][4456] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:36:00.308729 containerd[1634]: 2025-11-06 00:36:00.063 [INFO][4456] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:36:00.308729 containerd[1634]: 2025-11-06 00:36:00.070 [INFO][4456] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:36:00.308729 containerd[1634]: 2025-11-06 00:36:00.070 [INFO][4456] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968" host="localhost" Nov 6 00:36:00.309100 containerd[1634]: 2025-11-06 00:36:00.076 [INFO][4456] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968 Nov 6 00:36:00.309100 containerd[1634]: 2025-11-06 00:36:00.155 [INFO][4456] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968" host="localhost" Nov 6 00:36:00.309100 containerd[1634]: 2025-11-06 00:36:00.222 [INFO][4456] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968" host="localhost" Nov 6 00:36:00.309100 containerd[1634]: 2025-11-06 00:36:00.223 [INFO][4456] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968" host="localhost" Nov 6 00:36:00.309100 containerd[1634]: 2025-11-06 00:36:00.223 [INFO][4456] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:36:00.309100 containerd[1634]: 2025-11-06 00:36:00.223 [INFO][4456] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968" HandleID="k8s-pod-network.48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968" Workload="localhost-k8s-coredns--674b8bbfcf--msnpp-eth0" Nov 6 00:36:00.309273 containerd[1634]: 2025-11-06 00:36:00.237 [INFO][4416] cni-plugin/k8s.go 418: Populated endpoint ContainerID="48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968" Namespace="kube-system" Pod="coredns-674b8bbfcf-msnpp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--msnpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--msnpp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"006a087d-1905-41d2-83ba-5643fdd121c4", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 35, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-msnpp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1a29abece1d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:36:00.309393 containerd[1634]: 2025-11-06 00:36:00.238 [INFO][4416] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968" Namespace="kube-system" Pod="coredns-674b8bbfcf-msnpp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--msnpp-eth0" Nov 6 00:36:00.309393 containerd[1634]: 2025-11-06 00:36:00.238 [INFO][4416] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a29abece1d ContainerID="48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968" Namespace="kube-system" Pod="coredns-674b8bbfcf-msnpp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--msnpp-eth0" Nov 6 00:36:00.309393 containerd[1634]: 2025-11-06 00:36:00.247 [INFO][4416] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968" Namespace="kube-system" Pod="coredns-674b8bbfcf-msnpp" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--msnpp-eth0" Nov 6 00:36:00.309503 containerd[1634]: 2025-11-06 00:36:00.251 [INFO][4416] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968" Namespace="kube-system" Pod="coredns-674b8bbfcf-msnpp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--msnpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--msnpp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"006a087d-1905-41d2-83ba-5643fdd121c4", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 35, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968", Pod:"coredns-674b8bbfcf-msnpp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1a29abece1d", MAC:"5a:83:99:64:c7:e8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:36:00.309503 containerd[1634]: 2025-11-06 00:36:00.283 [INFO][4416] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968" Namespace="kube-system" Pod="coredns-674b8bbfcf-msnpp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--msnpp-eth0" Nov 6 00:36:00.316210 containerd[1634]: time="2025-11-06T00:36:00.316036887Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:36:00.316210 containerd[1634]: time="2025-11-06T00:36:00.316156812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:36:00.316466 kubelet[2824]: E1106 00:36:00.316341 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:36:00.316466 kubelet[2824]: E1106 00:36:00.316406 2824 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:36:00.319920 containerd[1634]: time="2025-11-06T00:36:00.319538915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:36:00.341858 kubelet[2824]: E1106 00:36:00.341726 2824 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lsb6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-647c87d985-dskgb_calico-apiserver(58f56521-b0ee-46b1-8476-68ff3e34496b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:36:00.346797 kubelet[2824]: E1106 00:36:00.346630 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647c87d985-dskgb" podUID="58f56521-b0ee-46b1-8476-68ff3e34496b" Nov 6 00:36:00.385979 containerd[1634]: time="2025-11-06T00:36:00.385564964Z" level=info msg="connecting to shim 48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968" 
address="unix:///run/containerd/s/d7882ee53c3437b37b67038c17a89b8331e08bf34078a493d101307cfe6ca634" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:36:00.407114 kubelet[2824]: E1106 00:36:00.407065 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:36:00.420426 containerd[1634]: time="2025-11-06T00:36:00.419804845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mj9bx,Uid:8aef6d9f-1e85-431a-9981-150f9bb87c5d,Namespace:kube-system,Attempt:0,}" Nov 6 00:36:00.426813 kubelet[2824]: I1106 00:36:00.426665 2824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92f7c50a-0661-49a3-b7e2-4ee539768f1e" path="/var/lib/kubelet/pods/92f7c50a-0661-49a3-b7e2-4ee539768f1e/volumes" Nov 6 00:36:00.466163 systemd[1]: Started cri-containerd-48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968.scope - libcontainer container 48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968. 
Nov 6 00:36:00.483756 systemd-networkd[1529]: vxlan.calico: Link UP Nov 6 00:36:00.483774 systemd-networkd[1529]: vxlan.calico: Gained carrier Nov 6 00:36:00.506231 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:36:00.596783 containerd[1634]: time="2025-11-06T00:36:00.596280052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-msnpp,Uid:006a087d-1905-41d2-83ba-5643fdd121c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968\"" Nov 6 00:36:00.605388 kubelet[2824]: E1106 00:36:00.604252 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:36:00.688409 containerd[1634]: time="2025-11-06T00:36:00.682893865Z" level=info msg="CreateContainer within sandbox \"48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:36:00.692070 containerd[1634]: time="2025-11-06T00:36:00.691997510Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:36:00.696075 containerd[1634]: time="2025-11-06T00:36:00.696018432Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:36:00.696302 containerd[1634]: time="2025-11-06T00:36:00.696265435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:36:00.700388 kubelet[2824]: E1106 00:36:00.698689 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:36:00.700388 kubelet[2824]: E1106 00:36:00.698766 2824 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:36:00.700388 kubelet[2824]: E1106 00:36:00.699038 2824 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-spzrv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-647c87d985-2h5ss_calico-apiserver(37bfba89-7ef7-48f7-8ad4-1de208225932): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:36:00.700758 containerd[1634]: time="2025-11-06T00:36:00.700663624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:36:00.701005 kubelet[2824]: E1106 00:36:00.700919 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647c87d985-2h5ss" podUID="37bfba89-7ef7-48f7-8ad4-1de208225932" Nov 6 00:36:00.730352 kubelet[2824]: E1106 00:36:00.730267 2824 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647c87d985-dskgb" podUID="58f56521-b0ee-46b1-8476-68ff3e34496b" Nov 6 00:36:00.731948 kubelet[2824]: E1106 00:36:00.731890 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647c87d985-2h5ss" podUID="37bfba89-7ef7-48f7-8ad4-1de208225932" Nov 6 00:36:00.747138 containerd[1634]: time="2025-11-06T00:36:00.747056329Z" level=info msg="Container 3d8cd1be2c116c321f368022609da93de7b6d796d32565f3f99f7dadeadef31a: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:36:00.764123 systemd-networkd[1529]: cali2b4c6791881: Link UP Nov 6 00:36:00.765840 systemd-networkd[1529]: cali2b4c6791881: Gained carrier Nov 6 00:36:00.802245 containerd[1634]: time="2025-11-06T00:36:00.801700740Z" level=info msg="CreateContainer within sandbox \"48304b0001997db60f710729b1218801ffa7439e58b83ceb6d12418ee3ce7968\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3d8cd1be2c116c321f368022609da93de7b6d796d32565f3f99f7dadeadef31a\"" Nov 6 00:36:00.803137 containerd[1634]: time="2025-11-06T00:36:00.803085297Z" level=info 
msg="StartContainer for \"3d8cd1be2c116c321f368022609da93de7b6d796d32565f3f99f7dadeadef31a\"" Nov 6 00:36:00.811387 containerd[1634]: time="2025-11-06T00:36:00.807452318Z" level=info msg="connecting to shim 3d8cd1be2c116c321f368022609da93de7b6d796d32565f3f99f7dadeadef31a" address="unix:///run/containerd/s/d7882ee53c3437b37b67038c17a89b8331e08bf34078a493d101307cfe6ca634" protocol=ttrpc version=3 Nov 6 00:36:00.867197 containerd[1634]: 2025-11-06 00:36:00.526 [INFO][4565] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--mj9bx-eth0 coredns-674b8bbfcf- kube-system 8aef6d9f-1e85-431a-9981-150f9bb87c5d 888 0 2025-11-06 00:35:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-mj9bx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2b4c6791881 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-mj9bx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mj9bx-" Nov 6 00:36:00.867197 containerd[1634]: 2025-11-06 00:36:00.526 [INFO][4565] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-mj9bx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mj9bx-eth0" Nov 6 00:36:00.867197 containerd[1634]: 2025-11-06 00:36:00.598 [INFO][4599] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb" HandleID="k8s-pod-network.d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb" Workload="localhost-k8s-coredns--674b8bbfcf--mj9bx-eth0" 
Nov 6 00:36:00.867197 containerd[1634]: 2025-11-06 00:36:00.598 [INFO][4599] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb" HandleID="k8s-pod-network.d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb" Workload="localhost-k8s-coredns--674b8bbfcf--mj9bx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000401d20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-mj9bx", "timestamp":"2025-11-06 00:36:00.598135743 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:36:00.867197 containerd[1634]: 2025-11-06 00:36:00.598 [INFO][4599] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:36:00.867197 containerd[1634]: 2025-11-06 00:36:00.598 [INFO][4599] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:36:00.867197 containerd[1634]: 2025-11-06 00:36:00.598 [INFO][4599] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:36:00.867197 containerd[1634]: 2025-11-06 00:36:00.646 [INFO][4599] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb" host="localhost" Nov 6 00:36:00.867197 containerd[1634]: 2025-11-06 00:36:00.666 [INFO][4599] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:36:00.867197 containerd[1634]: 2025-11-06 00:36:00.685 [INFO][4599] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:36:00.867197 containerd[1634]: 2025-11-06 00:36:00.695 [INFO][4599] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:36:00.867197 containerd[1634]: 2025-11-06 00:36:00.703 [INFO][4599] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:36:00.867197 containerd[1634]: 2025-11-06 00:36:00.703 [INFO][4599] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb" host="localhost" Nov 6 00:36:00.867197 containerd[1634]: 2025-11-06 00:36:00.711 [INFO][4599] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb Nov 6 00:36:00.867197 containerd[1634]: 2025-11-06 00:36:00.728 [INFO][4599] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb" host="localhost" Nov 6 00:36:00.867197 containerd[1634]: 2025-11-06 00:36:00.749 [INFO][4599] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb" host="localhost" Nov 6 00:36:00.867197 containerd[1634]: 2025-11-06 00:36:00.749 [INFO][4599] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb" host="localhost" Nov 6 00:36:00.867197 containerd[1634]: 2025-11-06 00:36:00.749 [INFO][4599] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:36:00.867197 containerd[1634]: 2025-11-06 00:36:00.749 [INFO][4599] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb" HandleID="k8s-pod-network.d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb" Workload="localhost-k8s-coredns--674b8bbfcf--mj9bx-eth0" Nov 6 00:36:00.871571 containerd[1634]: 2025-11-06 00:36:00.756 [INFO][4565] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-mj9bx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mj9bx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mj9bx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8aef6d9f-1e85-431a-9981-150f9bb87c5d", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 35, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-mj9bx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b4c6791881", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:36:00.871571 containerd[1634]: 2025-11-06 00:36:00.757 [INFO][4565] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-mj9bx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mj9bx-eth0" Nov 6 00:36:00.871571 containerd[1634]: 2025-11-06 00:36:00.757 [INFO][4565] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b4c6791881 ContainerID="d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-mj9bx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mj9bx-eth0" Nov 6 00:36:00.871571 containerd[1634]: 2025-11-06 00:36:00.766 [INFO][4565] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-mj9bx" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mj9bx-eth0" Nov 6 00:36:00.871571 containerd[1634]: 2025-11-06 00:36:00.769 [INFO][4565] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-mj9bx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mj9bx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mj9bx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8aef6d9f-1e85-431a-9981-150f9bb87c5d", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 35, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb", Pod:"coredns-674b8bbfcf-mj9bx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b4c6791881", MAC:"de:bc:2d:7c:3c:2c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:36:00.871571 containerd[1634]: 2025-11-06 00:36:00.859 [INFO][4565] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-mj9bx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mj9bx-eth0" Nov 6 00:36:00.886012 systemd[1]: Started cri-containerd-3d8cd1be2c116c321f368022609da93de7b6d796d32565f3f99f7dadeadef31a.scope - libcontainer container 3d8cd1be2c116c321f368022609da93de7b6d796d32565f3f99f7dadeadef31a. Nov 6 00:36:00.957404 containerd[1634]: time="2025-11-06T00:36:00.956686081Z" level=info msg="connecting to shim d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb" address="unix:///run/containerd/s/5f965970dfee783f84085ad79d3b29f4e3d0b438411820c8a2ce0fe635ec6a47" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:36:00.978777 systemd-networkd[1529]: cali1d33e3b027a: Gained IPv6LL Nov 6 00:36:01.004169 systemd[1]: Started cri-containerd-d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb.scope - libcontainer container d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb. 
Nov 6 00:36:01.017520 containerd[1634]: time="2025-11-06T00:36:01.017455109Z" level=info msg="StartContainer for \"3d8cd1be2c116c321f368022609da93de7b6d796d32565f3f99f7dadeadef31a\" returns successfully" Nov 6 00:36:01.033238 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:36:01.106021 systemd-networkd[1529]: cali14475db6a38: Gained IPv6LL Nov 6 00:36:01.118828 containerd[1634]: time="2025-11-06T00:36:01.118378961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mj9bx,Uid:8aef6d9f-1e85-431a-9981-150f9bb87c5d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb\"" Nov 6 00:36:01.128887 kubelet[2824]: E1106 00:36:01.128825 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:36:01.130139 containerd[1634]: time="2025-11-06T00:36:01.130039280Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:36:01.133917 containerd[1634]: time="2025-11-06T00:36:01.133822637Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:36:01.134079 containerd[1634]: time="2025-11-06T00:36:01.133936430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:36:01.134897 kubelet[2824]: E1106 00:36:01.134678 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:36:01.134897 kubelet[2824]: E1106 00:36:01.134862 2824 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:36:01.135535 kubelet[2824]: E1106 00:36:01.135473 2824 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2s9z,ReadOnly:true,MountPath:/var/
run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-c5bfl_calico-system(cb6ef055-21f2-4f63-9dca-424807e07ebf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:36:01.136617 containerd[1634]: time="2025-11-06T00:36:01.136192812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:36:01.138346 kubelet[2824]: E1106 00:36:01.138226 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c5bfl" podUID="cb6ef055-21f2-4f63-9dca-424807e07ebf" Nov 6 00:36:01.138777 containerd[1634]: time="2025-11-06T00:36:01.138739909Z" level=info msg="CreateContainer within sandbox \"d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:36:01.160044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount586224444.mount: Deactivated successfully. Nov 6 00:36:01.166288 containerd[1634]: time="2025-11-06T00:36:01.165352023Z" level=info msg="Container 0ba5375857d217d022c0b667c55f2fe7c6b5bf4956c1188d288ed6614fc99d39: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:36:01.193771 containerd[1634]: time="2025-11-06T00:36:01.193684272Z" level=info msg="CreateContainer within sandbox \"d26a47dfb88de55cbe4b5d9f002f59475c828ce380aeceed12839acbd669e7bb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0ba5375857d217d022c0b667c55f2fe7c6b5bf4956c1188d288ed6614fc99d39\"" Nov 6 00:36:01.195923 containerd[1634]: time="2025-11-06T00:36:01.195325702Z" level=info msg="StartContainer for \"0ba5375857d217d022c0b667c55f2fe7c6b5bf4956c1188d288ed6614fc99d39\"" Nov 6 00:36:01.197683 containerd[1634]: time="2025-11-06T00:36:01.197129404Z" level=info msg="connecting to shim 0ba5375857d217d022c0b667c55f2fe7c6b5bf4956c1188d288ed6614fc99d39" address="unix:///run/containerd/s/5f965970dfee783f84085ad79d3b29f4e3d0b438411820c8a2ce0fe635ec6a47" protocol=ttrpc version=3 Nov 6 00:36:01.234949 systemd-networkd[1529]: calid6e60dd30c7: Gained IPv6LL Nov 6 00:36:01.236457 systemd[1]: Started cri-containerd-0ba5375857d217d022c0b667c55f2fe7c6b5bf4956c1188d288ed6614fc99d39.scope - libcontainer container 
0ba5375857d217d022c0b667c55f2fe7c6b5bf4956c1188d288ed6614fc99d39. Nov 6 00:36:01.321932 containerd[1634]: time="2025-11-06T00:36:01.321743760Z" level=info msg="StartContainer for \"0ba5375857d217d022c0b667c55f2fe7c6b5bf4956c1188d288ed6614fc99d39\" returns successfully" Nov 6 00:36:01.501377 containerd[1634]: time="2025-11-06T00:36:01.501306314Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:36:01.618016 systemd-networkd[1529]: vxlan.calico: Gained IPv6LL Nov 6 00:36:01.618688 containerd[1634]: time="2025-11-06T00:36:01.618123115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:36:01.618822 containerd[1634]: time="2025-11-06T00:36:01.618765390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:36:01.619780 kubelet[2824]: E1106 00:36:01.619583 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:36:01.619780 kubelet[2824]: E1106 00:36:01.619707 2824 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:36:01.620251 kubelet[2824]: E1106 00:36:01.619923 2824 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f694ace724154c5498c9b1960140a79f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hg62n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-647cc965d5-59nsd_calico-system(48c54e02-677b-4620-9d87-c389d0553835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:36:01.624274 containerd[1634]: time="2025-11-06T00:36:01.624223016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 
00:36:01.683516 systemd-networkd[1529]: caliaa9625cef83: Gained IPv6LL Nov 6 00:36:01.737667 kubelet[2824]: E1106 00:36:01.737603 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:36:01.742781 kubelet[2824]: E1106 00:36:01.742397 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:36:01.743039 kubelet[2824]: E1106 00:36:01.742986 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647c87d985-2h5ss" podUID="37bfba89-7ef7-48f7-8ad4-1de208225932" Nov 6 00:36:01.743549 kubelet[2824]: E1106 00:36:01.743504 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647c87d985-dskgb" podUID="58f56521-b0ee-46b1-8476-68ff3e34496b" Nov 6 00:36:01.743549 kubelet[2824]: E1106 00:36:01.743512 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c5bfl" podUID="cb6ef055-21f2-4f63-9dca-424807e07ebf" Nov 6 00:36:02.012092 containerd[1634]: time="2025-11-06T00:36:02.011982823Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:36:02.067569 containerd[1634]: time="2025-11-06T00:36:02.067205077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:36:02.068005 containerd[1634]: time="2025-11-06T00:36:02.067477829Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:36:02.068727 kubelet[2824]: E1106 00:36:02.068301 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:36:02.068727 kubelet[2824]: E1106 00:36:02.068368 2824 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:36:02.068727 kubelet[2824]: E1106 00:36:02.068563 2824 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hg62n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Co
ntainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-647cc965d5-59nsd_calico-system(48c54e02-677b-4620-9d87-c389d0553835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:36:02.071727 kubelet[2824]: E1106 00:36:02.071607 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-647cc965d5-59nsd" podUID="48c54e02-677b-4620-9d87-c389d0553835" Nov 6 00:36:02.258083 systemd-networkd[1529]: cali1a29abece1d: Gained IPv6LL Nov 6 00:36:02.308853 kubelet[2824]: I1106 00:36:02.307266 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mj9bx" podStartSLOduration=45.30722447 podStartE2EDuration="45.30722447s" podCreationTimestamp="2025-11-06 00:35:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:36:02.079885009 +0000 UTC m=+51.986870085" watchObservedRunningTime="2025-11-06 00:36:02.30722447 +0000 UTC m=+52.214209536" Nov 6 00:36:02.404631 kubelet[2824]: I1106 00:36:02.404225 2824 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-msnpp" podStartSLOduration=45.404203291 podStartE2EDuration="45.404203291s" podCreationTimestamp="2025-11-06 00:35:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:36:02.30980466 +0000 UTC m=+52.216789726" watchObservedRunningTime="2025-11-06 00:36:02.404203291 +0000 UTC m=+52.311188357" Nov 6 00:36:02.641933 systemd-networkd[1529]: cali2b4c6791881: Gained IPv6LL Nov 6 00:36:02.749148 kubelet[2824]: E1106 00:36:02.746873 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:36:02.749148 kubelet[2824]: E1106 00:36:02.747411 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:36:02.749148 kubelet[2824]: E1106 00:36:02.748413 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found\"]" pod="calico-system/whisker-647cc965d5-59nsd" podUID="48c54e02-677b-4620-9d87-c389d0553835" Nov 6 00:36:03.750305 kubelet[2824]: E1106 00:36:03.750224 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:36:03.752387 kubelet[2824]: E1106 00:36:03.752263 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:36:04.755794 kubelet[2824]: E1106 00:36:04.754597 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:36:05.090563 systemd[1]: Started sshd@9-10.0.0.137:22-10.0.0.1:45236.service - OpenSSH per-connection server daemon (10.0.0.1:45236). Nov 6 00:36:05.297983 sshd[4806]: Accepted publickey for core from 10.0.0.1 port 45236 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:36:05.301784 sshd-session[4806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:36:05.326842 systemd-logind[1613]: New session 10 of user core. Nov 6 00:36:05.333955 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 6 00:36:05.683193 sshd[4809]: Connection closed by 10.0.0.1 port 45236 Nov 6 00:36:05.685654 sshd-session[4806]: pam_unix(sshd:session): session closed for user core Nov 6 00:36:05.700363 systemd[1]: sshd@9-10.0.0.137:22-10.0.0.1:45236.service: Deactivated successfully. Nov 6 00:36:05.704456 systemd[1]: session-10.scope: Deactivated successfully. Nov 6 00:36:05.708488 systemd-logind[1613]: Session 10 logged out. Waiting for processes to exit. Nov 6 00:36:05.718257 systemd-logind[1613]: Removed session 10. 
Nov 6 00:36:09.402072 containerd[1634]: time="2025-11-06T00:36:09.401790155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75ddbfb7b-znt4c,Uid:30a2a173-30b9-41b2-8ef6-9137cb1fe89a,Namespace:calico-system,Attempt:0,}" Nov 6 00:36:09.576398 systemd-networkd[1529]: cali91a868ce9d1: Link UP Nov 6 00:36:09.577421 systemd-networkd[1529]: cali91a868ce9d1: Gained carrier Nov 6 00:36:09.609073 containerd[1634]: 2025-11-06 00:36:09.465 [INFO][4832] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--75ddbfb7b--znt4c-eth0 calico-kube-controllers-75ddbfb7b- calico-system 30a2a173-30b9-41b2-8ef6-9137cb1fe89a 878 0 2025-11-06 00:35:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75ddbfb7b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-75ddbfb7b-znt4c eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali91a868ce9d1 [] [] }} ContainerID="7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f" Namespace="calico-system" Pod="calico-kube-controllers-75ddbfb7b-znt4c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75ddbfb7b--znt4c-" Nov 6 00:36:09.609073 containerd[1634]: 2025-11-06 00:36:09.465 [INFO][4832] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f" Namespace="calico-system" Pod="calico-kube-controllers-75ddbfb7b-znt4c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75ddbfb7b--znt4c-eth0" Nov 6 00:36:09.609073 containerd[1634]: 2025-11-06 00:36:09.509 [INFO][4846] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f" HandleID="k8s-pod-network.7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f" Workload="localhost-k8s-calico--kube--controllers--75ddbfb7b--znt4c-eth0" Nov 6 00:36:09.609073 containerd[1634]: 2025-11-06 00:36:09.509 [INFO][4846] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f" HandleID="k8s-pod-network.7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f" Workload="localhost-k8s-calico--kube--controllers--75ddbfb7b--znt4c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b00a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-75ddbfb7b-znt4c", "timestamp":"2025-11-06 00:36:09.509724135 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:36:09.609073 containerd[1634]: 2025-11-06 00:36:09.510 [INFO][4846] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:36:09.609073 containerd[1634]: 2025-11-06 00:36:09.510 [INFO][4846] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:36:09.609073 containerd[1634]: 2025-11-06 00:36:09.510 [INFO][4846] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:36:09.609073 containerd[1634]: 2025-11-06 00:36:09.523 [INFO][4846] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f" host="localhost" Nov 6 00:36:09.609073 containerd[1634]: 2025-11-06 00:36:09.533 [INFO][4846] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:36:09.609073 containerd[1634]: 2025-11-06 00:36:09.542 [INFO][4846] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:36:09.609073 containerd[1634]: 2025-11-06 00:36:09.546 [INFO][4846] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:36:09.609073 containerd[1634]: 2025-11-06 00:36:09.549 [INFO][4846] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:36:09.609073 containerd[1634]: 2025-11-06 00:36:09.550 [INFO][4846] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f" host="localhost" Nov 6 00:36:09.609073 containerd[1634]: 2025-11-06 00:36:09.552 [INFO][4846] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f Nov 6 00:36:09.609073 containerd[1634]: 2025-11-06 00:36:09.558 [INFO][4846] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f" host="localhost" Nov 6 00:36:09.609073 containerd[1634]: 2025-11-06 00:36:09.568 [INFO][4846] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f" host="localhost" Nov 6 00:36:09.609073 containerd[1634]: 2025-11-06 00:36:09.568 [INFO][4846] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f" host="localhost" Nov 6 00:36:09.609073 containerd[1634]: 2025-11-06 00:36:09.568 [INFO][4846] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:36:09.609073 containerd[1634]: 2025-11-06 00:36:09.568 [INFO][4846] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f" HandleID="k8s-pod-network.7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f" Workload="localhost-k8s-calico--kube--controllers--75ddbfb7b--znt4c-eth0" Nov 6 00:36:09.610488 containerd[1634]: 2025-11-06 00:36:09.573 [INFO][4832] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f" Namespace="calico-system" Pod="calico-kube-controllers-75ddbfb7b-znt4c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75ddbfb7b--znt4c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75ddbfb7b--znt4c-eth0", GenerateName:"calico-kube-controllers-75ddbfb7b-", Namespace:"calico-system", SelfLink:"", UID:"30a2a173-30b9-41b2-8ef6-9137cb1fe89a", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 35, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75ddbfb7b", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-75ddbfb7b-znt4c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali91a868ce9d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:36:09.610488 containerd[1634]: 2025-11-06 00:36:09.573 [INFO][4832] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f" Namespace="calico-system" Pod="calico-kube-controllers-75ddbfb7b-znt4c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75ddbfb7b--znt4c-eth0" Nov 6 00:36:09.610488 containerd[1634]: 2025-11-06 00:36:09.573 [INFO][4832] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali91a868ce9d1 ContainerID="7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f" Namespace="calico-system" Pod="calico-kube-controllers-75ddbfb7b-znt4c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75ddbfb7b--znt4c-eth0" Nov 6 00:36:09.610488 containerd[1634]: 2025-11-06 00:36:09.577 [INFO][4832] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f" Namespace="calico-system" Pod="calico-kube-controllers-75ddbfb7b-znt4c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75ddbfb7b--znt4c-eth0" Nov 6 00:36:09.610488 containerd[1634]: 2025-11-06 
00:36:09.583 [INFO][4832] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f" Namespace="calico-system" Pod="calico-kube-controllers-75ddbfb7b-znt4c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75ddbfb7b--znt4c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75ddbfb7b--znt4c-eth0", GenerateName:"calico-kube-controllers-75ddbfb7b-", Namespace:"calico-system", SelfLink:"", UID:"30a2a173-30b9-41b2-8ef6-9137cb1fe89a", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 35, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75ddbfb7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f", Pod:"calico-kube-controllers-75ddbfb7b-znt4c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali91a868ce9d1", MAC:"4e:1a:91:59:86:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:36:09.610488 containerd[1634]: 2025-11-06 
00:36:09.604 [INFO][4832] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f" Namespace="calico-system" Pod="calico-kube-controllers-75ddbfb7b-znt4c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75ddbfb7b--znt4c-eth0" Nov 6 00:36:09.649378 containerd[1634]: time="2025-11-06T00:36:09.648692022Z" level=info msg="connecting to shim 7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f" address="unix:///run/containerd/s/bee8d8a8870432162688d2f7bd535f729e21af7fe29fccb9f499210d4fb8108a" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:36:09.692927 systemd[1]: Started cri-containerd-7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f.scope - libcontainer container 7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f. Nov 6 00:36:09.711568 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:36:09.760103 containerd[1634]: time="2025-11-06T00:36:09.760013819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75ddbfb7b-znt4c,Uid:30a2a173-30b9-41b2-8ef6-9137cb1fe89a,Namespace:calico-system,Attempt:0,} returns sandbox id \"7bce8d8656976682dfb407839fec33f4ce709d2c36235d966c79cd4184c5115f\"" Nov 6 00:36:09.770590 containerd[1634]: time="2025-11-06T00:36:09.770526989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:36:10.172014 containerd[1634]: time="2025-11-06T00:36:10.171935762Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:36:10.173292 containerd[1634]: time="2025-11-06T00:36:10.173172244Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:36:10.173292 containerd[1634]: time="2025-11-06T00:36:10.173232450Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:36:10.173484 kubelet[2824]: E1106 00:36:10.173422 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:36:10.174161 kubelet[2824]: E1106 00:36:10.173485 2824 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:36:10.174161 kubelet[2824]: E1106 00:36:10.173701 2824 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96qhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-75ddbfb7b-znt4c_calico-system(30a2a173-30b9-41b2-8ef6-9137cb1fe89a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:36:10.175099 kubelet[2824]: E1106 00:36:10.174953 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75ddbfb7b-znt4c" podUID="30a2a173-30b9-41b2-8ef6-9137cb1fe89a" Nov 6 00:36:10.700683 systemd[1]: Started sshd@10-10.0.0.137:22-10.0.0.1:51794.service - OpenSSH per-connection server daemon (10.0.0.1:51794). 
Nov 6 00:36:10.762292 sshd[4913]: Accepted publickey for core from 10.0.0.1 port 51794 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:36:10.763726 sshd-session[4913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:36:10.768340 systemd-logind[1613]: New session 11 of user core. Nov 6 00:36:10.777678 kubelet[2824]: E1106 00:36:10.777156 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75ddbfb7b-znt4c" podUID="30a2a173-30b9-41b2-8ef6-9137cb1fe89a" Nov 6 00:36:10.780208 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 6 00:36:10.899989 sshd[4917]: Connection closed by 10.0.0.1 port 51794 Nov 6 00:36:10.900339 sshd-session[4913]: pam_unix(sshd:session): session closed for user core Nov 6 00:36:10.904048 systemd[1]: sshd@10-10.0.0.137:22-10.0.0.1:51794.service: Deactivated successfully. Nov 6 00:36:10.906247 systemd[1]: session-11.scope: Deactivated successfully. Nov 6 00:36:10.908269 systemd-logind[1613]: Session 11 logged out. Waiting for processes to exit. Nov 6 00:36:10.909901 systemd-logind[1613]: Removed session 11. 
Nov 6 00:36:11.281889 systemd-networkd[1529]: cali91a868ce9d1: Gained IPv6LL Nov 6 00:36:12.401954 containerd[1634]: time="2025-11-06T00:36:12.401896569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:36:12.765654 containerd[1634]: time="2025-11-06T00:36:12.765590808Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:36:12.816656 containerd[1634]: time="2025-11-06T00:36:12.816555876Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:36:12.816804 containerd[1634]: time="2025-11-06T00:36:12.816648215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:36:12.816892 kubelet[2824]: E1106 00:36:12.816851 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:36:12.817228 kubelet[2824]: E1106 00:36:12.816899 2824 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:36:12.817228 kubelet[2824]: E1106 00:36:12.817037 2824 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lsb6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-647c87d985-dskgb_calico-apiserver(58f56521-b0ee-46b1-8476-68ff3e34496b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:36:12.818244 kubelet[2824]: E1106 00:36:12.818195 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647c87d985-dskgb" podUID="58f56521-b0ee-46b1-8476-68ff3e34496b" Nov 6 00:36:13.401937 containerd[1634]: time="2025-11-06T00:36:13.401859439Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-xp8pl,Uid:299ba27c-7f4c-4b4c-bf27-d7e11dc57242,Namespace:calico-system,Attempt:0,}" Nov 6 00:36:13.755561 systemd-networkd[1529]: calicb1f0f89158: Link UP Nov 6 00:36:13.756932 systemd-networkd[1529]: calicb1f0f89158: Gained carrier Nov 6 00:36:13.776791 containerd[1634]: 2025-11-06 00:36:13.437 [INFO][4938] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--xp8pl-eth0 csi-node-driver- calico-system 299ba27c-7f4c-4b4c-bf27-d7e11dc57242 746 0 2025-11-06 00:35:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-xp8pl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicb1f0f89158 [] [] }} ContainerID="4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2" Namespace="calico-system" Pod="csi-node-driver-xp8pl" WorkloadEndpoint="localhost-k8s-csi--node--driver--xp8pl-" Nov 6 00:36:13.776791 containerd[1634]: 2025-11-06 00:36:13.437 [INFO][4938] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2" Namespace="calico-system" Pod="csi-node-driver-xp8pl" WorkloadEndpoint="localhost-k8s-csi--node--driver--xp8pl-eth0" Nov 6 00:36:13.776791 containerd[1634]: 2025-11-06 00:36:13.461 [INFO][4953] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2" HandleID="k8s-pod-network.4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2" Workload="localhost-k8s-csi--node--driver--xp8pl-eth0" Nov 6 00:36:13.776791 containerd[1634]: 2025-11-06 00:36:13.461 
[INFO][4953] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2" HandleID="k8s-pod-network.4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2" Workload="localhost-k8s-csi--node--driver--xp8pl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e6b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-xp8pl", "timestamp":"2025-11-06 00:36:13.461143013 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:36:13.776791 containerd[1634]: 2025-11-06 00:36:13.461 [INFO][4953] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:36:13.776791 containerd[1634]: 2025-11-06 00:36:13.461 [INFO][4953] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:36:13.776791 containerd[1634]: 2025-11-06 00:36:13.461 [INFO][4953] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:36:13.776791 containerd[1634]: 2025-11-06 00:36:13.467 [INFO][4953] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2" host="localhost" Nov 6 00:36:13.776791 containerd[1634]: 2025-11-06 00:36:13.470 [INFO][4953] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:36:13.776791 containerd[1634]: 2025-11-06 00:36:13.473 [INFO][4953] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:36:13.776791 containerd[1634]: 2025-11-06 00:36:13.474 [INFO][4953] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:36:13.776791 containerd[1634]: 2025-11-06 00:36:13.476 [INFO][4953] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:36:13.776791 containerd[1634]: 2025-11-06 00:36:13.476 [INFO][4953] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2" host="localhost" Nov 6 00:36:13.776791 containerd[1634]: 2025-11-06 00:36:13.477 [INFO][4953] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2 Nov 6 00:36:13.776791 containerd[1634]: 2025-11-06 00:36:13.517 [INFO][4953] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2" host="localhost" Nov 6 00:36:13.776791 containerd[1634]: 2025-11-06 00:36:13.749 [INFO][4953] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2" host="localhost" Nov 6 00:36:13.776791 containerd[1634]: 2025-11-06 00:36:13.749 [INFO][4953] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2" host="localhost" Nov 6 00:36:13.776791 containerd[1634]: 2025-11-06 00:36:13.749 [INFO][4953] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:36:13.776791 containerd[1634]: 2025-11-06 00:36:13.749 [INFO][4953] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2" HandleID="k8s-pod-network.4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2" Workload="localhost-k8s-csi--node--driver--xp8pl-eth0" Nov 6 00:36:13.778056 containerd[1634]: 2025-11-06 00:36:13.753 [INFO][4938] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2" Namespace="calico-system" Pod="csi-node-driver-xp8pl" WorkloadEndpoint="localhost-k8s-csi--node--driver--xp8pl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xp8pl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"299ba27c-7f4c-4b4c-bf27-d7e11dc57242", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 35, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-xp8pl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicb1f0f89158", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:36:13.778056 containerd[1634]: 2025-11-06 00:36:13.753 [INFO][4938] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2" Namespace="calico-system" Pod="csi-node-driver-xp8pl" WorkloadEndpoint="localhost-k8s-csi--node--driver--xp8pl-eth0" Nov 6 00:36:13.778056 containerd[1634]: 2025-11-06 00:36:13.753 [INFO][4938] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb1f0f89158 ContainerID="4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2" Namespace="calico-system" Pod="csi-node-driver-xp8pl" WorkloadEndpoint="localhost-k8s-csi--node--driver--xp8pl-eth0" Nov 6 00:36:13.778056 containerd[1634]: 2025-11-06 00:36:13.755 [INFO][4938] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2" Namespace="calico-system" Pod="csi-node-driver-xp8pl" WorkloadEndpoint="localhost-k8s-csi--node--driver--xp8pl-eth0" Nov 6 00:36:13.778056 containerd[1634]: 2025-11-06 00:36:13.758 [INFO][4938] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2" 
Namespace="calico-system" Pod="csi-node-driver-xp8pl" WorkloadEndpoint="localhost-k8s-csi--node--driver--xp8pl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xp8pl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"299ba27c-7f4c-4b4c-bf27-d7e11dc57242", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 35, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2", Pod:"csi-node-driver-xp8pl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicb1f0f89158", MAC:"aa:b6:14:50:d7:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:36:13.778056 containerd[1634]: 2025-11-06 00:36:13.770 [INFO][4938] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2" Namespace="calico-system" Pod="csi-node-driver-xp8pl" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--xp8pl-eth0" Nov 6 00:36:13.803676 containerd[1634]: time="2025-11-06T00:36:13.803193292Z" level=info msg="connecting to shim 4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2" address="unix:///run/containerd/s/a49b36b15d7e740f222b891e76cdaa236ff3fc07e2b9747c0c7cb6bf4a60b2f0" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:36:13.838793 systemd[1]: Started cri-containerd-4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2.scope - libcontainer container 4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2. Nov 6 00:36:13.851430 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:36:13.867043 containerd[1634]: time="2025-11-06T00:36:13.867002181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xp8pl,Uid:299ba27c-7f4c-4b4c-bf27-d7e11dc57242,Namespace:calico-system,Attempt:0,} returns sandbox id \"4af3fd1c3ddc4d7b8c615f0112fa72a5b2480cf65e9aea7e2d60d7e9aa87d6f2\"" Nov 6 00:36:13.868551 containerd[1634]: time="2025-11-06T00:36:13.868492488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:36:14.262703 containerd[1634]: time="2025-11-06T00:36:14.262629984Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:36:14.267700 containerd[1634]: time="2025-11-06T00:36:14.267654719Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:36:14.267767 containerd[1634]: time="2025-11-06T00:36:14.267686701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:36:14.267923 kubelet[2824]: E1106 00:36:14.267878 2824 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:36:14.268246 kubelet[2824]: E1106 00:36:14.267927 2824 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:36:14.268246 kubelet[2824]: E1106 00:36:14.268043 2824 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zt7hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lif
ecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xp8pl_calico-system(299ba27c-7f4c-4b4c-bf27-d7e11dc57242): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:36:14.270173 containerd[1634]: time="2025-11-06T00:36:14.270142734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:36:14.602705 containerd[1634]: time="2025-11-06T00:36:14.602508440Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:36:14.799442 containerd[1634]: time="2025-11-06T00:36:14.799356250Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:36:14.799442 containerd[1634]: time="2025-11-06T00:36:14.799433970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:36:14.799952 kubelet[2824]: E1106 
00:36:14.799556 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:36:14.799952 kubelet[2824]: E1106 00:36:14.799596 2824 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:36:14.799952 kubelet[2824]: E1106 00:36:14.799878 2824 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zt7hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xp8pl_calico-system(299ba27c-7f4c-4b4c-bf27-d7e11dc57242): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:36:14.800599 containerd[1634]: time="2025-11-06T00:36:14.800562494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:36:14.801132 kubelet[2824]: E1106 00:36:14.801033 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xp8pl" podUID="299ba27c-7f4c-4b4c-bf27-d7e11dc57242" Nov 6 00:36:14.995393 systemd-networkd[1529]: calicb1f0f89158: Gained IPv6LL Nov 6 00:36:15.171314 containerd[1634]: time="2025-11-06T00:36:15.171262175Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:36:15.244229 containerd[1634]: time="2025-11-06T00:36:15.244166533Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:36:15.244345 containerd[1634]: time="2025-11-06T00:36:15.244200509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 
6 00:36:15.244713 kubelet[2824]: E1106 00:36:15.244412 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:36:15.244713 kubelet[2824]: E1106 00:36:15.244461 2824 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:36:15.244713 kubelet[2824]: E1106 00:36:15.244611 2824 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-spzrv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-647c87d985-2h5ss_calico-apiserver(37bfba89-7ef7-48f7-8ad4-1de208225932): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:36:15.246026 kubelet[2824]: E1106 00:36:15.245961 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647c87d985-2h5ss" podUID="37bfba89-7ef7-48f7-8ad4-1de208225932" Nov 6 00:36:15.787034 kubelet[2824]: E1106 00:36:15.786953 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xp8pl" podUID="299ba27c-7f4c-4b4c-bf27-d7e11dc57242" Nov 6 00:36:15.921032 systemd[1]: Started sshd@11-10.0.0.137:22-10.0.0.1:51808.service - OpenSSH per-connection server daemon (10.0.0.1:51808). 
Nov 6 00:36:15.980615 sshd[5019]: Accepted publickey for core from 10.0.0.1 port 51808 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:36:15.981912 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:36:15.986348 systemd-logind[1613]: New session 12 of user core. Nov 6 00:36:15.992796 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 6 00:36:16.155247 sshd[5022]: Connection closed by 10.0.0.1 port 51808 Nov 6 00:36:16.155578 sshd-session[5019]: pam_unix(sshd:session): session closed for user core Nov 6 00:36:16.168742 systemd[1]: sshd@11-10.0.0.137:22-10.0.0.1:51808.service: Deactivated successfully. Nov 6 00:36:16.170957 systemd[1]: session-12.scope: Deactivated successfully. Nov 6 00:36:16.171908 systemd-logind[1613]: Session 12 logged out. Waiting for processes to exit. Nov 6 00:36:16.174997 systemd[1]: Started sshd@12-10.0.0.137:22-10.0.0.1:51814.service - OpenSSH per-connection server daemon (10.0.0.1:51814). Nov 6 00:36:16.175704 systemd-logind[1613]: Removed session 12. Nov 6 00:36:16.241272 sshd[5036]: Accepted publickey for core from 10.0.0.1 port 51814 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:36:16.243247 sshd-session[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:36:16.247881 systemd-logind[1613]: New session 13 of user core. Nov 6 00:36:16.257940 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 6 00:36:16.399942 sshd[5039]: Connection closed by 10.0.0.1 port 51814 Nov 6 00:36:16.400230 sshd-session[5036]: pam_unix(sshd:session): session closed for user core Nov 6 00:36:16.407872 containerd[1634]: time="2025-11-06T00:36:16.407760094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:36:16.413952 systemd[1]: sshd@12-10.0.0.137:22-10.0.0.1:51814.service: Deactivated successfully. 
Nov 6 00:36:16.416124 systemd[1]: session-13.scope: Deactivated successfully. Nov 6 00:36:16.420335 systemd-logind[1613]: Session 13 logged out. Waiting for processes to exit. Nov 6 00:36:16.425787 systemd[1]: Started sshd@13-10.0.0.137:22-10.0.0.1:51818.service - OpenSSH per-connection server daemon (10.0.0.1:51818). Nov 6 00:36:16.428929 systemd-logind[1613]: Removed session 13. Nov 6 00:36:16.492041 sshd[5050]: Accepted publickey for core from 10.0.0.1 port 51818 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:36:16.493310 sshd-session[5050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:36:16.499095 systemd-logind[1613]: New session 14 of user core. Nov 6 00:36:16.503786 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 6 00:36:16.616028 sshd[5053]: Connection closed by 10.0.0.1 port 51818 Nov 6 00:36:16.618250 sshd-session[5050]: pam_unix(sshd:session): session closed for user core Nov 6 00:36:16.623773 systemd[1]: sshd@13-10.0.0.137:22-10.0.0.1:51818.service: Deactivated successfully. Nov 6 00:36:16.626081 systemd[1]: session-14.scope: Deactivated successfully. Nov 6 00:36:16.626925 systemd-logind[1613]: Session 14 logged out. Waiting for processes to exit. Nov 6 00:36:16.628141 systemd-logind[1613]: Removed session 14. 
Nov 6 00:36:16.748026 containerd[1634]: time="2025-11-06T00:36:16.747958097Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:36:16.749146 containerd[1634]: time="2025-11-06T00:36:16.749103620Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:36:16.749196 containerd[1634]: time="2025-11-06T00:36:16.749181801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:36:16.749416 kubelet[2824]: E1106 00:36:16.749354 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:36:16.749503 kubelet[2824]: E1106 00:36:16.749423 2824 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:36:16.750684 kubelet[2824]: E1106 00:36:16.750604 2824 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f694ace724154c5498c9b1960140a79f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hg62n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-647cc965d5-59nsd_calico-system(48c54e02-677b-4620-9d87-c389d0553835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:36:16.752865 containerd[1634]: time="2025-11-06T00:36:16.752584325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 
00:36:17.080819 containerd[1634]: time="2025-11-06T00:36:17.080616200Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:36:17.170796 containerd[1634]: time="2025-11-06T00:36:17.170713316Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:36:17.170796 containerd[1634]: time="2025-11-06T00:36:17.170765547Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:36:17.170976 kubelet[2824]: E1106 00:36:17.170942 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:36:17.171303 kubelet[2824]: E1106 00:36:17.170988 2824 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:36:17.171303 kubelet[2824]: E1106 00:36:17.171115 2824 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hg62n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-647cc965d5-59nsd_calico-system(48c54e02-677b-4620-9d87-c389d0553835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:36:17.172322 kubelet[2824]: E1106 00:36:17.172276 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-647cc965d5-59nsd" podUID="48c54e02-677b-4620-9d87-c389d0553835" Nov 6 00:36:17.401260 containerd[1634]: time="2025-11-06T00:36:17.401130213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:36:18.035396 containerd[1634]: time="2025-11-06T00:36:18.035336329Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:36:18.036526 containerd[1634]: time="2025-11-06T00:36:18.036474075Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:36:18.036526 containerd[1634]: time="2025-11-06T00:36:18.036540893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:36:18.036783 
kubelet[2824]: E1106 00:36:18.036710 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:36:18.036783 kubelet[2824]: E1106 00:36:18.036763 2824 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:36:18.037011 kubelet[2824]: E1106 00:36:18.036960 2824 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2s9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-c5bfl_calico-system(cb6ef055-21f2-4f63-9dca-424807e07ebf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:36:18.038222 kubelet[2824]: E1106 00:36:18.038178 2824 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c5bfl" podUID="cb6ef055-21f2-4f63-9dca-424807e07ebf" Nov 6 00:36:21.631003 systemd[1]: Started sshd@14-10.0.0.137:22-10.0.0.1:46570.service - OpenSSH per-connection server daemon (10.0.0.1:46570). Nov 6 00:36:21.696075 sshd[5076]: Accepted publickey for core from 10.0.0.1 port 46570 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:36:21.697888 sshd-session[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:36:21.702033 systemd-logind[1613]: New session 15 of user core. Nov 6 00:36:21.708770 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 6 00:36:21.846579 sshd[5079]: Connection closed by 10.0.0.1 port 46570 Nov 6 00:36:21.846922 sshd-session[5076]: pam_unix(sshd:session): session closed for user core Nov 6 00:36:21.850709 systemd[1]: sshd@14-10.0.0.137:22-10.0.0.1:46570.service: Deactivated successfully. Nov 6 00:36:21.852810 systemd[1]: session-15.scope: Deactivated successfully. Nov 6 00:36:21.854335 systemd-logind[1613]: Session 15 logged out. Waiting for processes to exit. Nov 6 00:36:21.855956 systemd-logind[1613]: Removed session 15. 
Nov 6 00:36:24.402264 kubelet[2824]: E1106 00:36:24.402025 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:36:24.403248 kubelet[2824]: E1106 00:36:24.403195 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647c87d985-dskgb" podUID="58f56521-b0ee-46b1-8476-68ff3e34496b" Nov 6 00:36:24.403489 containerd[1634]: time="2025-11-06T00:36:24.403454993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:36:24.755487 containerd[1634]: time="2025-11-06T00:36:24.755419283Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:36:24.781142 containerd[1634]: time="2025-11-06T00:36:24.781065035Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:36:24.781388 containerd[1634]: time="2025-11-06T00:36:24.781145951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:36:24.781419 kubelet[2824]: E1106 00:36:24.781378 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:36:24.781462 kubelet[2824]: E1106 00:36:24.781436 2824 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:36:24.781685 kubelet[2824]: E1106 00:36:24.781580 2824 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96qhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/ser
viceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-75ddbfb7b-znt4c_calico-system(30a2a173-30b9-41b2-8ef6-9137cb1fe89a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:36:24.782882 kubelet[2824]: E1106 00:36:24.782834 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75ddbfb7b-znt4c" podUID="30a2a173-30b9-41b2-8ef6-9137cb1fe89a" Nov 6 00:36:25.401045 kubelet[2824]: E1106 00:36:25.400995 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:36:26.864921 systemd[1]: Started sshd@15-10.0.0.137:22-10.0.0.1:46578.service - OpenSSH per-connection server daemon (10.0.0.1:46578). Nov 6 00:36:26.939258 sshd[5098]: Accepted publickey for core from 10.0.0.1 port 46578 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:36:26.940750 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:36:26.945192 systemd-logind[1613]: New session 16 of user core. Nov 6 00:36:26.955798 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 6 00:36:27.096392 sshd[5101]: Connection closed by 10.0.0.1 port 46578 Nov 6 00:36:27.096914 sshd-session[5098]: pam_unix(sshd:session): session closed for user core Nov 6 00:36:27.102848 systemd[1]: sshd@15-10.0.0.137:22-10.0.0.1:46578.service: Deactivated successfully. Nov 6 00:36:27.105347 systemd[1]: session-16.scope: Deactivated successfully. Nov 6 00:36:27.106792 systemd-logind[1613]: Session 16 logged out. Waiting for processes to exit. Nov 6 00:36:27.109117 systemd-logind[1613]: Removed session 16. 
Nov 6 00:36:27.402000 kubelet[2824]: E1106 00:36:27.401938 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647c87d985-2h5ss" podUID="37bfba89-7ef7-48f7-8ad4-1de208225932" Nov 6 00:36:28.799256 containerd[1634]: time="2025-11-06T00:36:28.799126901Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65234d042b1998c0205835bdb78a9255105b49184baa46398249ee98616d0a99\" id:\"a8fb31b6d24b7f58f28d87dba18af5e80da939c3a61b2ee6baa3f2c5164bf348\" pid:5124 exited_at:{seconds:1762389388 nanos:798779635}" Nov 6 00:36:28.803546 kubelet[2824]: E1106 00:36:28.803513 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:36:29.401983 containerd[1634]: time="2025-11-06T00:36:29.401924317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:36:29.999043 containerd[1634]: time="2025-11-06T00:36:29.997959638Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:36:29.999533 containerd[1634]: time="2025-11-06T00:36:29.999059255Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:36:29.999533 containerd[1634]: time="2025-11-06T00:36:29.999134029Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:36:29.999598 kubelet[2824]: E1106 00:36:29.999338 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:36:29.999598 kubelet[2824]: E1106 00:36:29.999406 2824 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:36:30.000433 kubelet[2824]: E1106 00:36:29.999680 2824 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zt7hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xp8pl_calico-system(299ba27c-7f4c-4b4c-bf27-d7e11dc57242): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:36:30.002430 containerd[1634]: time="2025-11-06T00:36:30.002399522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:36:30.347765 containerd[1634]: time="2025-11-06T00:36:30.347062004Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:36:30.350462 containerd[1634]: time="2025-11-06T00:36:30.350396218Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:36:30.350584 containerd[1634]: time="2025-11-06T00:36:30.350516037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:36:30.350859 kubelet[2824]: E1106 00:36:30.350792 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:36:30.350983 kubelet[2824]: E1106 00:36:30.350872 2824 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:36:30.352662 kubelet[2824]: E1106 
00:36:30.351020 2824 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zt7hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-xp8pl_calico-system(299ba27c-7f4c-4b4c-bf27-d7e11dc57242): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:36:30.352662 kubelet[2824]: E1106 00:36:30.352593 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xp8pl" podUID="299ba27c-7f4c-4b4c-bf27-d7e11dc57242" Nov 6 00:36:31.402511 kubelet[2824]: E1106 00:36:31.402421 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-647cc965d5-59nsd" podUID="48c54e02-677b-4620-9d87-c389d0553835" Nov 6 00:36:32.112979 systemd[1]: Started sshd@16-10.0.0.137:22-10.0.0.1:49198.service - OpenSSH per-connection server daemon (10.0.0.1:49198). Nov 6 00:36:32.188832 sshd[5139]: Accepted publickey for core from 10.0.0.1 port 49198 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:36:32.191535 sshd-session[5139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:36:32.197792 systemd-logind[1613]: New session 17 of user core. Nov 6 00:36:32.207838 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 6 00:36:32.346257 sshd[5142]: Connection closed by 10.0.0.1 port 49198 Nov 6 00:36:32.346575 sshd-session[5139]: pam_unix(sshd:session): session closed for user core Nov 6 00:36:32.351880 systemd[1]: sshd@16-10.0.0.137:22-10.0.0.1:49198.service: Deactivated successfully. Nov 6 00:36:32.354524 systemd[1]: session-17.scope: Deactivated successfully. Nov 6 00:36:32.355391 systemd-logind[1613]: Session 17 logged out. Waiting for processes to exit. Nov 6 00:36:32.357005 systemd-logind[1613]: Removed session 17. 
Nov 6 00:36:32.401852 kubelet[2824]: E1106 00:36:32.401699 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c5bfl" podUID="cb6ef055-21f2-4f63-9dca-424807e07ebf" Nov 6 00:36:37.367556 systemd[1]: Started sshd@17-10.0.0.137:22-10.0.0.1:49208.service - OpenSSH per-connection server daemon (10.0.0.1:49208). Nov 6 00:36:37.404103 containerd[1634]: time="2025-11-06T00:36:37.403523307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:36:37.405747 kubelet[2824]: E1106 00:36:37.403952 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:36:37.441442 sshd[5157]: Accepted publickey for core from 10.0.0.1 port 49208 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:36:37.443254 sshd-session[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:36:37.448285 systemd-logind[1613]: New session 18 of user core. Nov 6 00:36:37.461809 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 6 00:36:37.598851 sshd[5160]: Connection closed by 10.0.0.1 port 49208 Nov 6 00:36:37.599197 sshd-session[5157]: pam_unix(sshd:session): session closed for user core Nov 6 00:36:37.604788 systemd[1]: sshd@17-10.0.0.137:22-10.0.0.1:49208.service: Deactivated successfully. Nov 6 00:36:37.607416 systemd[1]: session-18.scope: Deactivated successfully. Nov 6 00:36:37.608309 systemd-logind[1613]: Session 18 logged out. 
Waiting for processes to exit. Nov 6 00:36:37.609852 systemd-logind[1613]: Removed session 18. Nov 6 00:36:37.729259 containerd[1634]: time="2025-11-06T00:36:37.729078916Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:36:37.840026 containerd[1634]: time="2025-11-06T00:36:37.839935364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:36:37.840026 containerd[1634]: time="2025-11-06T00:36:37.839992443Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:36:37.840240 kubelet[2824]: E1106 00:36:37.840198 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:36:37.840294 kubelet[2824]: E1106 00:36:37.840244 2824 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:36:37.840448 kubelet[2824]: E1106 00:36:37.840377 2824 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lsb6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-647c87d985-dskgb_calico-apiserver(58f56521-b0ee-46b1-8476-68ff3e34496b): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:36:37.841651 kubelet[2824]: E1106 00:36:37.841585 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647c87d985-dskgb" podUID="58f56521-b0ee-46b1-8476-68ff3e34496b" Nov 6 00:36:40.401893 kubelet[2824]: E1106 00:36:40.401821 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:36:40.402753 kubelet[2824]: E1106 00:36:40.402721 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75ddbfb7b-znt4c" podUID="30a2a173-30b9-41b2-8ef6-9137cb1fe89a" Nov 6 00:36:41.402994 containerd[1634]: time="2025-11-06T00:36:41.402930209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:36:41.403411 kubelet[2824]: E1106 00:36:41.403046 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xp8pl" podUID="299ba27c-7f4c-4b4c-bf27-d7e11dc57242" Nov 6 00:36:41.731881 containerd[1634]: time="2025-11-06T00:36:41.731810105Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:36:41.927501 containerd[1634]: time="2025-11-06T00:36:41.927424681Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:36:41.927501 containerd[1634]: time="2025-11-06T00:36:41.927474105Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:36:41.927757 kubelet[2824]: E1106 00:36:41.927626 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:36:41.927757 kubelet[2824]: E1106 00:36:41.927700 2824 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:36:41.927925 kubelet[2824]: E1106 00:36:41.927882 2824 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-spzrv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-647c87d985-2h5ss_calico-apiserver(37bfba89-7ef7-48f7-8ad4-1de208225932): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:36:41.929121 kubelet[2824]: E1106 00:36:41.929061 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647c87d985-2h5ss" podUID="37bfba89-7ef7-48f7-8ad4-1de208225932" Nov 6 00:36:42.617091 systemd[1]: Started sshd@18-10.0.0.137:22-10.0.0.1:50128.service - OpenSSH per-connection server daemon (10.0.0.1:50128). 
Nov 6 00:36:42.678108 sshd[5179]: Accepted publickey for core from 10.0.0.1 port 50128 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:36:42.680192 sshd-session[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:36:42.685213 systemd-logind[1613]: New session 19 of user core. Nov 6 00:36:42.692776 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 6 00:36:42.841164 sshd[5182]: Connection closed by 10.0.0.1 port 50128 Nov 6 00:36:42.841893 sshd-session[5179]: pam_unix(sshd:session): session closed for user core Nov 6 00:36:42.854436 systemd[1]: sshd@18-10.0.0.137:22-10.0.0.1:50128.service: Deactivated successfully. Nov 6 00:36:42.857109 systemd[1]: session-19.scope: Deactivated successfully. Nov 6 00:36:42.858738 systemd-logind[1613]: Session 19 logged out. Waiting for processes to exit. Nov 6 00:36:42.863446 systemd[1]: Started sshd@19-10.0.0.137:22-10.0.0.1:50134.service - OpenSSH per-connection server daemon (10.0.0.1:50134). Nov 6 00:36:42.866275 systemd-logind[1613]: Removed session 19. Nov 6 00:36:42.928136 sshd[5196]: Accepted publickey for core from 10.0.0.1 port 50134 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:36:42.930005 sshd-session[5196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:36:42.935602 systemd-logind[1613]: New session 20 of user core. Nov 6 00:36:42.940850 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 6 00:36:43.252606 sshd[5199]: Connection closed by 10.0.0.1 port 50134 Nov 6 00:36:43.253092 sshd-session[5196]: pam_unix(sshd:session): session closed for user core Nov 6 00:36:43.262526 systemd[1]: sshd@19-10.0.0.137:22-10.0.0.1:50134.service: Deactivated successfully. Nov 6 00:36:43.264701 systemd[1]: session-20.scope: Deactivated successfully. Nov 6 00:36:43.265543 systemd-logind[1613]: Session 20 logged out. Waiting for processes to exit. 
Nov 6 00:36:43.268468 systemd[1]: Started sshd@20-10.0.0.137:22-10.0.0.1:50146.service - OpenSSH per-connection server daemon (10.0.0.1:50146). Nov 6 00:36:43.269585 systemd-logind[1613]: Removed session 20. Nov 6 00:36:43.328360 sshd[5212]: Accepted publickey for core from 10.0.0.1 port 50146 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:36:43.329936 sshd-session[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:36:43.334516 systemd-logind[1613]: New session 21 of user core. Nov 6 00:36:43.341762 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 6 00:36:43.402333 containerd[1634]: time="2025-11-06T00:36:43.401911353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:36:43.729161 containerd[1634]: time="2025-11-06T00:36:43.729108712Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:36:43.734887 containerd[1634]: time="2025-11-06T00:36:43.734843793Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:36:43.734939 containerd[1634]: time="2025-11-06T00:36:43.734907745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:36:43.735105 kubelet[2824]: E1106 00:36:43.735065 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:36:43.735571 kubelet[2824]: E1106 00:36:43.735116 2824 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:36:43.735571 kubelet[2824]: E1106 00:36:43.735272 2824 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2s9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-c5bfl_calico-system(cb6ef055-21f2-4f63-9dca-424807e07ebf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:36:43.736543 kubelet[2824]: E1106 00:36:43.736484 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c5bfl" podUID="cb6ef055-21f2-4f63-9dca-424807e07ebf" Nov 6 
00:36:43.983493 sshd[5215]: Connection closed by 10.0.0.1 port 50146 Nov 6 00:36:43.987863 sshd-session[5212]: pam_unix(sshd:session): session closed for user core Nov 6 00:36:44.000807 systemd[1]: sshd@20-10.0.0.137:22-10.0.0.1:50146.service: Deactivated successfully. Nov 6 00:36:44.004487 systemd[1]: session-21.scope: Deactivated successfully. Nov 6 00:36:44.006840 systemd-logind[1613]: Session 21 logged out. Waiting for processes to exit. Nov 6 00:36:44.010067 systemd[1]: Started sshd@21-10.0.0.137:22-10.0.0.1:50156.service - OpenSSH per-connection server daemon (10.0.0.1:50156). Nov 6 00:36:44.012627 systemd-logind[1613]: Removed session 21. Nov 6 00:36:44.074272 sshd[5236]: Accepted publickey for core from 10.0.0.1 port 50156 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:36:44.075729 sshd-session[5236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:36:44.080071 systemd-logind[1613]: New session 22 of user core. Nov 6 00:36:44.090925 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 6 00:36:44.423936 sshd[5240]: Connection closed by 10.0.0.1 port 50156 Nov 6 00:36:44.425675 sshd-session[5236]: pam_unix(sshd:session): session closed for user core Nov 6 00:36:44.435951 systemd[1]: sshd@21-10.0.0.137:22-10.0.0.1:50156.service: Deactivated successfully. Nov 6 00:36:44.438821 systemd[1]: session-22.scope: Deactivated successfully. Nov 6 00:36:44.441479 systemd-logind[1613]: Session 22 logged out. Waiting for processes to exit. Nov 6 00:36:44.445940 systemd[1]: Started sshd@22-10.0.0.137:22-10.0.0.1:50162.service - OpenSSH per-connection server daemon (10.0.0.1:50162). Nov 6 00:36:44.449384 systemd-logind[1613]: Removed session 22. 
Nov 6 00:36:44.500188 sshd[5251]: Accepted publickey for core from 10.0.0.1 port 50162 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:36:44.502613 sshd-session[5251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:36:44.509194 systemd-logind[1613]: New session 23 of user core. Nov 6 00:36:44.514789 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 6 00:36:44.645370 sshd[5254]: Connection closed by 10.0.0.1 port 50162 Nov 6 00:36:44.645858 sshd-session[5251]: pam_unix(sshd:session): session closed for user core Nov 6 00:36:44.652751 systemd[1]: sshd@22-10.0.0.137:22-10.0.0.1:50162.service: Deactivated successfully. Nov 6 00:36:44.655432 systemd[1]: session-23.scope: Deactivated successfully. Nov 6 00:36:44.656525 systemd-logind[1613]: Session 23 logged out. Waiting for processes to exit. Nov 6 00:36:44.658457 systemd-logind[1613]: Removed session 23. Nov 6 00:36:45.402253 containerd[1634]: time="2025-11-06T00:36:45.402200365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:36:45.730035 containerd[1634]: time="2025-11-06T00:36:45.729793715Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:36:45.831023 containerd[1634]: time="2025-11-06T00:36:45.830880122Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:36:45.831023 containerd[1634]: time="2025-11-06T00:36:45.830924486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:36:45.831200 kubelet[2824]: E1106 00:36:45.831123 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:36:45.831200 kubelet[2824]: E1106 00:36:45.831186 2824 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:36:45.831583 kubelet[2824]: E1106 00:36:45.831344 2824 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f694ace724154c5498c9b1960140a79f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hg62n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdi
n:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-647cc965d5-59nsd_calico-system(48c54e02-677b-4620-9d87-c389d0553835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:36:45.833291 containerd[1634]: time="2025-11-06T00:36:45.833240434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:36:46.350994 containerd[1634]: time="2025-11-06T00:36:46.350935024Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:36:46.352082 containerd[1634]: time="2025-11-06T00:36:46.351963129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:36:46.352082 containerd[1634]: time="2025-11-06T00:36:46.351974010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:36:46.352278 kubelet[2824]: E1106 00:36:46.352239 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:36:46.352340 kubelet[2824]: E1106 00:36:46.352290 2824 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:36:46.352444 kubelet[2824]: E1106 00:36:46.352399 2824 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hg62n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,Secc
ompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-647cc965d5-59nsd_calico-system(48c54e02-677b-4620-9d87-c389d0553835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:36:46.353931 kubelet[2824]: E1106 00:36:46.353852 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-647cc965d5-59nsd" podUID="48c54e02-677b-4620-9d87-c389d0553835" Nov 6 00:36:47.400753 kubelet[2824]: E1106 00:36:47.400708 2824 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:36:49.663224 systemd[1]: Started sshd@23-10.0.0.137:22-10.0.0.1:50176.service - OpenSSH per-connection server daemon (10.0.0.1:50176). 
Nov 6 00:36:49.733088 sshd[5271]: Accepted publickey for core from 10.0.0.1 port 50176 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:36:49.734967 sshd-session[5271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:36:49.739601 systemd-logind[1613]: New session 24 of user core. Nov 6 00:36:49.745774 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 6 00:36:49.874720 sshd[5274]: Connection closed by 10.0.0.1 port 50176 Nov 6 00:36:49.874989 sshd-session[5271]: pam_unix(sshd:session): session closed for user core Nov 6 00:36:49.880259 systemd[1]: sshd@23-10.0.0.137:22-10.0.0.1:50176.service: Deactivated successfully. Nov 6 00:36:49.882549 systemd[1]: session-24.scope: Deactivated successfully. Nov 6 00:36:49.883446 systemd-logind[1613]: Session 24 logged out. Waiting for processes to exit. Nov 6 00:36:49.885173 systemd-logind[1613]: Removed session 24. Nov 6 00:36:50.403347 kubelet[2824]: E1106 00:36:50.403170 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647c87d985-dskgb" podUID="58f56521-b0ee-46b1-8476-68ff3e34496b" Nov 6 00:36:52.402914 containerd[1634]: time="2025-11-06T00:36:52.402801494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:36:52.722054 containerd[1634]: time="2025-11-06T00:36:52.721983789Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:36:52.723302 containerd[1634]: time="2025-11-06T00:36:52.723262478Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:36:52.723419 containerd[1634]: time="2025-11-06T00:36:52.723292574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:36:52.723536 kubelet[2824]: E1106 00:36:52.723488 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:36:52.723916 kubelet[2824]: E1106 00:36:52.723548 2824 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:36:52.723916 kubelet[2824]: E1106 00:36:52.723711 2824 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zt7hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xp8pl_calico-system(299ba27c-7f4c-4b4c-bf27-d7e11dc57242): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:36:52.726091 containerd[1634]: time="2025-11-06T00:36:52.726069869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:36:53.032781 containerd[1634]: time="2025-11-06T00:36:53.032598788Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:36:53.033885 containerd[1634]: time="2025-11-06T00:36:53.033837650Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:36:53.033885 containerd[1634]: time="2025-11-06T00:36:53.033879249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:36:53.034199 kubelet[2824]: E1106 00:36:53.034137 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:36:53.034274 kubelet[2824]: E1106 00:36:53.034197 2824 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:36:53.034403 kubelet[2824]: E1106 
00:36:53.034349 2824 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zt7hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-xp8pl_calico-system(299ba27c-7f4c-4b4c-bf27-d7e11dc57242): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:36:53.035572 kubelet[2824]: E1106 00:36:53.035528 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xp8pl" podUID="299ba27c-7f4c-4b4c-bf27-d7e11dc57242" Nov 6 00:36:54.402051 containerd[1634]: time="2025-11-06T00:36:54.401978313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:36:54.741189 containerd[1634]: time="2025-11-06T00:36:54.741126212Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:36:54.742456 containerd[1634]: time="2025-11-06T00:36:54.742403547Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found" Nov 6 00:36:54.742517 containerd[1634]: time="2025-11-06T00:36:54.742455425Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:36:54.742651 kubelet[2824]: E1106 00:36:54.742597 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:36:54.743065 kubelet[2824]: E1106 00:36:54.742661 2824 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:36:54.743065 kubelet[2824]: E1106 00:36:54.742795 2824 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96qhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-75ddbfb7b-znt4c_calico-system(30a2a173-30b9-41b2-8ef6-9137cb1fe89a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:36:54.744729 kubelet[2824]: E1106 00:36:54.744700 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75ddbfb7b-znt4c" podUID="30a2a173-30b9-41b2-8ef6-9137cb1fe89a" Nov 6 00:36:54.888620 systemd[1]: Started sshd@24-10.0.0.137:22-10.0.0.1:38798.service - OpenSSH per-connection server daemon (10.0.0.1:38798). 
Nov 6 00:36:54.959664 sshd[5290]: Accepted publickey for core from 10.0.0.1 port 38798 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:36:54.961211 sshd-session[5290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:36:54.965415 systemd-logind[1613]: New session 25 of user core. Nov 6 00:36:54.973755 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 6 00:36:55.122012 sshd[5293]: Connection closed by 10.0.0.1 port 38798 Nov 6 00:36:55.124624 sshd-session[5290]: pam_unix(sshd:session): session closed for user core Nov 6 00:36:55.128744 systemd[1]: sshd@24-10.0.0.137:22-10.0.0.1:38798.service: Deactivated successfully. Nov 6 00:36:55.131398 systemd[1]: session-25.scope: Deactivated successfully. Nov 6 00:36:55.132306 systemd-logind[1613]: Session 25 logged out. Waiting for processes to exit. Nov 6 00:36:55.134232 systemd-logind[1613]: Removed session 25. Nov 6 00:36:55.404759 kubelet[2824]: E1106 00:36:55.403910 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647c87d985-2h5ss" podUID="37bfba89-7ef7-48f7-8ad4-1de208225932" Nov 6 00:36:56.401547 kubelet[2824]: E1106 00:36:56.401491 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c5bfl" podUID="cb6ef055-21f2-4f63-9dca-424807e07ebf" Nov 6 00:36:58.799686 containerd[1634]: time="2025-11-06T00:36:58.799219543Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65234d042b1998c0205835bdb78a9255105b49184baa46398249ee98616d0a99\" id:\"869b813e606c8c31aab4d53dcaea582f62f89f75a67ef7f87a2b64c928d129af\" pid:5317 exited_at:{seconds:1762389418 nanos:798684209}" Nov 6 00:37:00.134674 systemd[1]: Started sshd@25-10.0.0.137:22-10.0.0.1:45122.service - OpenSSH per-connection server daemon (10.0.0.1:45122). Nov 6 00:37:00.195624 sshd[5333]: Accepted publickey for core from 10.0.0.1 port 45122 ssh2: RSA SHA256:41dAWBXiXRWpXvRSHJlgNeUH9KMYZLa9oIJLeuLaTAw Nov 6 00:37:00.197318 sshd-session[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:37:00.202191 systemd-logind[1613]: New session 26 of user core. Nov 6 00:37:00.209763 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 6 00:37:00.349570 sshd[5336]: Connection closed by 10.0.0.1 port 45122 Nov 6 00:37:00.349962 sshd-session[5333]: pam_unix(sshd:session): session closed for user core Nov 6 00:37:00.355778 systemd[1]: sshd@25-10.0.0.137:22-10.0.0.1:45122.service: Deactivated successfully. Nov 6 00:37:00.358731 systemd[1]: session-26.scope: Deactivated successfully. Nov 6 00:37:00.359770 systemd-logind[1613]: Session 26 logged out. Waiting for processes to exit. Nov 6 00:37:00.361707 systemd-logind[1613]: Removed session 26. 
Nov 6 00:37:00.402968 kubelet[2824]: E1106 00:37:00.402816 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-647cc965d5-59nsd" podUID="48c54e02-677b-4620-9d87-c389d0553835"